Wait until AWS Glue crawler has finished running

dru*_*rum 3 boto3 aws-glue

In the documentation, I cannot find any way of checking the run status of a crawler. The only way I am currently doing it is by repeatedly checking AWS to see whether the file/table has been created yet.

Is there a better way to block until the crawler finishes its run?

Acu*_*nus 9

The following function uses boto3. It starts the AWS Glue crawler and waits until its completion, logging the state as it progresses. It was tested with Python v3.8 and boto3 v1.17.3.

import logging
import time
import timeit

import boto3

log = logging.getLogger(__name__)


def run_crawler(crawler: str, *, timeout_minutes: int = 120, retry_seconds: int = 5) -> None:
    """Run the specified AWS Glue crawler, waiting until completion."""
    # Ref: https://stackoverflow.com/a/66072347/
    timeout_seconds = timeout_minutes * 60
    client = boto3.client("glue")
    start_time = timeit.default_timer()
    abort_time = start_time + timeout_seconds

    def wait_until_ready() -> None:
        state_previous = None
        while True:
            response_get = client.get_crawler(Name=crawler)
            state = response_get["Crawler"]["State"]
            if state != state_previous:
                log.info(f"Crawler {crawler} is {state.lower()}.")
                state_previous = state
            if state == "READY":  # Other known states: RUNNING, STOPPING
                return
            if timeit.default_timer() > abort_time:
                raise TimeoutError(f"Failed to crawl {crawler}. The allocated time of {timeout_minutes:,} minutes has elapsed.")
            time.sleep(retry_seconds)

    wait_until_ready()
    response_start = client.start_crawler(Name=crawler)
    assert response_start["ResponseMetadata"]["HTTPStatusCode"] == 200
    log.info(f"Crawling {crawler}.")
    wait_until_ready()
    log.info(f"Crawled {crawler}.")
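The `wait_until_ready` closure above is an instance of a generic poll-with-timeout pattern. Here is a minimal sketch of that pattern on its own, with a simulated state sequence standing in for `client.get_crawler` calls; the names `wait_for` and `crawler_is_ready` are illustrative, not part of any API:

```python
import time
import timeit


def wait_for(predicate, *, timeout_seconds: float = 10.0, retry_seconds: float = 0.1) -> None:
    """Poll `predicate` until it returns a truthy value, or raise TimeoutError."""
    abort_time = timeit.default_timer() + timeout_seconds
    while not predicate():
        if timeit.default_timer() > abort_time:
            raise TimeoutError(f"Condition not met within {timeout_seconds:,} seconds.")
        time.sleep(retry_seconds)


# Simulated crawler states, standing in for successive get_crawler responses:
states = iter(["RUNNING", "RUNNING", "STOPPING", "READY"])
seen = []


def crawler_is_ready() -> bool:
    seen.append(next(states))
    return seen[-1] == "READY"


wait_for(crawler_is_ready, timeout_seconds=5.0, retry_seconds=0.01)
```

The real function above does the same thing, except the predicate is a `get_crawler` call and the timeout defaults to two hours.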

Optional bonus: here is a function to create or update an AWS Glue crawler with some sensible defaults:

from typing import Any


def ensure_crawler(**kwargs: Any) -> None:
    """Ensure that the specified AWS Glue crawler exists with the given configuration.

    At minimum the `Name` and `Targets` keyword arguments are required.
    """
    # Use defaults
    assert all(kwargs.get(k) for k in ("Name", "Targets"))
    defaults = {
        "Role": "AWSGlueRole",
        "DatabaseName": kwargs["Name"],
        "SchemaChangePolicy": {"UpdateBehavior": "UPDATE_IN_DATABASE", "DeleteBehavior": "DELETE_FROM_DATABASE"},
        "RecrawlPolicy": {"RecrawlBehavior": "CRAWL_EVERYTHING"},
        "LineageConfiguration": {"CrawlerLineageSettings": "DISABLE"},
    }
    kwargs = {**defaults, **kwargs}

    # Ensure crawler
    client = boto3.client("glue")
    name = kwargs["Name"]
    try:
        response = client.create_crawler(**kwargs)
        log.info(f"Created crawler {name}.")
    except client.exceptions.AlreadyExistsException:
        response = client.update_crawler(**kwargs)
        log.info(f"Updated crawler {name}.")
    assert response["ResponseMetadata"]["HTTPStatusCode"] == 200
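Note that because `ensure_crawler` merges arguments with `{**defaults, **kwargs}`, the caller's keyword arguments are applied last and therefore override the defaults, while any key the caller omits falls back to the default. A small illustration of that merge (the crawler name and role below are made up):

```python
defaults = {
    "Role": "AWSGlueRole",
    "RecrawlPolicy": {"RecrawlBehavior": "CRAWL_EVERYTHING"},
}
# Hypothetical caller arguments; the explicit Role overrides the default one.
kwargs = {"Name": "sales", "Role": "MyCustomGlueRole"}

merged = {**defaults, **kwargs}
```

Here `merged["Role"]` is `"MyCustomGlueRole"`, while `merged["RecrawlPolicy"]` keeps its default value.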


j.b*_*ski 1

You can do that using boto3 (or similar). There is a get_crawler method. You will find the needed information in the "LastCrawl" section of the response:

https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/glue.html#Glue.Client.get_crawler
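For example, assuming a response of the shape documented for `get_crawler` (where `LastCrawl.Status` for a finished run is one of `SUCCEEDED`, `CANCELLED`, or `FAILED`), a small helper to read it might look like this; `last_crawl_status` is an illustrative name, not a boto3 API:

```python
from typing import Optional


def last_crawl_status(response: dict) -> Optional[str]:
    """Return the status of the crawler's most recent run, or None if it has never run."""
    return response.get("Crawler", {}).get("LastCrawl", {}).get("Status")


# A trimmed-down stand-in for a real get_crawler response:
sample_response = {
    "Crawler": {
        "Name": "my-crawler",
        "State": "READY",
        "LastCrawl": {"Status": "SUCCEEDED"},
    }
}
```

A caveat with this approach: `LastCrawl` reflects the previous run, so you still need to wait for `State` to return to `READY` (as in the accepted answer) before its status describes the run you just started.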