Apache Airflow: Executor reports task instance finished (failed) although the task says it's queued

ali*_*uya 12 executor airflow

Our Airflow installation uses the CeleryExecutor. The concurrency settings are:

# The amount of parallelism as a setting to the executor. This defines
# the max number of task instances that should run simultaneously
# on this airflow installation
parallelism = 16

# The number of task instances allowed to run concurrently by the scheduler
dag_concurrency = 16

# Are DAGs paused by default at creation
dags_are_paused_at_creation = True

# When not using pools, tasks are run in the "default pool",
# whose size is guided by this config element
non_pooled_task_slot_count = 64

# The maximum number of active DAG runs per DAG
max_active_runs_per_dag = 16

[celery]
# This section only applies if you are using the CeleryExecutor in
# [core] section above

# The app name that will be used by celery
celery_app_name = airflow.executors.celery_executor

# The concurrency that will be used when starting workers with the
# "airflow worker" command. This defines the number of task instances that
# a worker will take, so size up your workers based on the resources on
# your worker box and the nature of your tasks
celeryd_concurrency = 16


We have a DAG that runs daily. It runs several tasks in parallel, each following the same pattern: check whether the data exists in HDFS, then sleep for 10 minutes, and finally upload it to S3.
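
For context, the pattern looks roughly like the sketch below (the dag id matches the log further down, but the callables and the list of sources are placeholders, not our actual code):

# Rough sketch of the daily DAG pattern described above (Airflow 1.x style).
from datetime import datetime, timedelta
import time

from airflow import DAG
from airflow.operators.python_operator import PythonOperator


def check_hdfs():
    # placeholder: poll HDFS until the expected data exists
    pass


def wait_ten_minutes():
    time.sleep(600)


def upload_to_s3():
    # placeholder: copy the data to S3
    pass


default_args = {"retries": 3, "retry_delay": timedelta(minutes=10)}

with DAG(dag_id="example_dag",
         start_date=datetime(2019, 5, 1),
         schedule_interval="@daily",
         default_args=default_args) as dag:
    # several independent chains run in parallel, all following the same pattern
    for source in ("source_a", "source_b", "source_c"):
        check = PythonOperator(task_id="check_hdfs_%s" % source, python_callable=check_hdfs)
        wait = PythonOperator(task_id="sleep_%s" % source, python_callable=wait_ten_minutes)
        upload = PythonOperator(task_id="upload_s3_%s" % source, python_callable=upload_to_s3)
        check >> wait >> upload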

Some of these tasks hit the following error:

2019-05-12 00:00:46,212 ERROR - Executor reports task instance <TaskInstance: example_dag.task1 2019-05-11 04:00:00+00:00 [queued]> finished (failed) although the task says its queued. Was the task killed externally?
2019-05-12 00:00:46,558 INFO - Marking task as UP_FOR_RETRY
2019-05-12 00:00:46,561 WARNING - section/key [smtp/smtp_user] not found in config

This error occurs randomly across these tasks. When it happens, the task instance's state is immediately set to up_for_retry and there are no logs on the worker nodes. After a few retries they eventually execute and finish.

This issue sometimes causes significant delays in our ETL. Does anyone know how to fix it?

Dee*_*Ram 9

We ran into a similar problem and resolved it with the "-x, --donot_pickle" option.

For more information: https://airflow.apache.org/cli.html#backfill
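
For example, a backfill invocation with that flag looks like this (the dag id and date range are illustrative, not from the question):

# -x is shorthand for --donot_pickle: do not pickle the DAG and ship it to the workers,
# let each worker run its own copy of the DAG file instead
airflow backfill -x -s 2019-05-01 -e 2019-05-11 example_dag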

  • Why do you think this fixed the problem? I'm trying to understand the issue better, since we are experiencing it too. (2 upvotes)

ali*_*uya 3

We have resolved this issue. Let me answer my own question:

We have 5 Airflow worker nodes. After installing Flower we could monitor how tasks were distributed across these nodes, and we found that the failing tasks were always sent to one specific node. We tried running those tasks on the other nodes with the airflow test command, and they worked. In the end, the cause was a broken Python package on that particular node.
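
For anyone hitting the same thing, the debugging steps looked roughly like this (dag id, task id and execution date are illustrative):

# start Flower to see which Celery worker each queued task is routed to (port 5555 by default)
airflow flower

# on each worker host, run the suspect task directly, bypassing the scheduler and Celery
airflow test example_dag task1 2019-05-11

# compare the installed Python packages on the failing node against a healthy node
pip freeze | sort > packages_$(hostname).txt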