Apache Airflow: Executor reports task instance finished (failed) although the task says its queued
Our Airflow installation uses the CeleryExecutor.
The concurrency settings are:
# The amount of parallelism as a setting to the executor. This defines
# the max number of task instances that should run simultaneously
# on this airflow installation
parallelism = 16
# The number of task instances allowed to run concurrently by the scheduler
dag_concurrency = 16
# Are DAGs paused by default at creation
dags_are_paused_at_creation = True
# When not using pools, tasks are run in the "default pool",
# whose size is guided by this config element
non_pooled_task_slot_count = 64
# The maximum number of active DAG runs per DAG
max_active_runs_per_dag = 16
[celery]
# This section only applies if you are using the CeleryExecutor in
# [core] section above
# The app name that will be used by celery
celery_app_name = airflow.executors.celery_executor
# The concurrency that will be used when starting workers with the
# "airflow worker" command. This defines the number of task instances that
# a worker will take, so size up your workers based on the resources on
# your worker box and the nature of your tasks
celeryd_concurrency = 16
We have a DAG that runs daily. It contains several parallel tasks that follow the same pattern: sense whether the data exists in HDFS, then sleep for 10 minutes, and finally upload it to S3.
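A minimal sketch of that pattern is below (a hypothetical reconstruction, assuming Airflow 1.x; the dag_id, HDFS paths and operator choices are placeholders, not our actual code):

import time
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.sensors.hdfs_sensor import HdfsSensor

dag = DAG(
    "example_dag",
    default_args={"owner": "airflow", "retries": 3, "retry_delay": timedelta(minutes=5)},
    start_date=datetime(2019, 5, 1),
    schedule_interval="@daily",
)

def upload_to_s3(**context):
    pass  # placeholder: copy the sensed HDFS data to S3, e.g. via S3Hook

for i in range(3):  # several parallel branches follow the same pattern
    sense = HdfsSensor(
        task_id="sense_hdfs_%d" % i,
        filepath="/data/partition_%d" % i,  # hypothetical HDFS path
        dag=dag,
    )
    wait = PythonOperator(
        task_id="sleep_10m_%d" % i,
        python_callable=lambda: time.sleep(600),  # the 10-minute pause
        dag=dag,
    )
    upload = PythonOperator(
        task_id="upload_s3_%d" % i,
        python_callable=upload_to_s3,
        provide_context=True,
        dag=dag,
    )
    sense >> wait >> upload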
Some of those tasks hit the following error:
2019-05-12 00:00:46,212 ERROR - Executor reports task instance <TaskInstance: example_dag.task1 2019-05-11 04:00:00+00:00 [queued]> finished (failed) although the task says its queued. Was the task killed externally?
2019-05-12 00:00:46,558 INFO - Marking task as UP_FOR_RETRY
2019-05-12 00:00:46,561 WARNING - section/key [smtp/smtp_user] not found in config
The error shows up randomly on those tasks. When it happens, the task instance's state is immediately set to up_for_retry and there are no logs on the worker nodes. After a few retries, they eventually execute and finish.
This problem sometimes causes big ETL delays for us. Does anyone know how to fix it?
I was seeing very similar symptoms in my DagRuns. I thought it was due to the ExternalTaskSensor and concurrency issues, given the queuing-and-killed-task language that looked like this: Executor reports task instance <TaskInstance: dag1.data_table_temp_redshift_load 2019-05-20 08:00:00+00:00 [queued]> finished (failed) although the task says its queued. Was the task killed externally?
But when I looked at the worker logs, I saw there was an error caused by setting a variable with Variable.set in my DAG. The issue is described here: the scheduler polls the dagbag routinely to refresh any changes dynamically, so that error on every heartbeat was causing significant ETL delays.
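For illustration, a simplified sketch of the difference (hypothetical variable name): the first form runs on every scheduler parse of the DAG file, while the second only runs when the task itself executes:

from airflow.models import Variable

# Anti-pattern: module-level call, executed every time the scheduler parses this DAG file
Variable.set("last_load_date", "2019-05-20")

# Safer: call it inside a task callable, so it only runs when the task executes
def set_last_load_date(**context):
    Variable.set("last_load_date", context["ds"])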
Are you executing any logic in your wh_hdfs_to_s3 DAG (or others) that might cause errors or delays / these symptoms?
We faced a similar problem, which was resolved by the "-x, --donot_pickle" option.
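For reference, and assuming Airflow 1.x (please check the CLI help of your version), -x / --donot_pickle is a flag of the backfill command; airflow.cfg also exposes a related [core] setting:

[core]
# Whether to disable pickling dags
donot_pickle = True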
We have figured this out; let me answer my own question.
We have 5 Airflow worker nodes. After installing Flower to monitor how tasks are distributed to these nodes, we found that the failed tasks were always sent to one specific node. We tried running those tasks on the other nodes with the airflow test command and they worked. In the end, the cause was a faulty Python package on that particular node.