How to fix the Airflow 2 example SubDag getting stuck under the SequentialExecutor
When I run the Airflow SubDag example, the first SubDag gets stuck in the running state. The task instances inside that SubDag stay in no_status and the DAG as a whole makes no further progress. I assume something must be wrong with my configuration, but I can't figure out what.
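For reference, the DAG file I'm running is essentially the stock SubDag example from the Airflow docs; a minimal sketch of it (hypothetical IDs, Airflow 2.0.x imports assumed) looks like this:

```python
# Minimal SubDag example, patterned on the stock Airflow 2 example DAG.
# All dag_id / task_id names here are illustrative, not my real ones.
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.operators.subdag import SubDagOperator

DAG_ID = "example_subdag"


def subdag(parent_dag_id, child_dag_id, args):
    """Return a child DAG containing a few independent dummy tasks."""
    child = DAG(
        dag_id=f"{parent_dag_id}.{child_dag_id}",
        default_args=args,
        start_date=datetime(2021, 1, 1),
        schedule_interval="@once",
    )
    for i in range(3):
        DummyOperator(task_id=f"{child_dag_id}-task-{i}", dag=child)
    return child


default_args = {"owner": "airflow"}

with DAG(
    dag_id=DAG_ID,
    default_args=default_args,
    start_date=datetime(2021, 1, 1),
    schedule_interval="@once",
) as dag:
    start = DummyOperator(task_id="start")
    section_1 = SubDagOperator(
        task_id="section-1",
        subdag=subdag(DAG_ID, "section-1", default_args),
    )
    end = DummyOperator(task_id="end")
    start >> section_1 >> end
```

It is the `section-1` SubDagOperator task that ends up stuck running while the child tasks never leave no_status.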
[core]
dags_folder = /home/airflow/dags
hostname_callable = socket.getfqdn
default_timezone = system
executor = SequentialExecutor
sql_alchemy_conn = mysql+mysqldb://****************@localhost:3306/airflow?charset=utf8mb4
sql_engine_encoding = utf-8
sql_alchemy_pool_enabled = True
sql_alchemy_pool_size = 5
sql_alchemy_max_overflow = 10
sql_alchemy_pool_recycle = 3600
sql_alchemy_pool_pre_ping = True
sql_alchemy_schema =
parallelism = 32
dag_concurrency = 16
dags_are_paused_at_creation = True
max_active_runs_per_dag = 16
load_examples = False
load_default_connections = False
plugins_folder = /home/airflow/plugins
execute_tasks_new_python_interpreter = True
fernet_key = ****************************
donot_pickle = False
dagbag_import_timeout = 30
dagbag_import_error_tracebacks = True
dagbag_import_error_traceback_depth = 2
dag_file_processor_timeout = 50
task_runner = StandardTaskRunner
default_impersonation =
security =
unit_test_mode = False
enable_xcom_pickling = True
killed_task_cleanup_time = 60
dag_run_conf_overrides_params = True
dag_discovery_safe_mode = True
default_task_retries = 1
min_serialized_dag_update_interval = 30
min_serialized_dag_fetch_interval = 10
max_num_rendered_ti_fields_per_task = 30
check_slas = False
xcom_backend = airflow.models.xcom.BaseXCom
lazy_load_plugins = True
lazy_discover_providers = True
max_db_retries = 3
remote_log_conn_id =
encrypt_s3_logs = False
non_pooled_task_slot_count = 128
[scheduler]
job_heartbeat_sec = 5
clean_tis_without_dagrun_interval = 15.0
scheduler_heartbeat_sec = 5
num_runs = -1
processor_poll_interval = 1
min_file_process_interval = 30
dag_dir_list_interval = 300
print_stats_interval = 300
pool_metrics_interval = 5.0
scheduler_health_check_threshold = 30
orphaned_tasks_check_interval = 300.0
child_process_log_directory = /home/airflow/logs/scheduler
scheduler_zombie_task_threshold = 300
catchup_by_default = False
max_tis_per_query = 0
use_row_level_locking = True
parsing_processes = 2
use_job_schedule = True
allow_trigger_in_future = False
run_duration = -1
authenticate = False
max_dagruns_to_create_per_loop = 100
max_dagruns_per_loop_to_schedule = 100
schedule_after_task_execution = True
These are the [core] and [scheduler] sections of my configuration. If anything else is needed to diagnose this, I'm happy to provide it. The problem appeared after a somewhat rough migration from Airflow 1.10.0 to Airflow 2.0.2.