How to access command-line arguments passed to a DAG definition in Airflow
I am trying to access parameters passed to a DAG via the REST API, as shown in the DAG definition below. I pass config_path and s3_bucket as parameters in the REST API call and want to capture them in a custom SparkLivyOperator, which reads all the parameters and launches a Spark job on EMR. I tried reading these parameters as shown below, but I am not getting any values.
Below is my curl command:
curl -X POST \
http://localhost:8080/api/experimental/dags/spark_launcher/dag_runs \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' \
-d '{"conf":"{\"s3_bucket\":\"--s3_bucket s3://test_bucket/\",\"config_path\":\"--config_path this_is_conf\"}"}'
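Note that the experimental API expects "conf" as a JSON-encoded string inside the JSON body, which is why the payload in the curl command above is double-encoded. A small sketch of building that same payload in Python (the values are the ones from the curl command):

```python
import json

# The inner conf dict that the DAG run will later see as dag_run.conf
conf = {
    "s3_bucket": "--s3_bucket s3://test_bucket/",
    "config_path": "--config_path this_is_conf",
}

# The experimental endpoint takes "conf" as a JSON string, so encode twice
payload = json.dumps({"conf": json.dumps(conf)})
print(payload)
```

Decoding the "conf" value on the receiving side recovers the original dict.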
This is how I try to read them in the operator:
config_path='{{ dag_run.conf["config_path"] }}', s3_bucket='{{ dag_run.conf["s3_bucket"] }}'
Below is my DAG definition:
import os
from datetime import datetime

from airflow import DAG

from EmrManagerOperator import EmrManagerOperator
from SparkLivyOperator import SparkLivyOperator  # missing in the original snippet; module path assumed

DEFAULT_ARGS = {
    'owner': 'hadoop',
    'depends_on_past': False,
    'start_date': datetime(2021, 8, 1, 0),
    'email_on_failure': True,
    'email_on_retry': False,
    'schedule_interval': None,
    'retries': 2
}

DOCKER_RELEASE_VERSION = '0.1.0-38'  # this python3 emr-5.23.0 release
STAGE = os.environ['STAGE']
ENV = STAGE.lower()
REGION = 'na'
COUNTRIES = 'US'
DAG_UUID = 'TestDag'

# EMR EC2 instance related variables
EMR_RELEASE_VERSION = 'emr-5.29.0'
EC2_INSTANCE_TYPE = 'r4.4xlarge'
EC2_INSTANCE_COUNT = '3'
EC2_INSTANCE_VOLUME_SIZE = '500'
DOCKER_IMAGE = '833176741232.dkr.ecr.us-east-1.amazonaws.com/emr-manager:' + DOCKER_RELEASE_VERSION
BOOTSTRAP_SCRIPT = 's3://bucket/scripts/install-basic-python-aws-cli-libs.sh'


def get_emr_id(context):
    """
    Get EMR cluster ID from context
    :param context: context of instance of task
    :return: emr_id: str cluster_id of emr cluster
    """
    emr_info = context['task_instance'].xcom_pull(task_ids='emr-create-{dag_id}'.format(dag_id=DAG_UUID))
    print(emr_info)
    return emr_info["cluster_id"]


def get_emr_dns(context):
    """
    Function to get emr dns
    :param context: airflow context
    :return: str: emr dns
    """
    emr_info = context['task_instance'].xcom_pull(task_ids='emr-create-{dag_id}'.format(dag_id=DAG_UUID))
    print(emr_info)
    return emr_info["emr_master_dns"]


with DAG(dag_id=DAG_UUID, default_args=DEFAULT_ARGS, schedule_interval=None, max_active_runs=10) as dag:
    emr_manager_create_task = EmrManagerOperator(
        dag=dag,
        job_name='emr-create-{dag_id}'.format(dag_id=DAG_UUID),
        region=REGION,
        image=DOCKER_IMAGE,
        emr_action='create_emr',
        emr_instance_profile="EMR-InstanceRole",
        emr_cluster_name="emr-on-demand-cluster-na",
        emr_release_label=EMR_RELEASE_VERSION,
        emr_node_instance_type=EC2_INSTANCE_TYPE,
        emr_master_instance_type=EC2_INSTANCE_TYPE,
        emr_bootstrap_script_path=BOOTSTRAP_SCRIPT,
        emr_node_volume_size=EC2_INSTANCE_VOLUME_SIZE,
        emr_node_on_demand_count=EC2_INSTANCE_COUNT,
        project='di2-etl',
        env=ENV)

    segment_release_task = SparkLivyOperator(
        dag=dag,
        jobName=DAG_UUID,
        task_id='livy_operator_task.' + DAG_UUID,
        get_emr_dns=get_emr_dns,
        env=ENV,
        stage=STAGE,
        config_path='{{ dag_run.conf["config_path"] }}',
        s3_bucket='{{ dag_run.conf["s3_bucket"] }}'
    )

    emr_manager_delete_task = EmrManagerOperator(
        dag=dag,
        job_name='emr-delete-{dag_id}'.format(dag_id=DAG_UUID),
        emr_action='delete_emr',
        get_emr_id=get_emr_id,
        trigger_rule="all_done",
        env=ENV
    )

    emr_manager_create_task >> segment_release_task >> emr_manager_delete_task
Solution
I believe you have not marked the arguments as "templated". When you define a custom operator, you can add the fields to the template_fields class attribute:
template_fields = ['s3_bucket', 'config_path']
Jinja templating is applied only to the fields listed in template_fields.
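As a rough illustration of the mechanism (plain Python, not the Airflow API: MiniOperator and its render_templates method are hypothetical stand-ins for BaseOperator's templating step, and the regex is a tiny substitute for Jinja):

```python
import re

class MiniOperator:
    # Only fields listed here get rendered before execute(), as in Airflow
    template_fields = ['s3_bucket', 'config_path']

    def __init__(self, s3_bucket, config_path, job_name):
        self.s3_bucket = s3_bucket
        self.config_path = config_path
        self.job_name = job_name  # not in template_fields: left untouched

    def render_templates(self, conf):
        # Tiny stand-in for Jinja: resolves {{ dag_run.conf["key"] }} from conf
        pattern = re.compile(r'\{\{\s*dag_run\.conf\["(\w+)"\]\s*\}\}')
        for field in self.template_fields:
            raw = getattr(self, field)
            setattr(self, field, pattern.sub(lambda m: conf.get(m.group(1), ''), raw))

op = MiniOperator(
    s3_bucket='{{ dag_run.conf["s3_bucket"] }}',
    config_path='{{ dag_run.conf["config_path"] }}',
    job_name='demo',
)
op.render_templates({"s3_bucket": "s3://test_bucket/", "config_path": "this_is_conf"})
print(op.s3_bucket)    # s3://test_bucket/
print(op.config_path)  # this_is_conf
```

In the real DAG, the equivalent fix is to declare template_fields = ['s3_bucket', 'config_path'] on the SparkLivyOperator class itself; Airflow then renders those two constructor arguments against dag_run.conf before execute() runs.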