
Spark 2.3.2 Source Code Analysis: 4.3. Executor Startup in Yarn Cluster Mode

 

This article follows on from the previous two. If you want to know why the walkthrough starts at this point in the code, see:

 

Spark 2.3.2 Source Code Analysis: 4.1. Yarn Cluster Mode SparkSubmit Source Analysis (Part 1)

https://blog.csdn.net/zhanglong_4444/article/details/84875818

 

Spark 2.3.2 Source Code Analysis: 4.2. Yarn Cluster Mode SparkSubmit Source Analysis (Part 2): ApplicationMaster

https://blog.csdn.net/zhanglong_4444/article/details/85064735

Standalone mode architecture diagram:

yarn-client mode architecture diagram:

yarn-cluster mode architecture diagram:

Now for the main topic:

org.apache.spark.deploy.yarn.ApplicationMaster#registerAM
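Based on the Spark 2.3.2 source, registerAM registers this ApplicationMaster with the ResourceManager and obtains the YarnAllocator that will request executor containers. A simplified sketch (signatures abbreviated, logging and failure handling omitted):

```scala
// Simplified sketch of ApplicationMaster#registerAM (Spark 2.3.2).
private def registerAM(
    _sparkConf: SparkConf,
    rpcEnv: RpcEnv,
    driverRef: RpcEndpointRef,
    uiAddress: Option[String]): Unit = {
  // Register this AM with the ResourceManager via YarnRMClient and get
  // back the allocator that will request executor containers.
  allocator = client.register(driverUrl, driverRef, yarnConf, _sparkConf,
    uiAddress, historyAddress, securityMgr, localResources)

  // First allocation round: ask the RM for the initial set of containers.
  allocator.allocateResources()

  // Background reporter thread keeps calling allocateResources()
  // periodically, acting as the AM-RM heartbeat.
  reporterThread = launchReporterThread()
}
```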


org.apache.spark.deploy.yarn.YarnAllocator#allocateResources

This is the core of resource allocation:
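In the Spark 2.3.2 source, allocateResources does roughly the following (simplified, logging omitted): it syncs pending container requests with the current executor target, heartbeats the RM, then hands any granted containers off for launching.

```scala
// Simplified sketch of YarnAllocator#allocateResources (Spark 2.3.2).
def allocateResources(): Unit = synchronized {
  // Update outstanding container requests to match the target
  // number of executors.
  updateResourceRequests()

  val progressIndicator = 0.1f
  // Heartbeat to the RM: sends our requests, receives granted containers.
  val allocateResponse = amClient.allocate(progressIndicator)

  val allocatedContainers = allocateResponse.getAllocatedContainers()
  if (allocatedContainers.size > 0) {
    // Launch executors on the containers the RM just granted.
    handleAllocatedContainers(allocatedContainers.asScala)
  }

  val completedContainers = allocateResponse.getCompletedContainersStatuses()
  if (completedContainers.size > 0) {
    // Bookkeeping for containers that have exited.
    processCompletedContainers(completedContainers.asScala)
  }
}
```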

 

 

Processes the containers granted by the RM and starts executors:

org.apache.spark.deploy.yarn.YarnAllocator#handleAllocatedContainers

The main work is:

runAllocatedContainers(containersToUse)
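Before that call, handleAllocatedContainers decides which granted containers to actually use by matching them against outstanding requests, preferring host-local, then rack-local, then arbitrary placement. A simplified sketch based on the Spark 2.3.2 source:

```scala
// Simplified sketch of YarnAllocator#handleAllocatedContainers (Spark 2.3.2).
def handleAllocatedContainers(allocatedContainers: Seq[Container]): Unit = {
  val containersToUse = new ArrayBuffer[Container](allocatedContainers.size)

  // Pass 1: match containers to requests on the same host.
  val remainingAfterHostMatches = new ArrayBuffer[Container]
  for (allocatedContainer <- allocatedContainers) {
    matchContainerToRequest(allocatedContainer,
      allocatedContainer.getNodeId.getHost,
      containersToUse, remainingAfterHostMatches)
  }

  // Pass 2 and 3 (elided here): the same matching is repeated for
  // rack-local requests and then for ANY_HOST requests; containers
  // that still match nothing are released back to the RM.

  // Launch executors on the containers we decided to keep.
  runAllocatedContainers(containersToUse)
}
```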

 

 

org.apache.spark.deploy.yarn.YarnAllocator#runAllocatedContainers

Starts executors in the allocated containers.


new ExecutorRunnable(
  Some(container),
  conf,
  sparkConf,
  driverUrl,
  executorId,
  executorHostname,
  executorMemory,
  executorCores,
  appAttemptId.getApplicationId.toString,
  securityMgr,
  localResources
)
.run()
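Note that in the actual runAllocatedContainers, this call is not made inline: each ExecutorRunnable is submitted to a dedicated "ContainerLauncher" thread pool so that containers start in parallel. Roughly (simplified, error handling abbreviated):

```scala
// Simplified: each executor launch runs on the launcherPool thread pool.
if (launchContainers) {
  launcherPool.execute(new Runnable {
    override def run(): Unit = {
      try {
        new ExecutorRunnable(Some(container), conf, sparkConf, driverUrl,
          executorId, executorHostname, executorMemory, executorCores,
          appAttemptId.getApplicationId.toString, securityMgr, localResources
        ).run()
        updateInternalState()
      } catch {
        case e: Throwable =>
          // Roll back counters so the failed slot can be re-requested
          // (full error handling omitted in this sketch).
          numExecutorsStarting.decrementAndGet()
      }
    }
  })
}
```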

 

Look directly at:

org.apache.spark.deploy.yarn.ExecutorRunnable#run

 

The container is started here:

org.apache.spark.deploy.yarn.ExecutorRunnable#startContainer
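Based on the Spark 2.3.2 source, startContainer builds a YARN ContainerLaunchContext (local resources, environment, delegation tokens, and the launch command) and then asks the NodeManager to start the container. A simplified sketch:

```scala
// Simplified sketch of ExecutorRunnable#startContainer (Spark 2.3.2).
def startContainer(): java.util.Map[String, ByteBuffer] = {
  val ctx = Records.newRecord(classOf[ContainerLaunchContext])
  val env = prepareEnvironment().asJava

  ctx.setLocalResources(localResources.asJava)
  ctx.setEnvironment(env)

  // Delegation tokens so the executor can talk to secured services.
  val credentials = UserGroupInformation.getCurrentUser().getCredentials()
  val dob = new DataOutputBuffer()
  credentials.writeTokenStorageToStream(dob)
  ctx.setTokens(ByteBuffer.wrap(dob.getData()))

  // The java command line that will launch the executor JVM.
  val commands = prepareCommand()
  ctx.setCommands(commands.asJava)

  // Ask the NodeManager to actually start the container.
  nmClient.startContainer(container.get, ctx)
}
```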


 

The core is: val commands = prepareCommand()

 

which mainly happens in:

org.apache.spark.deploy.yarn.ExecutorRunnable#prepareCommand
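prepareCommand assembles the java command line that the NodeManager will execute inside the container. A simplified sketch based on the Spark 2.3.2 source (JVM-option assembly heavily abbreviated):

```scala
// Simplified sketch of ExecutorRunnable#prepareCommand (Spark 2.3.2).
private def prepareCommand(): List[String] = {
  val javaOpts = ArrayBuffer[String]()
  javaOpts += "-Xmx" + executorMemory + "m"       // heap from spark.executor.memory
  javaOpts += "-Djava.io.tmpdir=" + "{{PWD}}/tmp" // tmp dir inside the container
  // ... plus spark.executor.extraJavaOptions, -Dspark.* properties, etc. ...

  val commands =
    Seq(Environment.JAVA_HOME.$$() + "/bin/java", "-server") ++
    javaOpts ++
    Seq("org.apache.spark.executor.CoarseGrainedExecutorBackend",
      "--driver-url", masterAddress,
      "--executor-id", executorId,
      "--hostname", hostname,
      "--cores", executorCores.toString,
      "--app-id", appId) ++
    userClassPath ++
    Seq(s"1>${ApplicationConstants.LOG_DIR_EXPANSION_VAR}/stdout",
        s"2>${ApplicationConstants.LOG_DIR_EXPANSION_VAR}/stderr")

  commands.map(s => if (s == null) "null" else s).toList
}
```

The {{JAVA_HOME}} and {{PWD}} placeholders and the LOG_DIR variable are expanded by YARN on the NodeManager, which is why they appear literally in the debugged command list below.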

 

 

Taking a WordCount job as an example and debugging, the entries in commands are:

0 = "{{JAVA_HOME}}/bin/java"
1 = "-server"
2 = "-Xmx1024m"
3 = "-Djava.io.tmpdir={{PWD}}/tmp"
4 = "-Dspark.history.ui.port=18081" 
5 = "-Dspark.driver.port=45169"
6 = "-Dspark.yarn.app.container.log.dir=<LOG_DIR>"
7 = "-XX:OnOutOfMemoryError='kill %p'"
8 = "org.apache.spark.executor.CoarseGrainedExecutorBackend"
9 = "--driver-url"
10 = "spark://CoarseGrainedScheduler@bj-rack001-hadoop006:45169"
11 = "--executor-id"
12 = "<executorId>"
13 = "--hostname"
14 = "<hostname>"
15 = "--cores"
16 = "1"
17 = "--app-id"
18 = "application_1549095347526_1788"
19 = "--user-class-path"
20 = "file:$PWD/__app__.jar"
21 = "1><LOG_DIR>/stdout"
22 = "2><LOG_DIR>/stderr"

So the executor's startup class is:

org.apache.spark.executor.CoarseGrainedExecutorBackend
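CoarseGrainedExecutorBackend parses those command-line arguments in its main method, and then, in onStart, connects back to the driver and registers itself. A simplified sketch based on the Spark 2.3.2 source:

```scala
// Simplified sketch of CoarseGrainedExecutorBackend#onStart (Spark 2.3.2).
override def onStart(): Unit = {
  rpcEnv.asyncSetupEndpointRefByURI(driverUrl).flatMap { ref =>
    driver = Some(ref)
    // Tell the driver's CoarseGrainedSchedulerBackend this executor is up.
    ref.ask[Boolean](RegisterExecutor(executorId, self, hostname,
      cores, extractLogUrls))
  }(ThreadUtils.sameThread).onComplete {
    case Success(msg) =>
      // The driver replies with RegisteredExecutor (handled in receive).
    case Failure(e) =>
      exitExecutor(1, s"Cannot register with driver: $driverUrl", e)
  }(ThreadUtils.sameThread)
}

// On receiving RegisteredExecutor, the backend creates the actual Executor:
//   case RegisteredExecutor =>
//     executor = new Executor(executorId, hostname, env,
//       userClassPath, isLocal = false)
```

From this point on, the executor heartbeats to the driver and waits for LaunchTask messages.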

 

The launch parameters are also printed in the Hadoop YARN container logs:

That's it for today; more updates to follow.
