Spark with Yarn - stuck at WARN cluster.YarnScheduler: Initial job has not accepted any resources

I'm trying to learn Hadoop/Spark, and for that I bought two Raspberry Pi 4B+ boards (4-core CPU, 2 GB RAM). I followed a number of tutorials and have already run MapReduce jobs successfully, but Spark gets stuck every time at WARN cluster.YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

The command I'm running is spark-submit --class org.apache.spark.examples.SparkPi --master yarn /opt/spark/examples/jars/spark-examples*.jar 10
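
For what it's worth, the same submit with the resource requests spelled out explicitly would presumably look like the sketch below; the values are only illustrative (sized to stay under the 1536 MB I give each NodeManager), not something I have confirmed works:

# same SparkPi submit, with the container requests made explicit (values are illustrative only)
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode client \
  --driver-memory 512m \
  --executor-memory 512m \
  --executor-cores 2 \
  --num-executors 1 \
  /opt/spark/examples/jars/spark-examples*.jar 10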

When I run yarn node -list I get this:

2021-05-21 18:05:26,548 INFO client.RMProxy: Connecting to ResourceManager at raspBerrypi1/192.168.1.101:8032
Total Nodes:1
         Node-Id         Node-State Node-Http-Address   Number-of-Running-Containers
raspBerrypi2:35133          RUNNING raspBerrypi2:8042                              0
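
To see how much memory and how many vcores YARN thinks that single node actually has free, I believe the node status subcommand can be used (raspBerrypi2:35133 is the Node-Id reported above):

# show capacity and current usage of the one registered NodeManager
yarn node -status raspBerrypi2:35133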

Here is the part of the output where the job reports on my worker node until it gets stuck:

2021-05-21 17:41:17,015 INFO yarn.Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 192.168.1.102
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1621629646204
     final status: UNDEFINED
     tracking URL: http://raspBerrypi1:8088/proxy/application_1621629551274_0001/
     user: pi
2021-05-21 17:41:17,019 INFO cluster.YarnClientSchedulerBackend: Application application_1621629551274_0001 has started running.
2021-05-21 17:41:17,052 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41723.
2021-05-21 17:41:17,054 INFO netty.NettyBlockTransferService: Server created on raspBerrypi1:41723
2021-05-21 17:41:17,059 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
2021-05-21 17:41:17,146 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, raspBerrypi1, 41723, None)
2021-05-21 17:41:17,156 INFO storage.BlockManagerMasterEndpoint: Registering block manager raspBerrypi1:41723 with 117.0 MB RAM, BlockManagerId(driver, raspBerrypi1, 41723, None)
2021-05-21 17:41:17,178 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, raspBerrypi1, 41723, None)
2021-05-21 17:41:17,180 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, raspBerrypi1, 41723, None)
2021-05-21 17:41:17,844 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> raspBerrypi2, PROXY_URI_BASES -> http://raspBerrypi2:8088/proxy/application_1621629551274_0001), /proxy/application_1621629551274_0001
2021-05-21 17:41:17,884 INFO ui.JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /metrics/json.
2021-05-21 17:41:17,906 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@18f1712{/metrics/json,null,AVAILABLE,@Spark}
2021-05-21 17:41:17,972 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
2021-05-21 17:41:18,255 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM)
2021-05-21 17:41:19,157 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:38
2021-05-21 17:41:19,241 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 10 output partitions
2021-05-21 17:41:19,242 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
2021-05-21 17:41:19,244 INFO scheduler.DAGScheduler: Parents of final stage: List()
2021-05-21 17:41:19,249 INFO scheduler.DAGScheduler: Missing parents: List()
2021-05-21 17:41:19,281 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
2021-05-21 17:41:19,667 WARN util.SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes
2021-05-21 17:41:19,706 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 2.0 KB, free 117.0 MB)
2021-05-21 17:41:19,835 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1381.0 B, free 117.0 MB)
2021-05-21 17:41:19,842 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on raspBerrypi1:41723 (size: 1381.0 B, free: 117.0 MB)
2021-05-21 17:41:19,851 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1184
2021-05-21 17:41:19,922 INFO scheduler.DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34) (first 15 tasks are for partitions Vector(0,1,2,3,4,5,6,7,8,9))
2021-05-21 17:41:19,926 INFO cluster.YarnScheduler: Adding task set 0.0 with 10 tasks
2021-05-21 17:41:34,994 WARN cluster.YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

That last message repeats every 3 seconds.
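
While it keeps repeating, I suppose the application's state and resource usage can also be queried from the command line, if that helps (the application id is the one from the log above):

# report state, final status and resource usage for the stuck application
yarn application -status application_1621629551274_0001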

The Spark job web UI's Executors page only shows my master (raspBerrypi1), and the event timeline only shows the driver being added; then the reduce of the Pi job starts (but never finishes).
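
If it matters, I think the same information is available from the driver's REST API while the job hangs (client mode, so the driver and its UI are on raspBerrypi1, port 4040 by default):

# list the running application and its registered executors via the Spark monitoring API
curl http://raspBerrypi1:4040/api/v1/applications
curl http://raspBerrypi1:4040/api/v1/applications/application_1621629551274_0001/executors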

My IPs are 192.168.1.101 (hostname raspBerrypi1) and 192.168.1.102 (hostname raspBerrypi2), as suggested by the tutorial I followed.

I have changed my configuration files many times, increasing and decreasing values, but with no luck. They currently are:

spark-defaults.sh (on the master only)

spark.master            yarn
spark.driver.memory     512m
spark.yarn.am.memory        512m
spark.executor.memory       512m
spark.executor.cores        2
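
If I understand the Spark-on-YARN defaults correctly, each container is requested as its memory setting plus an overhead of max(384 MB, 10% of the setting), rounded up to a multiple of yarn.scheduler.minimum-allocation-mb, so my values would roughly translate to:

executor container: 512m + max(384m, 0.10 * 512m) = 896m
AM container:       512m + max(384m, 0.10 * 512m) = 896m

Together that would be 1792 MB against the 1536 MB I give the single registered NodeManager in yarn-site.xml below, but I am not sure whether my reading of the overhead rule is correct.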

yarn-site.xml (on both machines)

<configuration>
        <property>
                <name>yarn.acl.enable</name>
                <value>0</value>
        </property>
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>raspBerrypi1</value>
        </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-vcores</name>
        <value>1</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>4</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>0.0.0.0:8088</value>
    </property>
        <property>
                <name>yarn.nodemanager.resource.memory-mb</name>
                <value>1536</value>
        </property>
        <property>
                <name>yarn.scheduler.maximum-allocation-mb</name>
                <value>1536</value>
        </property>
        <property>
                <name>yarn.scheduler.minimum-allocation-mb</name>
                <value>128</value>
        </property>
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
        <property>
                <name>yarn.nodemanager.vmem-check-enabled</name>
                <value>false</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.nodemanager.auxservices.mapreduce.shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
</configuration>
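
To double-check what the ResourceManager actually ends up with from this file, I believe the cluster metrics can be pulled from its REST API on the webapp address configured above:

# total/available memory and vcores as the scheduler sees them
curl http://raspBerrypi1:8088/ws/v1/cluster/metrics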

mapred-site.xml (on both machines)

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
    <property>
            <name>yarn.app.mapreduce.am.env</name>
            <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
    <property>
            <name>mapreduce.map.env</name>
            <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
    <property>
            <name>mapreduce.reduce.env</name>
            <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
        <property>
                <name>yarn.app.mapreduce.am.resource.mb</name>
                <value>512</value>
        </property>
        <property>
                <name>mapreduce.map.memory.mb</name>
                <value>256</value>
        </property>
        <property>
                <name>mapreduce.reduce.memory.mb</name>
                <value>256</value>
        </property>
</configuration>

.bashrc (on both machines)

export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export SPARK_HOME=/opt/spark
export PATH=$PATH:$SPARK_HOME/bin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native:$LD_LIBRARY_PATH
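
A quick sanity check of these exports on both Pis might look like this (just echoing the variables and confirming yarn-site.xml is where HADOOP_CONF_DIR points):

# confirm the variables resolve and that the YARN config Spark will read is present
echo "JAVA_HOME=$JAVA_HOME"
echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
ls "$HADOOP_CONF_DIR/yarn-site.xml"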

I have been stuck on this for over a week and would really appreciate any help. Thank you.
