How to fix the error "User class threw exception: org.apache.spark.SparkException: Job aborted" when running a Spark job with Scala
I have a daily Spark job scheduled with dynamic executor allocation. The job runs fine on some days and fails randomly on others, with no change to the configuration. I have gone through the logs but can't find anything specific. I am using the following configuration:
/usr/hdp/2.6.0.3-8/spark2/bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 30G \
  --driver-cores 5 \
  --executor-cores 4 \
  --num-executors 30 \
  --executor-memory 10G \
  --conf spark.sql.files.ignoreCorruptFiles=true \
  --conf spark.driver.maxResultSize=0 \
  --conf spark.yarn.executor.memoryOverhead=4096 \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=30 \
  --conf spark.dynamicAllocation.maxExecutors=80
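For reference, the `--conf` settings above can also be applied programmatically when building the session. This is only a sketch, and the app name is a placeholder; note that in cluster mode the memory, core, and deploy-mode settings still have to be passed to `spark-submit`, since they are fixed before the driver JVM starts:

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: the same runtime confs as the spark-submit flags above.
// "DailyJob" is a placeholder app name.
val spark = SparkSession.builder()
  .appName("DailyJob")
  .config("spark.sql.files.ignoreCorruptFiles", "true")
  .config("spark.driver.maxResultSize", "0")
  .config("spark.yarn.executor.memoryOverhead", "4096")
  .config("spark.shuffle.service.enabled", "true")
  .config("spark.dynamicAllocation.enabled", "true")
  .config("spark.dynamicAllocation.minExecutors", "30")
  .config("spark.dynamicAllocation.maxExecutors", "80")
  .getOrCreate()
```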
This is what I could find in the logs:
20/10/01 09:06:40 INFO Executor: Executor is trying to kill task 1455.0 in stage 8.0 (TID 43397)
20/10/01 09:06:40 INFO Executor: Executor is trying to kill task 1391.0 in stage 8.0 (TID 43293)
20/10/01 09:06:40 INFO Executor: Executor is trying to kill task 1419.0 in stage 8.0 (TID 43307)
20/10/01 09:06:40 INFO Executor: Executor is trying to kill task 1440.0 in stage 8.0 (TID 43355)
20/10/01 09:06:40 INFO Executor: Executor killed task 1440.0 in stage 8.0 (TID 43355)
20/10/01 09:06:40 INFO Executor: Executor killed task 1419.0 in stage 8.0 (TID 43307)
20/10/01 09:06:40 INFO Executor: Executor killed task 1391.0 in stage 8.0 (TID 43293)
20/10/01 09:06:40 INFO Executor: Executor killed task 1455.0 in stage 8.0 (TID 43397)
20/10/01 09:06:41 INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown
20/10/01 09:06:41 INFO MemoryStore: MemoryStore cleared
20/10/01 09:06:41 INFO BlockManager: BlockManager stopped
20/10/01 09:06:41 INFO ShutdownHookManager: Shutdown hook called
End of LogType:stderr
Can anyone spot the actual cause here?