
Exception in thread "main" java.io.IOException: Job Failed!

How to resolve Exception in thread "main" java.io.IOException: Job Failed!

I am new to the Hadoop environment and have just tried the WordCount program, but I run into an error like this every time:

Application application_1623732814800_0004 failed 2 times due to AM Container for appattempt_1623732814800_0004_000002 exited with exitCode: 1. Failing this attempt.
Diagnostics: [2021-06-15 10:45:16.241] Exception from container-launch.
Container id: container_1623732814800_0004_02_000001
Exit code: 1
Exception message: CreateSymbolicLink error (1314): A required privilege is not held by the client.

Shell output: 1 file(s) moved. "Setting up env variables" "Setting up job resources"

[2021-06-15 10:45:16.244] Container exited with a non-zero exit code 1.
[2021-06-15 10:45:16.245] Container exited with a non-zero exit code 1.
For more detailed output, check the application tracking page: http://DESKTOP-22LEODT:8088/cluster/app/application_1623732814800_0004 Then click on links to logs of each attempt. Failing the application.
2021-06-15 10:45:17,267 INFO mapreduce.Job: Counters: 0
Exception in thread "main" java.io.IOException: Job Failed!

What I have done so far: I created a Scala class for the Hadoop word count program and built a jar file from it.

package piyush.jiwane.hadoop

import java.io.IOException
import java.util._
import scala.collection.JavaConversions._
import org.apache.hadoop.fs.Path
import org.apache.hadoop.conf._
import org.apache.hadoop.io._
import org.apache.hadoop.mapred._
import org.apache.hadoop.util._

object WordCount {
  // Mapper takes four type parameters: input key/value and output key/value
  class Map extends MapReduceBase with Mapper[LongWritable,Text,Text,IntWritable] {
    private final val one = new IntWritable(1)
    private val word = new Text()

    @throws[IOException]
    def map(key: LongWritable,value: Text,output: OutputCollector[Text,IntWritable],reporter: Reporter) {
      val line: String = value.toString
      line.split(" ").foreach { token =>
        word.set(token)
        output.collect(word,one)
      }
    }
  }

  // Reducer also takes four type parameters, and reduce() needs the OutputCollector parameter
  class Reduce extends MapReduceBase with Reducer[Text,IntWritable,Text,IntWritable] {
    @throws[IOException]
    def reduce(key: Text,values: Iterator[IntWritable],output: OutputCollector[Text,IntWritable],reporter: Reporter) {
      val sum = values.toList.reduce((valueOne,valueTwo) => new IntWritable(valueOne.get() + valueTwo.get()))
      output.collect(key,new IntWritable(sum.get()))
    }
  }

  @throws[Exception]
  def main(args: Array[String]) {
    val conf: JobConf = new JobConf(this.getClass)
    conf.setJobName("WordCountScala")
    conf.setOutputKeyClass(classOf[Text])
    conf.setOutputValueClass(classOf[IntWritable])
    conf.setMapperClass(classOf[Map])
    conf.setCombinerClass(classOf[Reduce])
    conf.setReducerClass(classOf[Reduce])
    conf.setInputFormat(classOf[TextInputFormat])
    conf.setOutputFormat(classOf[TextOutputFormat[Text,IntWritable]])
    FileInputFormat.setInputPaths(conf,new Path(args(1)))
    FileOutputFormat.setOutputPath(conf,new Path(args(2)))
    JobClient.runJob(conf)
  }
}

Input file for the Hadoop program:

11 23 45 17 23 45 88 15 24 26 85 96 44 52 10 15 55 84 58 62 78 98 84
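As a local sanity check (independent of Hadoop), the same group-and-count logic the job is meant to perform can be run on that input line in plain Scala; this is only a sketch to verify the expected counts, not part of the MapReduce job:

```scala
// The single line from the input file above
val line = "11 23 45 17 23 45 88 15 24 26 85 96 44 52 10 15 55 84 58 62 78 98 84"

// Mirror the job's logic: split on spaces, then count occurrences per token
val counts: Map[String, Int] =
  line.split(" ").toList
    .groupBy(identity)                                  // token -> all its occurrences
    .map { case (token, occurrences) => (token, occurrences.size) }

// Print the counts the finished job should produce
counts.toSeq.sortBy(_._1).foreach { case (token, n) => println(s"$token\t$n") }
```

Tokens such as 23, 45, 15, and 84 appear twice; everything else appears once.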

For reference, I followed these links:

https://blog.knoldus.com/hadoop-word-count-program-in-scala/
https://www.logicplay.club/how-to-run-hadoop-wordcount-mapreduce-example-on-windows-10/
org.apache.hadoop.mapred.FileAlreadyExistsException

Please help me understand this. Thanks in advance.

Edit: after running the program from cmd in administrator mode, I get this output:

2021-06-15 11:42:46,421 INFO mapreduce.Job:  map 0% reduce 0%
2021-06-15 11:43:01,282 INFO mapreduce.Job: Task Id : attempt_1623737359186_0001_m_000000_0, Status : FAILED
Error: java.lang.ClassNotFoundException: scala.collection.mutable.ArrayOps$ofRef
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
        at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
        at piyush.jiwane.hadoop.WordCount$Map.map(WordCount.scala:20)
        at piyush.jiwane.hadoop.WordCount$Map.map(WordCount.scala:13)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
        at java.base/java.security.AccessController.doPrivileged(Native Method)
        at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)

2021-06-15 11:43:02,430 INFO mapreduce.Job:  map 50% reduce 0%
2021-06-15 11:43:13,872 INFO mapreduce.Job: Task Id : attempt_1623737359186_0001_m_000000_1, Status : FAILED
Error: java.lang.ClassNotFoundException: scala.collection.mutable.ArrayOps$ofRef
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
        at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
        at piyush.jiwane.hadoop.WordCount$Map.map(WordCount.scala:20)
        at piyush.jiwane.hadoop.WordCount$Map.map(WordCount.scala:13)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
        at java.base/java.security.AccessController.doPrivileged(Native Method)
        at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)

2021-06-15 11:43:27,532 INFO mapreduce.Job:  map 50% reduce 17%
2021-06-15 11:43:27,562 INFO mapreduce.Job: Task Id : attempt_1623737359186_0001_m_000000_2, Status : FAILED
Error: java.lang.ClassNotFoundException: scala.collection.mutable.ArrayOps$ofRef
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
        at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
        at piyush.jiwane.hadoop.WordCount$Map.map(WordCount.scala:20)
        at piyush.jiwane.hadoop.WordCount$Map.map(WordCount.scala:13)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
        at java.base/java.security.AccessController.doPrivileged(Native Method)
        at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)

2021-06-15 11:43:28,600 INFO mapreduce.Job:  map 100% reduce 0%
2021-06-15 11:43:29,628 INFO mapreduce.Job:  map 100% reduce 100%
2021-06-15 11:43:30,669 INFO mapreduce.Job: Job job_1623737359186_0001 failed with state FAILED due to: Task failed task_1623737359186_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0 killedMaps:0 killedReduces:0

2021-06-15 11:43:30,877 INFO mapreduce.Job: Counters: 43
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=236028
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=130
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=3
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=0
                HDFS: Number of bytes read erasure-coded=0
        Job Counters
                Failed map tasks=4
                Killed reduce tasks=1
                Launched map tasks=5
                Launched reduce tasks=1
                Other local map tasks=3
                Data-local map tasks=2
                Total time spent by all maps in occupied slots (ms)=64773
                Total time spent by all reduces in occupied slots (ms)=24081
                Total time spent by all map tasks (ms)=64773
                Total time spent by all reduce tasks (ms)=24081
                Total vcore-milliseconds taken by all map tasks=64773
                Total vcore-milliseconds taken by all reduce tasks=24081
                Total megabyte-milliseconds taken by all map tasks=66327552
                Total megabyte-milliseconds taken by all reduce tasks=24658944
        Map-Reduce Framework
                Map input records=0
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=6
                Input split bytes=96
                Combine input records=0
                Combine output records=0
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=19
                CPU time spent (ms)=873
                Physical memory (bytes) snapshot=250834944
                Virtual memory (bytes) snapshot=390701056
                Total committed heap usage (bytes)=240123904
                Peak Map Physical memory (bytes)=250834944
                Peak Map Virtual memory (bytes)=390774784
        File Input Format Counters
                Bytes Read=34

Exception in thread "main" java.io.IOException: Job Failed!
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:876)
        at piyush.jiwane.hadoop.WordCount$.main(WordCount.scala:48)
        at piyush.jiwane.hadoop.WordCount.main(WordCount.scala)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:236)

Solution

The most recent error, "java.lang.ClassNotFoundException: scala.collection.mutable.ArrayOps$ofRef", comes from an incompatible Scala version: `ArrayOps$ofRef` exists in Scala 2.12.x but was removed in the 2.13 collections redesign. You are compiling with 2.13.x or later, so be sure to downgrade to 2.12.x so the class is available at runtime.
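One way to pin the compiler, assuming the jar is built with sbt (the build tool and the exact version numbers below are assumptions; any 2.12.x patch release works, and the Hadoop version should match your cluster):

```scala
// build.sbt — minimal sketch, not a complete build definition
scalaVersion := "2.12.15"  // the 2.12.x line still ships scala.collection.mutable.ArrayOps$ofRef

// Hadoop classes are supplied by the cluster at runtime, so mark them "provided"
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "3.2.2" % "provided"
```

Note that the scala-library jar matching the compile version must also be visible to the map tasks (for example, bundled into an assembly jar or placed on the task classpath); otherwise the same ClassNotFoundException appears even with a correct compiler version.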

