
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/catalyst/StructFilters in a Spark Scala application in IntelliJ

How do I resolve Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/catalyst/StructFilters in my Spark Scala application in IntelliJ?

My pom.xml:

 <dependencies>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>${scala.version}</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.4</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.specs</groupId>
      <artifactId>specs</artifactId>
      <version>1.2.5</version>
      <scope>test</scope>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.12</artifactId>
      <version>3.0.1</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.12</artifactId>
      <version>3.0.1</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/com.databricks/spark-xml -->
    <dependency>
      <groupId>com.databricks</groupId>
      <artifactId>spark-xml_2.12</artifactId>
      <version>0.10.0</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/mysql/mysql-connector-java -->
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.29</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk -->
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk</artifactId>
      <version>1.11.985</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-s3 -->
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-s3</artifactId>
      <version>1.11.985</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-core -->
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-core</artifactId>
      <version>1.11.985</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-dynamodb -->
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-dynamodb</artifactId>
      <version>1.11.985</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-cloudwatch -->
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-cloudwatch</artifactId>
      <version>1.11.985</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-kinesis -->
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-kinesis</artifactId>
      <version>1.11.985</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-avro -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-avro_2.12</artifactId>
      <version>3.1.1</version>
    </dependency>
  </dependencies>

I want to read an Avro file:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SparkSession
import scala.io.Source

val conf = new SparkConf().setAppName("Nightmare").setMaster("local")
val sc = new SparkContext(conf)
sc.setLogLevel("ERROR")
val spark = SparkSession.builder().getOrCreate()
import spark.implicits._
// Step 1-2: read the Avro file
println("Step 1-2")
val df1 = spark.read
  .format("com.databricks.spark.avro")
  .option("multiline", "true")
  .load("file:///D:/bigdata_tasks/nightmare.avro")
// Step 3-4: hit the URL and convert the response to a DataFrame -- https://randomuser.me/api/0.8/?results=1000 - df2
println("Step 3-5")
val html = Source.fromURL("https://randomuser.me/api/0.8/?results=1000")
val rdddata = html.mkString
// convert the string to an RDD
val paralleldata = sc.parallelize(List(rdddata))
val df2 = spark.read.json(paralleldata)
df2.printSchema()
df2.show()

After running, I get this exception:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/catalyst/StructFilters

I also tried the following code:

val df1 = spark.read
  .format("avro")
  .option("multiline", "true")
  .load("file:///D:/bigdata_tasks/nightmare.avro")

But I still get the same exception. My Spark version is 2.12. Should I upgrade Spark?

Solution

Most likely this is caused by mixing Spark versions: your Avro library comes from Spark 3.1.1, while Spark core is 3.0.1, so spark-avro references the catalyst class StructFilters, which is not present in the Spark 3.0.1 jars. (It is usually better to declare the version as a Maven property, so a single version applies to all Spark components.) Also, remove unnecessary dependencies such as spark-xml and the AWS SDKs.

Also check the Scala version.
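The version-as-a-property advice can be sketched as a pom fragment (a minimal illustration, not your full pom; the `3.0.1` value and the `spark.version`/`scala.binary.version` property names are my assumptions):

```xml
<properties>
  <!-- Single source of truth for the Spark and Scala versions -->
  <spark.version>3.0.1</spark.version>
  <scala.binary.version>2.12</scala.binary.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_${scala.binary.version}</artifactId>
    <version>${spark.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_${scala.binary.version}</artifactId>
    <version>${spark.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-avro_${scala.binary.version}</artifactId>
    <version>${spark.version}</version>
  </dependency>
</dependencies>
```

With this layout, every Spark artifact resolves to the same version, so upgrading later means changing one property instead of hunting through each dependency.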
