DataFrame methods count() and show() not working after a left semi join (Spark/Scala)

I am trying to implement an NLP pipeline with Spark/Scala.

I am currently stuck on subtracting one collection (implemented as a DataFrame) from another. Items in both collections carry an ID, but the number of attributes associated with that ID differs between the two collections.

Example:

Entry in collection A: "_id" -> "someUniqueID", "attribute1" -> "someValue"

Entry in collection B: "_id" -> "someUniqueID", "attribute1" -> "someValue", "attribute2" -> "someValue"

I tried to achieve this with:

collection_A.join(collection_B, Seq("_id"), "left_semi")
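
For reference, a minimal, self-contained sketch of the setup (the local SparkSession and the toy data are stand-ins for my actual pipeline, using the column names from the example above):

import org.apache.spark.sql.SparkSession

object SemiJoinExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("semi-join-example")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Toy stand-ins for the two collections described above.
    val collection_A = Seq(
      ("id1", "someValue"),
      ("id2", "someValue")
    ).toDF("_id", "attribute1")

    val collection_B = Seq(
      ("id1", "someValue", "someValue")
    ).toDF("_id", "attribute1", "attribute2")

    // left_semi keeps only the rows of collection_A whose _id also
    // appears in collection_B; no columns of collection_B are added.
    val joined = collection_A.join(collection_B, Seq("_id"), "left_semi")

    joined.printSchema() // works
    joined.show()        // this is where the error below is thrown in my setup
  }
}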

After doing this, I can no longer use methods like

.show()
.count()

However,

.printSchema()

works, and the resulting schema is the desired one.
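
To illustrate the symptom (assuming joined holds the join result from the sketch above): judging from the stack trace, show() and count() fail while the optimized plan is being built (inside the optimizer rule InferFiltersFromConstraints), a step that printSchema() never triggers:

joined.printSchema() // works: only needs the analyzed plan
joined.count()       // fails: forces optimization and execution
joined.show()        // fails the same way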

Calling either of the two methods mentioned above produces the error log listed below:

Exception in thread "main" java.lang.AbstractMethodError
 at scala.collection.TraversableLike$class.filter(TraversableLike.scala:270)
 at org.apache.spark.sql.catalyst.expressions.ExpressionSet.filter(ExpressionSet.scala:55)
 at org.apache.spark.sql.catalyst.plans.logical.QueryPlanConstraints$class.constraints(QueryPlanConstraints.scala:36)
 at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.constraints$lzycompute(LogicalPlan.scala:29)
 at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.constraints(LogicalPlan.scala:29)
 at org.apache.spark.sql.catalyst.optimizer.InferFiltersFromConstraints$.org$apache$spark$sql$catalyst$optimizer$InferFiltersFromConstraints$$getAllConstraints(Optimizer.scala:805)
 at org.apache.spark.sql.catalyst.optimizer.InferFiltersFromConstraints$$anonfun$inferFilters$1.applyOrElse(Optimizer.scala:780)
 at org.apache.spark.sql.catalyst.optimizer.InferFiltersFromConstraints$$anonfun$inferFilters$1.applyOrElse(Optimizer.scala:765)
 at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:258)
 at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:258)
 at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
 at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:257)
 at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
 at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.transformDown(AnalysisHelper.scala:149)
 at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
 at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
 at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:263)
 at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:263)
 at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:328)
 at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:186)
 at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:326)
 at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:263)
 at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
 at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.transformDown(AnalysisHelper.scala:149)
 at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
 at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
 at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:263)
 at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:263)
 at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:328)
 at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:186)
 at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:326)
 at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:263)
 at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
 at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.transformDown(AnalysisHelper.scala:149)
 at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
 at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
 at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:263)
 at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:263)
 at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:328)
 at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:186)
 at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:326)
 at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:263)
 at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
 at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.transformDown(AnalysisHelper.scala:149)
 at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
 at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
 at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:247)
 at org.apache.spark.sql.catalyst.optimizer.InferFiltersFromConstraints$.inferFilters(Optimizer.scala:765)
 at org.apache.spark.sql.catalyst.optimizer.InferFiltersFromConstraints$.apply(Optimizer.scala:759)
 at org.apache.spark.sql.catalyst.optimizer.InferFiltersFromConstraints$.apply(Optimizer.scala:754)
 at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
 at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
 at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
 at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
 at scala.collection.mutable.WrappedArray.foldLeft(WrappedArray.scala:35)
 at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
 at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
 at scala.collection.immutable.List.foreach(List.scala:381)
 at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
 at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:67)
 at org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:67)
 at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:73)
 at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:69)
 at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:78)
 at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:78)
 at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3365)
 at org.apache.spark.sql.Dataset.head(Dataset.scala:2550)
 at org.apache.spark.sql.Dataset.take(Dataset.scala:2764)
 at org.apache.spark.sql.Dataset.getRows(Dataset.scala:254)
 at org.apache.spark.sql.Dataset.showString(Dataset.scala:291)
 at org.apache.spark.sql.Dataset.show(Dataset.scala:751)
 at org.apache.spark.sql.Dataset.show(Dataset.scala:710)
 at org.apache.spark.sql.Dataset.show(Dataset.scala:719)
 at App$.main(App.scala:49)
 at App.main(App.scala)

I would greatly appreciate any help or hints regarding this error.
