How to fix: PySpark - joining two RDDs fails with "ValueError: too many values to unpack"
I have two (very simple) files in HDFS:
test:
1,Team1
2,Team2
3,Team3
test2:
11,Player1,Team1
22,Team2
32,Team3
I want to join them (on the "Team*" column) to get the following output:
Team1,1,11,Player1
Team3,3,32,Player1
To do that, I'm using the following code:
test = sc.textFile("/user/cloudera/Tests/test")
test_filter = test.filter(lambda a: a.split(",")[1].upper() == "TEAM1" or a.split(",")[1].upper() == "TEAM2")
test_map = test_filter.map(lambda a: a.upper())
test_map = test_map.map(lambda a: (a.split(",")[1]))
for i in test_map.collect(): print(i)
test2=sc.textFile("/user/cloudera/Tests/test2")
test2_map = test2.map(lambda a: a.upper())
test2_map = test2_map.map(lambda a: (a.split(",")[2],a.split(",")[1]))
for i in test2_map.collect(): print(i)
test_join = test_map.join(test2_map)
for i in test_join.collect(): print(i)
But when I try to view the joined RDD, I get the following error:
File "/usr/lib/spark/python/pyspark/rdd.py",line 1807,in <lambda>
map_values_fn = lambda (k,v): (k,f(v))
ValueError: too many values to unpack
at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:135)
at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:176)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:94)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
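The root cause is that join only works on pair RDDs, i.e. RDDs whose elements are (key, value) tuples. Here test_map contains bare strings such as "TEAM1", so when Spark's internal lambda tries to unpack each element as (k, v), the unpack fails. The failure can be reproduced in plain Python without Spark (a minimal sketch of the unpacking error only):

```python
# RDD.join expects each element to be a (key, value) pair.
# test_map, however, holds bare strings like "TEAM1". Unpacking a
# 5-character string into two names fails the same way the internal
# lambda (k, v): (k, f(v)) does inside Spark:
element = "TEAM1"
try:
    k, v = element  # iterates the string: 5 characters, only 2 targets
except ValueError as e:
    error_message = str(e)

print(error_message)  # too many values to unpack (expected 2)

# With a proper (key, value) tuple the same unpacking succeeds:
pair = ("TEAM1", "1")
k, v = pair
print(k, v)  # TEAM1 1
```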
What am I doing wrong?
Thanks!
Solution
Could you show the result sets of the following two statements:
for i in test_map.collect(): print(i)
and
for i in test2_map.collect(): print(i)
Also, could you try the following?
test = sc.textFile("/user/cloudera/Tests/test")
test_map = test.map(lambda a:a.upper())
test_map = test_map.map(lambda a: (a.split(",")[1],a.split(",")[0]))
for i in test_map.collect(): print(i)
test2=sc.textFile("/user/cloudera/Tests/test2")
test2_map = test2.map(lambda a: a.upper())
test2_map = test2_map.map(lambda a: (a.split(",")[2],a.split(",")[1]))
for i in test2_map.collect(): print(i)
test_join = test_map.join(test2_map)
for i in test_join.collect(): print(i)
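To see what the corrected pair-RDD join produces, the same keyed inner join can be sketched in plain Python, no Spark needed. This is a minimal sketch of the logic only; keying test2 by its last comma-separated field is an assumption made here because two of the sample rows are missing the player column, which would make a.split(",")[2] raise IndexError on them:

```python
# Plain-Python sketch of the keyed inner join that RDD.join performs.
# test rows are "id,Team"; test2 rows are ideally "id,Player,Team",
# but two sample rows lack the player field, so we key test2 by the
# LAST field and keep the remaining fields as the value.
test_rows = ["1,Team1", "2,Team2", "3,Team3"]
test2_rows = ["11,Player1,Team1", "22,Team2", "32,Team3"]

test_pairs = {}
for row in test_rows:
    id_, team = row.upper().split(",")
    test_pairs.setdefault(team, []).append(id_)

test2_pairs = {}
for row in test2_rows:
    fields = row.upper().split(",")
    team, rest = fields[-1], fields[:-1]  # last field is the team key
    test2_pairs.setdefault(team, []).append(rest)

# Inner join on the team key, like test_map.join(test2_map):
joined = []
for team in test_pairs:
    if team in test2_pairs:
        for left_value in test_pairs[team]:
            for right_value in test2_pairs[team]:
                joined.append((team, left_value, right_value))

for row in joined:
    print(row)
```

Like RDD.join, this emits one output row per matching (left, right) combination for each key; keys present on only one side are dropped.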