
Apache Spark Word2Vec throws org.apache.spark.shuffle.FetchFailedException: requirement failed


I am running Spark Word2Vec on 2 GB of data using PySpark 3.0.0 with the following configuration.

from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("word2vec") \
    .master("local[*]") \
    .config("spark.driver.memory", "32g") \
    .config("spark.sql.execution.arrow.pyspark.enabled", "true") \
    .getOrCreate()

The code I run is as follows:

from pyspark.sql import functions as f
from pyspark.ml.feature import Word2Vec

sentence = sample_corpus.withColumn('sentences', f.split(sample_corpus.sentences, ',')).select('sentences')
word2vec = Word2Vec(vectorSize=300, inputCol="sentences", outputCol="result", minCount=10)
model = word2vec.fit(sentence)
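
For context, sample_corpus is assumed here to be a DataFrame with a single sentences column holding comma-joined tokens. A minimal hypothetical stand-in, just so the snippet above is self-contained, would be:

# Hypothetical example data, for illustration only; the real corpus is ~2 GB.
sample_corpus = spark.createDataFrame(
    [("spark,word2vec,shuffle,example",),
     ("another,short,token,sequence",)],
    ["sentences"],
)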

However, it throws the following error during the computation, without providing much useful debugging information.

FetchFailed(BlockManagerId(driver,fedora,34105,None),shuffleId=2,mapIndex=0,mapId=73,reduceId=0,message=
org.apache.spark.shuffle.FetchFailedException: requirement failed
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.throwFetchFailedException(ShuffleBlockFetcherIterator.scala:748)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:663)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:70)
    at org.apache.spark.util.CompletionIterator.next(CompletionIterator.scala:29)
    at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:31)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at org.apache.spark.util.collection.ExternalAppendOnlyMap.insertAll(ExternalAppendOnlyMap.scala:155)
    at org.apache.spark.Aggregator.combineCombinersByKey(Aggregator.scala:50)
    at org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:110)
    at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.IllegalArgumentException: requirement failed
    at scala.Predef$.require(Predef.scala:268)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator$SuccessFetchResult.<init>(ShuffleBlockFetcherIterator.scala:981)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.fetchLocalBlocks(ShuffleBlockFetcherIterator.scala:422)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.initialize(ShuffleBlockFetcherIterator.scala:536)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.<init>(ShuffleBlockFetcherIterator.scala:171)
    at org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:83)
    ... 14 more

)

My machine has 64 GB of RAM. I would like to know what the cause is here and how I can fix it. Thanks.
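
For reference, here is a sketch of mitigations commonly tried for this class of local-mode shuffle-fetch failure. It assumes the root cause is oversized shuffle blocks produced by too few partitions; that is an assumption, not something the trace confirms, and the partition count of 200 is a placeholder to tune against the actual data.

from pyspark.sql import SparkSession

# Sketch only, not a confirmed fix: raise the shuffle parallelism so each
# shuffle block stays small, keeping the rest of the configuration as above.
spark = SparkSession \
    .builder \
    .appName("word2vec") \
    .master("local[*]") \
    .config("spark.driver.memory", "32g") \
    .config("spark.sql.shuffle.partitions", "200") \
    .config("spark.sql.execution.arrow.pyspark.enabled", "true") \
    .getOrCreate()

# Repartitioning the input spreads the fit across more, smaller shuffle blocks.
sentence = sentence.repartition(200)
model = word2vec.fit(sentence)

Lowering vectorSize or raising minCount may also shrink the amount of model state shuffled during fitting, at some cost to embedding quality.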
