Related troubleshooting questions (0)

Possible reasons for receiving a TimeoutException: Futures timed out after [n seconds] when using Spark

I am working on a Spark SQL program and I am getting the following exception:

16/11/07 15:58:25 ERROR yarn.ApplicationMaster: User class threw exception: java.util.concurrent.TimeoutException: Futures timed out after [3000 seconds]
java.util.concurrent.TimeoutException: Futures timed out after [3000 seconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:190)
    at org.apache.spark.sql.execution.joins.BroadcastHashJoin.doExecute(BroadcastHashJoin.scala:107)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.Project.doExecute(basicOperators.scala:46)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.Union$$anonfun$doExecute$1.apply(basicOperators.scala:144)
    at org.apache.spark.sql.execution.Union$$anonfun$doExecute$1.apply(basicOperators.scala:144)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
    at scala.collection.immutable.List.map(List.scala:285)
    at org.apache.spark.sql.execution.Union.doExecute(basicOperators.scala:144)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.columnar.InMemoryRelation.buildBuffers(InMemoryColumnarTableScan.scala:129) …
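The 3000-second wait in this trace happens inside BroadcastHashJoin.doExecute, which blocks on the broadcast future for at most spark.sql.broadcastTimeout (300 seconds by default, so it has already been raised here). A minimal sketch of the usual remedy, assuming PySpark on Spark 2.x; on the Spark 1.x this trace comes from, the same key can be set via sqlContext.setConf, and the value below is purely illustrative:

# Minimal sketch, PySpark on Spark 2.x assumed: give broadcast joins more
# time to build and ship the broadcast side of the join.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("broadcast-timeout-sketch")           # illustrative app name
    .config("spark.sql.broadcastTimeout", "3600")  # seconds
    .getOrCreate()
)

If even a generous timeout is not enough, setting spark.sql.autoBroadcastJoinThreshold to -1 disables automatic broadcast joins so the planner falls back to shuffle joins, which never touch this future.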

scala apache-spark apache-spark-sql spark-dataframe

16 votes · 1 answer · 20k views

ERROR yarn.ApplicationMaster: Uncaught exception: java.util.concurrent.TimeoutException: Futures timed out after [100000 milliseconds]

I have this problem in my Spark application; I am using Spark version 1.6 with Scala 2.10:

17/10/23 14:32:15 ERROR yarn.ApplicationMaster: Uncaught exception: 
java.util.concurrent.TimeoutException: Futures timed out after [100000 milliseconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:342)
    at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:197)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$main$1.apply$mcV$sp(ApplicationMaster.scala:680)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:69)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:68)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
    at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:68)
    at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:678)
    at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
17/10/23 14:32:15 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 10, (reason: Uncaught exception: java.util.concurrent.TimeoutException: Futures timed out after [100000 milliseconds])
17/10/23 14:32:15 INFO spark.SparkContext: Invoking stop() from shutdown hook
17/10/23 14:32:15 INFO …
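Here the timeout fires in ApplicationMaster.runDriver, meaning the YARN ApplicationMaster gave up waiting for the user class to initialize a SparkContext. That wait is bounded by spark.yarn.am.waitTime (100 s by default), which has to be raised at submit time (e.g. --conf spark.yarn.am.waitTime=300s) because it is read before user code runs. One common trigger is heavy setup work before the context is created; a minimal PySpark sketch of the safe ordering:

# Minimal sketch: in yarn-cluster mode, create the SparkContext as the very
# first step so the ApplicationMaster's spark.yarn.am.waitTime deadline is met.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("am-wait-sketch")  # illustrative app name
sc = SparkContext(conf=conf)                     # create this before any slow setup
# ... heavy initialization (loading lookup data, warming caches, etc.) goes here ...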

akka apache-spark apache-spark-sql

7 votes · 2 answers · 4,999 views

Spark job restarts after showing all jobs completed, and then fails (TimeoutException: Futures timed out after [300 seconds])

I am running a Spark job. The UI shows that all jobs have completed: (screenshot: Spark UI with all jobs completed)

However, a few minutes later the entire job restarts; this time it again shows all jobs and tasks as completed, but after a few more minutes it fails. I found this exception in the logs:

java.util.concurrent.TimeoutException: Futures timed out after [300 seconds]

This happens when I try to join two very large tables: one with 3B rows and the second with 200M rows. When I run show(100) on the resulting dataframe, everything gets evaluated and I run into this problem.

I have tried increasing/decreasing the number of partitions, and I changed the garbage collector to G1 with an increased number of threads. I changed spark.sql.broadcastTimeout to 600 (which changed the timeout message to 600 seconds).

I also read that this could be a communication problem, but other show() clauses that run earlier in this code work fine, so it is probably not that.

Here is the submit command:

/opt/spark/spark-1.4.1-bin-hadoop2.3/bin/spark-submit  --master yarn-cluster --class className --executor-memory 12g --executor-cores 2 --driver-memory 32g --driver-cores 8 --num-executors 40 --conf "spark.executor.extraJavaOptions=-XX:+UseG1GC -XX:ConcGCThreads=20" /home/asdf/fileName-assembly-1.0.jar

From it you can see the Spark version and the resources used.

Where do I go from here? Any help would be appreciated, and I can provide code snippets / additional logging if needed.
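Since raising spark.sql.broadcastTimeout to 600 only moved the failure from 300 to 600 seconds, the time is evidently being spent on the broadcast side of the join. With tables of this size, one option worth trying is to disable automatic broadcast joins altogether so the planner uses a shuffle join. A minimal sketch, written as PySpark against the Spark 1.x SQLContext API to match the spark-1.4.1 submit command above (the question's code is Scala, but the configuration keys are identical):

# Minimal sketch, PySpark on Spark 1.x assumed: force shuffle joins by
# disabling the automatic broadcast-join threshold (-1 turns it off).
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="big-join-sketch")  # illustrative app name
sqlContext = SQLContext(sc)
sqlContext.setConf("spark.sql.autoBroadcastJoinThreshold", "-1")

A shuffle join avoids the broadcast future entirely, at the cost of shuffling both sides of the join.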

scala apache-spark apache-spark-sql spark-dataframe

5 votes · 1 answer · 5,925 views

Persisting a dataframe in pyspark2 does not work when a storage level is specified. What am I doing wrong?

I am trying to persist two very large dataframes before performing a join, to work around the "java.util.concurrent.TimeoutException: Futures timed out..." problem (see: Why does a join fail with "java.util.concurrent.TimeoutException: Futures timed out after [300 seconds]"?).

persist() on its own works, but when I try to specify a storage level, I get a name error.

I have tried the following:

df.persist(pyspark.StorageLevel.MEMORY_ONLY) 
NameError: name 'MEMORY_ONLY' is not defined

df.persist(StorageLevel.MEMORY_ONLY) 
NameError: name 'StorageLevel' is not defined

import org.apache.spark.storage.StorageLevel 
ImportError: No module named org.apache.spark.storage.StorageLevel

Any help would be appreciated.
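The NameError and ImportError above point at the imports rather than at persist() itself: in PySpark, StorageLevel is exposed by the top-level pyspark package, while org.apache.spark.storage.StorageLevel is the Scala/Java path. A minimal self-contained sketch of the working form (spark.range stands in for the question's large dataframe):

# Minimal sketch: StorageLevel is importable from the top-level pyspark package.
from pyspark import StorageLevel
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("persist-sketch").getOrCreate()
df = spark.range(10)                   # stand-in for the real dataframe
df.persist(StorageLevel.MEMORY_ONLY)   # now resolves; MEMORY_AND_DISK etc. work the same

Equivalently, import pyspark first, and the original df.persist(pyspark.StorageLevel.MEMORY_ONLY) form works as written.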

apache-spark apache-spark-sql pyspark

2 votes · 1 answer · 5,893 views