Apache Spark: Job aborted due to stage failure: "TID x failed for unknown reason"

Mag*_*sol 11 python apache-spark

I'm working through some strange error messages that I think boil down to a memory issue, but I'm having a hard time pinning it down and could use some guidance from the experts.

I have a two-machine Spark (1.0.1) cluster. Both machines have 8 cores; one has 16GB of RAM, the other 32GB (which is the master). My application involves computing pairwise pixel affinities in images, though the images I've tested so far range from 1920x1200 down to as small as 16x16.
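A simplified sketch of what the job appears to be doing, based on the traceback further down (the lambda and the collect come straight from the traceback; the broadcast setup and the cartesian pair construction are assumptions, not the exact script):

import numpy as np
from pyspark import SparkContext

sc = SparkContext(appName="pairwise-affinities")

pixels = np.random.rand(16 * 16)      # stand-in for a flattened grayscale image
IMAGE = sc.broadcast(pixels)          # broadcast so every task can index the image

indices = sc.parallelize(range(len(pixels)), 32)
pairs = indices.cartesian(indices)    # n^2 index pairs -- the expensive part
affinities = pairs.map(
    lambda x: np.abs(IMAGE.value[x[0]] - IMAGE.value[x[1]]))

result = affinities.collect()         # pulls all n^2 values back to the driver

If the job really does form all pairs, a 1920x1200 image yields on the order of 5 x 10^12 of them, which is why the memory and parallelism settings below matter so much.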

I did have to change a few memory and parallelism settings, otherwise I would get explicit OutOfMemoryExceptions. In spark-default.conf:

spark.executor.memory    14g
spark.default.parallelism    32
spark.akka.frameSize        1000

In spark-env.sh:

SPARK_DRIVER_MEMORY=10G

With these settings, however, I get a bunch of WARN statements about "Lost TID ..." (no task completes successfully) in addition to lost executors, repeated 4 times, until I finally get the following error message and the job crashes:

14/07/18 12:06:20 INFO TaskSchedulerImpl: Cancelling stage 0
14/07/18 12:06:20 INFO DAGScheduler: Failed to run collect at /home/user/Programming/PySpark-Affinities/affinity.py:243
Traceback (most recent call last):
  File "/home/user/Programming/PySpark-Affinities/affinity.py", line 243, in <module>
    lambda x: np.abs(IMAGE.value[x[0]] - IMAGE.value[x[1]])
  File "/net/antonin/home/user/Spark/spark-1.0.1-bin-hadoop2/python/pyspark/rdd.py", line 583, in collect
    bytesInJava = self._jrdd.collect().iterator()
  File "/net/antonin/home/user/Spark/spark-1.0.1-bin-hadoop2/python/lib/py4j-0.8.1-src.zip/py4j/java_gateway.py", line 537, in __call__
  File "/net/antonin/home/user/Spark/spark-1.0.1-bin-hadoop2/python/lib/py4j-0.8.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o27.collect.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0:13 failed 4 times, most recent failure: TID 32 on host master.host.univ.edu failed for unknown reason
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1044)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1028)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1026)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1026)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:634)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:634)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:634)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1229)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
    at akka.actor.ActorCell.invoke(ActorCell.scala:456)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

14/07/18 12:06:20 INFO DAGScheduler: Executor lost: 4 (epoch 4)
14/07/18 12:06:20 INFO BlockManagerMasterActor: Trying to remove executor 4 from BlockManagerMaster.
14/07/18 12:06:20 INFO BlockManagerMaster: Removed 4 successfully in removeExecutor
user@master:~/Programming/PySpark-Affinities$

If I run on a very small image (16x16), it appears to run to completion (giving me the output I expect without any exceptions being thrown). However, in the stderr logs of the application that ran, the state is listed as "KILLED" and the last message is "ERROR CoarseGrainedExecutorBackend: Driver Disassociated". If I run anything larger, I get the exception pasted above.

Furthermore, if I just do the spark-submit with master=local[*], aside from still needing to set the memory options above, it works for an image of any size (I've tested both machines independently; both behave this way when running local[*]).
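For reference, a minimal sketch of the equivalent local-mode setup, assuming the script builds its own SparkContext (the setMaster call is illustrative; passing --master "local[*]" to spark-submit achieves the same thing):

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setMaster("local[*]")                 # use all cores on the single machine
        .setAppName("affinity-local")
        .set("spark.akka.frameSize", "1000"))  # the enlarged frame size is still needed
sc = SparkContext(conf=conf)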

Any ideas what is going on?

sam*_*est 10

If I had a penny for every time I've asked people "have you tried increasing the number of partitions to something quite large, like at least 4 tasks per CPU — even as high as 1000 partitions?" I'd be a rich man. So: have you tried increasing the number of partitions?
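Concretely, something along these lines (a minimal sketch; the counts are illustrative, and if repartition isn't available in your PySpark version you can pass the partition count to parallelize or to the shuffle operations instead):

from pyspark import SparkContext

sc = SparkContext(appName="more-partitions")

# Either bake a large partition count in when the RDD is first created...
rdd = sc.parallelize(range(1920 * 1200), 1000)

# ...or reshuffle an existing RDD into many more partitions:
rdd = rdd.repartition(1000)

With 16 cores across the two machines, 1000 partitions is roughly 60+ tasks per core, which keeps each task's slice of the pairwise data small.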

Anyway, other things I've found help when resolving weird hang-ups:

Sometimes you can get a more informative stack trace by navigating to the stderr logs of a particular worker via the UI.

UPDATE: Since Spark 1.0.0 the Spark logs cannot be found via the UI, so you'll have to ask your sysadmin/devops to help you, because the location of the logs is completely undocumented.

  • Could you please share an example? I'm stuck on the same problem. (6 upvotes)