Related troubleshooting questions (0)

Spark java.lang.OutOfMemoryError: Java heap space

My cluster: 1 master, 11 slaves, each node with 6 GB of memory.

My settings:

spark.executor.memory=4g, -Dspark.akka.frameSize=512
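
For reference, a minimal sketch of how these two settings are typically supplied through SparkConf (assuming a SparkConf-based setup, Spark 0.9+; the app name is illustrative):

// Hypothetical setup matching the question's values.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("ImageBundleJob")            // illustrative name, not from the question
  .set("spark.executor.memory", "4g")
  .set("spark.akka.frameSize", "512")      // in MB; the akka transport only exists on pre-2.x Spark
val sc = new SparkContext(conf)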

Here is the problem:

First, I read some data (2.19 GB) from HDFS into an RDD:

val imageBundleRDD = sc.newAPIHadoopFile(...)

Second, do something with this RDD:

val res = imageBundleRDD.map(data => {
  val desPoints = threeDReconstruction(data._2, bg)
  (data._1, desPoints)
})

Finally, write the output to HDFS:

res.saveAsNewAPIHadoopFile(...)

When I run my program, it shows:

.....
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Starting task 1.0:24 as TID 33 on executor 9: Salve7.Hadoop (NODE_LOCAL)
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Serialized task 1.0:24 as 30618515 bytes in 210 ms
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Starting task 1.0:36 as TID 34 on executor 2: Salve11.Hadoop (NODE_LOCAL)
14/01/15 21:42:28 INFO …

out-of-memory apache-spark

208 votes · 9 answers · 210k views

Spark - repartition() vs coalesce()

According to Learning Spark:

Keep in mind that repartitioning your data is a fairly expensive operation. Spark also has an optimized version of repartition() called coalesce() that allows avoiding data movement, but only if you are decreasing the number of RDD partitions.

One difference I get is that with repartition() the number of partitions can be increased or decreased, but with coalesce() the number of partitions can only be decreased.

If the partitions are spread across multiple machines and coalesce() is run, how can it avoid data movement?
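
For illustration, a minimal Scala sketch contrasting the two calls, assuming a live SparkContext sc; all names are illustrative:

// Start with 100 partitions.
val rdd = sc.parallelize(1 to 1000000, numSlices = 100)

// repartition() always performs a full shuffle; it can increase or
// decrease the partition count.
val wider = rdd.repartition(200)
val fewer = rdd.repartition(10)

// coalesce() merges existing partitions and avoids a shuffle, but
// only when the target count is lower than the current one.
val merged = rdd.coalesce(10)

println(merged.getNumPartitions)  // 10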

distributed-computing apache-spark rdd

208 votes · 13 answers · 150k views

Spark runs out of memory when grouping by key

I am trying to do a simple transformation of Common Crawl data using a Spark host on EC2, following this guide. My code looks like this:

package ccminer

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object ccminer {
  val english = "english|en|eng"
  val spanish = "es|esp|spa|spanish|espanol"
  val turkish = "turkish|tr|tur|turc"
  val greek = "greek|el|ell"
  val italian = "italian|it|ita|italien"
  val all = (english :: spanish :: turkish :: greek :: italian :: Nil).mkString("|")

  def langIndep(s: String) = s.toLowerCase().replaceAll(all, "*")

  def main(args: Array[String]): Unit = {
    if (args.length != 3) {
      System.err.println("Bad command line")
      System.exit(-1)
    }

    val cluster = "spark://???"
    val sc = new SparkContext(cluster, "Common Crawl Miner", …
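
Since the snippet is cut off before the grouping step, here is a hypothetical sketch of the pattern the title refers to, assuming a live SparkContext sc; names and sample data are illustrative:

// Hypothetical (language, sentence) pairs after langIndep normalization.
val pairs = sc.parallelize(Seq(("en", "hello"), ("es", "hola"), ("en", "hi")))

// groupByKey() ships and buffers every value of a key on one executor,
// which is the classic cause of the OOM the title describes.
val grouped = pairs.groupByKey()

// reduceByKey() combines map-side first, so each key carries only a
// running aggregate instead of the full value list.
val counts = pairs.mapValues(_ => 1L).reduceByKey(_ + _)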

scala amazon-ec2 apache-spark

12 votes · 2 answers · 10k views

Invalid status code '400' from .. with error payload: "requirement failed: Session isn't active"

I am running a PySpark script to write a dataframe to CSV in a Jupyter Notebook, as follows:

df.coalesce(1).write.csv('Data1.csv',header = 'true')

After running for an hour, I get the following error.

Error: Invalid status code from http://.....: session isn't active.

My configuration is as follows:

spark.conf.set("spark.dynamicAllocation.enabled","true")
spark.conf.set("shuffle.service.enabled","true")
spark.conf.set("spark.dynamicAllocation.minExecutors",6)
spark.conf.set("spark.executor.heartbeatInterval","3600s")
spark.conf.set("spark.cores.max", "4")
spark.conf.set("spark.sql.tungsten.enabled", "true")
spark.conf.set("spark.eventLog.enabled", "true")
spark.conf.set("spark.app.id", "Logs")
spark.conf.set("spark.io.compression.codec", "snappy")
spark.conf.set("spark.rdd.compress", "true")
spark.conf.set("spark.executor.instances", "6")
spark.conf.set("spark.executor.memory", '20g')
spark.conf.set("hive.exec.dynamic.partition", "true")
spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")
spark.conf.set("spark.driver.allowMultipleContexts", "true")
spark.conf.set("spark.master", "yarn")
spark.conf.set("spark.driver.memory", "20G")
spark.conf.set("spark.executor.instances", "32")
spark.conf.set("spark.executor.memory", "32G")
spark.conf.set("spark.driver.maxResultSize", "40G")
spark.conf.set("spark.executor.cores", "5")
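
For reference, application-level properties such as spark.executor.memory generally take effect only when the session is created, not via spark.conf.set() afterwards. A minimal Scala sketch of supplying them at build time (the equivalent keys apply from PySpark via SparkSession.builder.config; values below are taken from the question's own config):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("Logs")                                  // app id/name from the question's config
  .master("yarn")
  .config("spark.dynamicAllocation.enabled", "true")
  .config("spark.shuffle.service.enabled", "true")  // note the full key, with the "spark." prefix
  .config("spark.executor.memory", "20g")
  .config("spark.executor.cores", "5")
  .getOrCreate()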

I checked the container nodes, and the error is:

ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Container marked as failed:container_e836_1556653519610_3661867_01_000005 on host: ylpd1205.kmdc.att.com. Exit status: 143. Diagnostics: Container killed …

apache-spark pyspark livy

4 votes · 2 answers · 3,663 views