My cluster: 1 master and 11 slaves, each node with 6 GB of memory.
My settings:
spark.executor.memory=4g, -Dspark.akka.frameSize=512
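For reference, settings like these are typically passed either as JVM system properties (older 0.8-style setups) or through SparkConf when the context is created. A minimal sketch, where the app name and the exact wiring are placeholders rather than part of my setup:

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical example of where the two settings above would usually go.
// spark.akka.frameSize (pre-Spark 2.0) is the max driver<->executor message size in MB.
val conf = new SparkConf()
  .setAppName("imageBundleApp")           // placeholder name
  .set("spark.executor.memory", "4g")     // heap size for each executor JVM
  .set("spark.akka.frameSize", "512")     // frame size in MB
val sc = new SparkContext(conf)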
Here is the problem:
First, I read some data (2.19 GB) from HDFS into an RDD:
val imageBundleRDD = sc.newAPIHadoopFile(...)
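The arguments of that call are elided above; purely for illustration, a call of this shape might look like the following, where the path, key/value types, and input format are hypothetical placeholders, not what I actually use:

import org.apache.hadoop.io.{BytesWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat

// Hypothetical example of reading a SequenceFile into a pair RDD;
// the real path and Writable types are not shown in the question.
val imageBundleRDD = sc.newAPIHadoopFile(
  "hdfs:///path/to/imageBundles",                          // placeholder path
  classOf[SequenceFileInputFormat[Text, BytesWritable]],   // placeholder input format
  classOf[Text],                                           // placeholder key type
  classOf[BytesWritable])                                  // placeholder value type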
Second, do something with this RDD:
val res = imageBundleRDD.map(data => {
  val desPoints = threeDReconstruction(data._2, bg)
  (data._1, desPoints)
})
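Note that `bg` is captured from the driver by the map closure and serialized with every task, which may be related to the ~30 MB serialized tasks in the log below. A minimal sketch, assuming `bg` is a large driver-side object, would be to broadcast it once per executor instead; `bg` and `threeDReconstruction` are the identifiers from my code, everything else here is illustrative:

// Sketch only: ship `bg` to executors once via a broadcast variable
// instead of re-serializing it inside every task closure.
val bgBroadcast = sc.broadcast(bg)
val res = imageBundleRDD.map { data =>
  val desPoints = threeDReconstruction(data._2, bgBroadcast.value)
  (data._1, desPoints)
}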
Finally, output to HDFS:
res.saveAsNewAPIHadoopFile(...)
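The save call's arguments are also elided; as an illustration only, one possible shape (with placeholder path and types mirroring the read side, since the actual key/value classes depend on what threeDReconstruction returns) is:

import org.apache.hadoop.io.{BytesWritable, Text}
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat

// Hypothetical example; on Spark < 1.3 this also needs:
// import org.apache.spark.SparkContext._
res.saveAsNewAPIHadoopFile(
  "hdfs:///path/to/output",                                  // placeholder path
  classOf[Text],                                             // placeholder key class
  classOf[BytesWritable],                                    // placeholder value class
  classOf[SequenceFileOutputFormat[Text, BytesWritable]])    // placeholder output format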
When I run my program, it shows:
.....
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Starting task 1.0:24 as TID 33 on executor 9: Salve7.Hadoop (NODE_LOCAL)
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Serialized task 1.0:24 as 30618515 bytes in 210 ms
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Starting task 1.0:36 as TID 34 on executor 2: Salve11.Hadoop (NODE_LOCAL)
14/01/15 21:42:28 INFO …