Posts by pen*_*eng

Spark Java error: Size exceeds Integer.MAX_VALUE

I am trying to use Spark for some simple machine learning tasks. I used PySpark with Spark 1.2.0 to run a simple logistic regression problem. I have 1.2 million training records, and I hashed the features of the records. When I set the number of hashed features to 1024 the program works fine, but when I set it to 16384 the job fails repeatedly with the following error:

Py4JJavaError: An error occurred while calling o84.trainLogisticRegressionModelWithSGD.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 4.0 failed 4 times, most recent failure: Lost task 1.3 in stage 4.0 (TID 9, workernode0.sparkexperience4a7.d5.internal.cloudapp.net): java.lang.RuntimeException: java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:828)
    at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:123)
    at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:132)
    at org.apache.spark.storage.BlockManager.doGetLocal(BlockManager.scala:517)
    at org.apache.spark.storage.BlockManager.getBlockData(BlockManager.scala:307)
    at org.apache.spark.network.netty.NettyBlockRpcServer$$anonfun$2.apply(NettyBlockRpcServer.scala:57)
    at org.apache.spark.network.netty.NettyBlockRpcServer$$anonfun$2.apply(NettyBlockRpcServer.scala:57)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
    at org.apache.spark.network.netty.NettyBlockRpcServer.receive(NettyBlockRpcServer.scala:57)
    at org.apache.spark.network.server.TransportRequestHandler.processRpcRequest(TransportRequestHandler.java:124)
    at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:97)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:91)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:44)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) …
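For reference, the pipeline described above might look roughly like the following in the Scala MLlib API (the question itself uses PySpark; the input RDD rawRecords, its (label, tokens) layout, and the iteration count are illustrative assumptions):

import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
import org.apache.spark.mllib.feature.HashingTF
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD

// rawRecords is a hypothetical RDD[(Double, Seq[String])] of
// (label, feature tokens) standing in for the 1.2 million records.
val hashingTF = new HashingTF(numFeatures = 16384) // 1024 works; 16384 fails
val trainingData: RDD[LabeledPoint] = rawRecords.map { case (label, tokens) =>
  LabeledPoint(label, hashingTF.transform(tokens))
}.cache()
val model = LogisticRegressionWithSGD.train(trainingData, 100)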

python java distributed-computing logistic-regression apache-spark

16 votes · 2 answers · 10,000 views

Spark executor lost failures

I am using a Databricks Spark cluster (AWS) to test my Scala experiments. I ran into problems while training on 10 GB of data with the LogisticRegressionWithLBFGS algorithm. The code block where I hit the problem is:

import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS

// training_set is an RDD[LabeledPoint] prepared earlier in the experiment
val algorithm = new LogisticRegressionWithLBFGS()
val model = algorithm.run(training_set)

At first I got a lot of executor lost failures and Java out-of-memory errors. I then repartitioned my training_set into more partitions; the out-of-memory problem went away, but I still get executor lost failures.
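A minimal sketch of that repartitioning step, continuing the snippet above (the partition count of 1000 is an illustrative assumption, not a tuned value):

// Spread the 10 GB training set over more, smaller partitions so that
// no single cached or shuffled block grows too large, then train again.
val repartitionedSet = training_set.repartition(1000).cache()
algorithm.run(repartitionedSet)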

My cluster has 72 cores and 500 GB of memory in total. Can anyone shed some light on this?

scala out-of-memory executor apache-spark

13 votes · 1 answer · 4,090 views