How do I set Spark's heap size in the Eclipse environment?

Yas*_*r S 7 eclipse heap-memory apache-spark

I am trying to run the following simple Spark code in Eclipse:

import org.apache.spark.sql.SQLContext
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
object jsonreader {  
  def main(args: Array[String]): Unit = {
    println("Hello, world!")
    val conf = new SparkConf()
      .setAppName("TestJsonReader")
      .setMaster("local")
      .set("spark.driver.memory", "3g") 
    val sc = new SparkContext(conf)

    val sqlContext = new SQLContext(sc)
    val df = sqlContext.read.format("json").load("text.json")

    df.printSchema()
    df.show   
  }
}

However, I get the following error:

16/08/18 18:05:28 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: System memory 259522560 must be at least 471859200. Please increase heap size using the --driver-memory option or spark.driver.memory in Spark configuration.

I followed various tutorials, such as "How to set Apache Spark Executor memory". Most of them suggest either the --driver-memory option (which is not available when launching from Eclipse) or editing the Spark configuration files, but there is no such file in my setup.

Does anyone know how to solve this in an Eclipse environment?

aba*_*hel 17

In Eclipse, go to Run > Run Configurations... > Arguments > VM arguments and set the max heap size, e.g. -Xmx512m.
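To confirm the VM argument is actually picked up, a minimal sketch (not part of the original answer) is to print the JVM max heap from the driver; -Xmx controls this value, and the error message above indicates Spark requires roughly 450 MB of it:

object HeapCheck {
  def main(args: Array[String]): Unit = {
    // Max heap available to this JVM, in bytes; this is what -Xmx controls
    val maxHeap = Runtime.getRuntime.maxMemory
    println(s"JVM max heap: $maxHeap bytes (~${maxHeap / (1024 * 1024)} MB)")
    // The error above demands at least 471859200 bytes (~450 MB)
  }
}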


Duy*_*Bui 6

I ran into this problem too, and this is how I solved it. Hope it helps.

val conf: SparkConf = new SparkConf().setMaster("local[4]").setAppName("TestJsonReader").set("spark.driver.host", "localhost")
conf.set("spark.testing.memory", "2147480000") // ~2 GB; overrides the value Spark uses in its minimum-memory check


小智 5

After modifying the script with conf.set("spark.testing.memory", "2147480000"), it works fine for me.


The full code is below:

import scala.math.random
import org.apache.spark._

object SparkPi {
  def main(args: Array[String]) {
    val conf: SparkConf = new SparkConf().setMaster("local").setAppName("Spark Pi").set("spark.driver.host", "localhost")

    conf.set("spark.testing.memory", "2147480000") // if you face any memory issues

    val spark = new SparkContext(conf)
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow

    val count = spark.parallelize(1 until n, slices).map { i =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)

    println("Pi is roughly " + 4.0 * count / n)
    spark.stop()
  }
}

Step 2

Run it as "Scala Application"

Step 3
Create a JAR file and run it:

bin/spark-submit --class SparkPi --master local SparkPi.jar
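When launching through spark-submit like this (rather than from inside Eclipse), the --driver-memory option mentioned in the error message can also be passed directly; the 3g value below is only an illustration:

bin/spark-submit --class SparkPi --master local --driver-memory 3g SparkPi.jar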