Asked by use*_*737

Memory issues when importing Parquet files in Spark

I am trying to query data from Parquet files in Spark 1.5 (Scala), including a query that returns about 2 million rows ("variants" in the code below).

val sqlContext = new org.apache.spark.sql.SQLContext(sc)  
sqlContext.sql("SET spark.sql.parquet.binaryAsString=true")

val parquetFile = sqlContext.read.parquet(<path>)

parquetFile.registerTempTable("tmpTable")
sqlContext.cacheTable("tmpTable")

val patients = sqlContext.sql("SELECT DISTINCT patient FROM tmpTable ...)

val variants = sqlContext.sql("SELECT DISTINCT ... FROM tmpTable ... )

This runs fine when only a small number of rows is fetched, but it fails with a "Size exceeds Integer.MAX_VALUE" error when a large amount of data is requested. The error looks like this:

User class threw exception: org.apache.spark.SparkException:
Job aborted due to stage failure: Task 43 in stage 1.0 failed 4 times,
most recent failure: Lost task 43.3 in stage 1.0 (TID 123, node009):
java.lang.RuntimeException: java.lang.IllegalArgumentException:
Size exceeds Integer.MAX_VALUE at
sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:828) at
org.apache.spark.storage.DiskStore$$anonfun$getBytes$2.apply(DiskStore.scala:125) at
org.apache.spark.storage.DiskStore$$anonfun$getBytes$2.apply(DiskStore.scala:113) at ...

What can I do to make this work? …
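
For reference, one thing I could try (just a sketch, assuming the failing block is a single cached partition that has grown past the 2 GB limit; the partition count of 200 is an arbitrary example value) is to split the data into more, smaller partitions before caching:

val parquetFile = sqlContext.read
  .parquet(<path>)
  .repartition(200)  // arbitrary example value; more partitions mean smaller cached blocks

parquetFile.registerTempTable("tmpTable")
sqlContext.cacheTable("tmpTable")

I am not sure whether this actually avoids the limit or just pushes it out to a larger dataset.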

scala apache-spark parquet apache-spark-sql

6 votes · 1 answer · 1975 views
