I want to use Spark to read a large (51 GB) XML file (on an external hard drive) into a DataFrame (using the spark-xml plugin), do some simple mapping/filtering, reorder it, and then write it back to disk as a CSV file.
But no matter how I tune things, I always get a java.lang.OutOfMemoryError: Java heap space.
I want to understand why increasing the number of partitions does not stop the OOM error.
Shouldn't it split the work into more parts, so that each individual part is smaller and does not cause memory problems?
(Spark can't possibly be trying to fit everything into memory and crashing when it doesn't fit, right?)
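One way to sanity-check whether the data really is being split up is to count how many rows land in each partition; a minimal sketch on top of the df defined in the code below (mapPartitionsWithIndex is standard RDD API; this snippet is not part of the original post):

// Count the rows in each partition of `df` (defined in the code below).
// If a few partitions hold most of the rows, adding partitions will not help much.
df.rdd
  .mapPartitionsWithIndex { (idx, rows) => Iterator((idx, rows.size)) }
  .collect()
  .sortBy(_._1)
  .foreach { case (idx, count) => println(s"partition $idx: $count rows") }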
Things I have tried:
- Setting spark.memory.fraction to 0.8 (the default is 0.6)
- Setting spark.memory.storageFraction to 0.2 (the default is 0.5)
- Setting spark.default.parallelism to 30 and 40 (the default for me is 8)
- Setting spark.files.maxPartitionBytes to 64M (the default is 128M)

All my code is here (note that I am not caching anything); a sketch of how these settings are applied follows the code listing:
val df: DataFrame = spark.sqlContext.read
  .option("mode", "DROPMALFORMED")
  .format("com.databricks.spark.xml")
  .schema(customSchema) // defined previously
  .option("rowTag", "row")
  .load(s"$pathToInputXML")
println(s"\n\nNUM PARTITIONS: ${df.rdd.getNumPartitions}\n\n")
// prints 1604
// I pass `numPartitions` as a CLI argument
val df2 = df.coalesce(numPartitions)
// filter and select only the cols I'm interested in
val dsout = df2
  .where( df2.col("_TypeId") === "1" )
  .select(
    df("_Id").as("id"),
    df("_Title").as("title"),
    df("_Body").as("body"),
    df("_Tags").as("tags") // needed so the row matches the four fields of `Post`
  ).as[Post]
// regexes to clean the text
val tagPat = "<[^>]+>".r
val angularBracketsPat = "><|>|<"
val whitespacePat = """\s+""".r
// more mapping
dsout
  .map {
    case Post(id, title, body, tags) =>
      val body1 = tagPat.replaceAllIn(body, "")
      val body2 = whitespacePat.replaceAllIn(body1, " ")
      Post(id, title.toLowerCase, body2.toLowerCase, tags.split(angularBracketsPat).mkString(","))
  }
  .orderBy(rand(SEED)) // random sort
  .write // write it back to disk
  .option("quoteAll", true)
  .mode(SaveMode.Overwrite)
  .csv(output)
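For reference, a minimal sketch of how the settings listed above can be applied when building the SparkSession (the config keys and values are the ones listed above; the app name and the builder call itself are just illustrative, not from the original post):

import org.apache.spark.sql.SparkSession

// Illustrative only: applies the config keys listed above while building the session.
val spark = SparkSession.builder()
  .appName("xml-to-csv")                          // hypothetical app name
  .master("local[8]")                             // 8 cores, as mentioned in the PS below
  .config("spark.memory.fraction", "0.8")         // default 0.6
  .config("spark.memory.storageFraction", "0.2")  // default 0.5
  .config("spark.default.parallelism", "40")      // tried 30 and 40
  .config("spark.files.maxPartitionBytes", "64m") // default 128m
  .getOrCreate()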
Notes
Update: I wrote a shorter version of the code that only reads the file and then does a foreachPartition(println).
I get the same OOM error:
val df: DataFrame = spark.sqlContext.read
  .option("mode", "DROPMALFORMED")
  .format("com.databricks.spark.xml")
  .schema(customSchema)
  .option("rowTag", "row")
  .load(s"$pathToInputXML")
  .repartition(numPartitions)
println(s"\n\nNUM PARTITIONS: ${df.rdd.getNumPartitions}\n\n")
df
  .where(df.col("_PostTypeId") === "1")
  .select(
    df("_Id").as("id"),
    df("_Title").as("title"),
    df("_Body").as("body"),
    df("_Tags").as("tags")
  ).as[Post]
  .map {
    case Post(id, title, body, tags) =>
      Post(id, title.toLowerCase, body.toLowerCase, tags.toLowerCase)
  }
  .foreachPartition { rdd => // note: `rdd` here is actually an Iterator[Post], not an RDD
    if (rdd.nonEmpty) {
      println(s"HI! I'm an RDD and I have ${rdd.size} elements!")
    }
  }
PS: I am using Spark v2.1.0. My machine has 8 cores and 16 GB of RAM.
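As an aside, assuming this runs in local mode, the driver and all tasks share a single JVM heap, and by default the JVM usually caps that heap well below the machine's physical RAM; a quick way to see the actual limit (plain JVM API, not from the original post):

// Print the maximum heap available to the current JVM, in GB.
// In local mode this one heap is shared by the driver and every Spark task.
val maxHeapGb = Runtime.getRuntime.maxMemory.toDouble / (1024 * 1024 * 1024)
println(f"Max JVM heap: $maxHeapGb%.1f GB")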
This is because you are storing the RDD twice. Your logic needs to change like this, or do the filtering with Spark SQL:
val df: DataFrame = SparkFactory.spark.read
  .option("mode", "DROPMALFORMED")
  .format("com.databricks.spark.xml")
  .schema(customSchema) // defined previously
  .option("rowTag", "row")
  .load(s"$pathToInputXML")
  .coalesce(numPartitions)
println(s"\n\nNUM PARTITIONS: ${df.rdd.getNumPartitions}\n\n")
// prints 1604
// regexes to clean the text
val tagPat = "<[^>]+>".r
val angularBracketsPat = "><|>|<"
val whitespacePat = """\s+""".r
// filter and select only the cols I'm interested in
df
  .where( df.col("_TypeId") === "1" )
  .select(
    df("_Id").as("id"),
    df("_Title").as("title"),
    df("_Body").as("body"),
    df("_Tags").as("tags") // needed so the row matches the four fields of `Post`
  ).as[Post]
  .map {
    case Post(id, title, body, tags) =>
      val body1 = tagPat.replaceAllIn(body, "")
      val body2 = whitespacePat.replaceAllIn(body1, " ")
      Post(id, title.toLowerCase, body2.toLowerCase, tags.split(angularBracketsPat).mkString(","))
  }
  .orderBy(rand(SEED)) // random sort
  .write // write it back to disk
  .option("quoteAll", true)
  .mode(SaveMode.Overwrite)
  .csv(output)
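The answer above also mentions doing the filtering with Spark SQL instead; a minimal sketch of that variant (the view name "posts" is a hypothetical placeholder, and the columns are the ones used above):

// Register the DataFrame as a temporary view and do the filtering in SQL.
// "posts" is a hypothetical view name.
df.createOrReplaceTempView("posts")

val filtered = SparkFactory.spark.sql(
  """SELECT `_Id` AS id, `_Title` AS title, `_Body` AS body, `_Tags` AS tags
    |FROM posts
    |WHERE `_TypeId` = '1'""".stripMargin
).as[Post]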