PySpark serialization EOFError

Tom*_*ace 27 python apache-spark pyspark apache-spark-1.6

I am reading in a CSV as a Spark DataFrame and performing machine learning operations on it. I keep getting a Python serialization EOFError. Any idea why? I thought it might be a memory issue, i.e. the file exceeding available RAM, but drastically reducing the size of the DataFrame did not prevent the EOF error.

Toy code and the error are below.

# set up the Spark context (Spark 1.6-style entry points)
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
from pyspark.ml.feature import RFormula
from pyspark.ml.classification import RandomForestClassifier

conf = SparkConf().setMaster("local").setAppName("MyApp")
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

# read in a 500 MB CSV as a DataFrame (requires the spark-csv package)
df = sqlContext.read.format('com.databricks.spark.csv').options(header='true',
     inferschema='true').load('myfile.csv')

# get the DataFrame into machine learning format
r_formula = RFormula(formula="outcome ~ .")
mldf = r_formula.fit(df).transform(df)

# fit a random forest model
rf = RandomForestClassifier(numTrees=3, maxDepth=2)
model = rf.fit(mldf)
result = model.transform(mldf).head()

Running the above code with spark-submit on a single node repeatedly throws the following error, even when the DataFrame is shrunk before fitting the model (e.g. tinydf = df.sample(False, 0.00001)):

Traceback (most recent call last):
  File "/home/hduser/spark1.6/python/lib/pyspark.zip/pyspark/daemon.py", line 157, in manager
  File "/home/hduser/spark1.6/python/lib/pyspark.zip/pyspark/daemon.py", line 61, in worker
  File "/home/hduser/spark1.6/python/lib/pyspark.zip/pyspark/worker.py", line 136, in main
    if read_int(infile) == SpecialLengths.END_OF_STREAM:
  File "/home/hduser/spark1.6/python/lib/pyspark.zip/pyspark/serializers.py", line 545, in read_int
    raise EOFError
EOFError
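For reference, a single-node launch consistent with the setup above would look something like this (the script name and the spark-csv package version are assumptions; Spark 1.6 needs the external spark-csv package for the com.databricks.spark.csv reader):

spark-submit --master local \
    --packages com.databricks:spark-csv_2.10:1.5.0 \
    myscript.py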

Abh*_*k P 3

The error appears to occur in pySpark's read_int function. Its code, from the Spark source, is as follows:

def read_int(stream):
    length = stream.read(4)
    if not length:
        raise EOFError
    return struct.unpack("!i", length)[0]

This means that when 4 bytes are expected from the stream but 0 bytes come back, the EOFError is raised; otherwise the 4 bytes are decoded as a big-endian signed int by struct.unpack("!i", ...). See the Python struct documentation for details.
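A minimal sketch of that behavior, using io.BytesIO to stand in for the worker's input stream (the stand-in stream is an assumption for illustration; Spark actually reads from a socket file object):

import io
import struct

def read_int(stream):
    length = stream.read(4)
    if not length:
        raise EOFError
    return struct.unpack("!i", length)[0]

# a stream holding a full 4-byte big-endian int decodes cleanly
print(read_int(io.BytesIO(struct.pack("!i", 42))))  # prints 42

# an exhausted stream returns b'' from read(4), which triggers the EOFError
try:
    read_int(io.BytesIO(b""))
except EOFError:
    print("EOFError: the stream ended before a length could be read")

That second case is what the traceback above shows: the worker's main loop was waiting for a SpecialLengths.END_OF_STREAM marker when its input stream closed early.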