Python pandas_udf Spark error

Shr*_*kar 2 pandas apache-spark pyspark pyarrow

I started playing with Spark locally and ran into this strange problem:

    1) pip install pyspark==2.3.1
    2) pyspark>

    import pandas as pd
    from pyspark.sql.functions import pandas_udf, PandasUDFType, udf
    df = pd.DataFrame({'x': [1, 2, 3], 'y': [1.0, 2.0, 3.0]})
    sp_df = spark.createDataFrame(df)

    @pandas_udf('long', PandasUDFType.SCALAR)
    def pandas_plus_one(v):
        return v + 1

    sp_df.withColumn('v2', pandas_plus_one(sp_df.x)).show()

The example comes from https://databricks.com/blog/2017/10/30/introducing-vectorized-udfs-for-pyspark.html

Any idea why I keep getting this error?

py4j.protocol.Py4JJavaError: An error occurred while calling o108.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3.0 (TID 8, localhost, executor driver): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:333)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:322)
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
    at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:177)
    at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:121)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:252)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at org.apache.spark.sql.execution.python.ArrowEvalPythonExec$$anon$2.<init>(ArrowEvalPythonExec.scala:90)
    at org.apache.spark.sql.execution.python.ArrowEvalPythonExec.evaluate(ArrowEvalPythonExec.scala:88)
    at org.apache.spark.sql.execution.python.EvalPythonExec$$anonfun$doExecute$1.apply(EvalPythonExec.scala:131)
    at org.apache.spark.sql.execution.python.EvalPythonExec$$anonfun$doExecute$1.apply(EvalPythonExec.scala:93)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:800)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:800)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:158)
    ... 27 more

小智 10

I had the same problem. I found that it was a version-compatibility problem between pandas and numpy.

For me, the following combination works:

numpy==1.14.5
pandas==0.23.4
pyarrow==0.10.0
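
To reproduce the working environment, the versions can be pinned the same way as the pyspark install in step 1 of the question (assuming a plain pip setup; adjust for virtualenv/conda as needed):

    pip install numpy==1.14.5 pandas==0.23.4 pyarrow==0.10.0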

Before that, I had the following non-working combination:

numpy==1.15.1
pandas==0.23.4
pyarrow==0.10.0
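
A quick way to confirm which versions the pyspark driver actually picks up is to print them from the same interpreter that launches pyspark — a minimal sketch:

    import numpy as np
    import pandas as pd
    import pyarrow as pa

    # Versions the Python workers will use; with the non-working combination
    # above, numpy reports 1.15.1 and the pandas_udf worker crashes.
    print('numpy  ', np.__version__)
    print('pandas ', pd.__version__)
    print('pyarrow', pa.__version__)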