pyspark: "too many values" error after repartitioning

use*_*155 5 python apache-spark rdd apache-spark-sql pyspark

I have a DataFrame (converted to an RDD) and want to repartition it so that each key (the first column) gets its own partition. This is what I did:

# Repartition to # key partitions and map each row to a partition given their key rank
my_rdd = df.rdd.partitionBy(len(keys), lambda row: int(row[0]))

However, when I try to map it back to a DataFrame or save it, I get this error:

Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "spark-1.5.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
    process()
  File "spark-1.5.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "spark-1.5.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py", line 133, in dump_stream
    for obj in iterator:
  File "spark-1.5.1-bin-hadoop2.6/python/pyspark/rdd.py", line 1703, in add_shuffle_key
    for k, v in iterator:
ValueError: too many values to unpack

        at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
        at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
        at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
        at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:342)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        ... 1 more

Further testing showed that even this leads to the same error:

my_rdd = df.rdd.partitionBy(x)  # x can be 5, 100, etc.

Has any of you run into this before? If so, how did you solve it?

zer*_*323 4

partitionBy requires a PairwiseRDD, which in Python means an RDD of tuples (or lists) of length 2, where the first element is the key and the second is the value.
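
For comparison, partitionBy behaves as expected on a proper pair RDD. A minimal sketch, assuming an existing SparkContext named sc (as in the pyspark shell):

# Each element is a (key, value) tuple of length 2, so partitionBy can unpack it.
pairs = sc.parallelize([(0, "a"), (1, "b"), (0, "c"), (2, "d")])
partitioned = pairs.partitionBy(3)  # hashes the key (first element) to pick a partition

# glom() gathers each partition into a list, so we can inspect the layout
print(partitioned.glom().collect())
## e.g. [[(0, 'a'), (0, 'c')], [(1, 'b')], [(2, 'd')]]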

partitionFunc takes the key and maps it to a partition number. When you use it on an RDD[Row], it tries to unpack each row into a key and a value, which fails. (This unpacking happens in add_shuffle_key before any partition function is applied, which is why a bare df.rdd.partitionBy(x) fails as well.) You can reproduce it directly:

from pyspark.sql import Row

row = Row(1, 2, 3)
k, v = row

## Traceback (most recent call last):
##   ...
## ValueError: too many values to unpack (expected 2)

Even if you provide correctly shaped data, doing something like this:

my_rdd = df.rdd.map(lambda row: (int(row[0]), row)).partitionBy(len(keys))

doesn't really make much sense. Partitioning is not particularly meaningful in the case of DataFrames. See How to define partitioning of DataFrame? for more details.
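
That said, if the goal is simply to co-locate rows by key, there is a DataFrame-level alternative. A sketch, assuming Spark 1.6+ (where repartition accepts column expressions) and a hypothetical key column named "key":

# Repartition by column without dropping to the RDD API;
# rows with the same key end up in the same partition.
partitioned_df = df.repartition(len(keys), df["key"])

This avoids the round-trip through Python tuples entirely.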