ksi*_*ndi · 7 · tags: python, amazon-s3, apache-spark, parquet, pyspark
I have Parquet data in S3 partitioned by nyc_date, in the format s3://mybucket/mykey/nyc_date=Y-m-d/*.gz.parquet.

I have a DateType column, event_date, and for some reason this error is thrown when I try to read from S3 and write to HDFS using EMR.
from pyspark.sql import SparkSession

# Build a session with Hive support enabled.
spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Read every nyc_date= partition under the prefix, then write 100 rows to HDFS as gzip Parquet.
df = spark.read.parquet('s3a://mybucket/mykey/')
df.limit(100).write.parquet('hdfs:///output/', compression='gzip')
Error:
java.lang.UnsupportedOperationException: org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary
at org.apache.parquet.column.Dictionary.decodeToInt(Dictionary.java:48)
at org.apache.spark.sql.execution.vectorized.OnHeapColumnVector.getInt(OnHeapColumnVector.java:233)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:389)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Here's what I've figured out:

- Excluding event_date avoids the error entirely.
- Reading with 's3a://mybucket/mykey/*/*.gz.parquet' still raises the error.
- Really strange that only the DateType column triggers this; I don't have any other DateType columns.
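Since excluding event_date avoids the error, my best guess is that some partitions store it with a different physical type: the trace shows the vectorized reader calling Dictionary.decodeToInt on a PlainBinaryDictionary, as if it expected an int-backed DateType column but hit binary (string) dictionary pages. A minimal sketch of how I'd check for a per-partition schema mismatch (the concrete partition dates below are just examples):

from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Compare the schema Spark infers for individual partitions against the
# whole prefix; a mismatch on event_date (date vs. string) across
# partitions would explain the dictionary decode failure.
for path in ['s3a://mybucket/mykey/nyc_date=2016-01-01/',
             's3a://mybucket/mykey/nyc_date=2016-01-02/']:
    print(path)
    spark.read.parquet(path).printSchema()

spark.read.parquet('s3a://mybucket/mykey/').printSchema()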
Using Spark 2.0.2 and EMR 5.2.0.
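One thing I plan to try (not a confirmed fix): every failing frame in the trace (OnHeapColumnVector, decodeToInt) belongs to the vectorized Parquet reader, so forcing Spark onto the slower non-vectorized path should at least change the code path that trips on the dictionary:

from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Disable the vectorized reader before reading; the non-vectorized
# parquet-mr path does not go through OnHeapColumnVector at all.
spark.conf.set('spark.sql.parquet.enableVectorizedReader', 'false')

df = spark.read.parquet('s3a://mybucket/mykey/')
df.limit(100).write.parquet('hdfs:///output/', compression='gzip')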