Tags: apache-spark, parquet, spark-streaming, hoodie, apache-hudi
I am using Spark to write JSON data to S3, and we use Apache Hudi for updates. I keep getting the error below, but only for some of the data; everything else works fine.
Caused by: org.apache.parquet.io.ParquetDecodingException: Can not read value at 1 in block 0 in file s3a://<path to parquet file>
  at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:251)
  at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:132)
  at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:136)
  at com.uber.hoodie.func.ParquetReaderIterator.hasNext(ParquetReaderIterator.java:45)
  at com.uber.hoodie.common.util.queue.IteratorBasedQueueProducer.produce(IteratorBasedQueueProducer.java:44)
  at com.uber.hoodie.common.util.queue.BoundedInMemoryExecutor.lambda$null$0(BoundedInMemoryExecutor.java:94)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  ... 4 more
Caused by: java.lang.UnsupportedOperationException: org.apache.parquet.avro.AvroConverters$FieldLongConverter
I can't figure this out. I followed several threads and set --conf "spark.sql.parquet.writeLegacyFormat=true" in my Spark confs, but even that didn't help.
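For context, here is roughly how the write path is set up; this is a minimal sketch assuming the pre-0.5.0 com.uber.hoodie datasource, and the input path, record key, precombine field, table name, and base path are all hypothetical placeholders, not values from my actual job:

import org.apache.spark.sql.{SaveMode, SparkSession}

// Spark session with the conf suggested in the threads I followed.
val spark = SparkSession.builder()
  .appName("hudi-upsert-example")
  .config("spark.sql.parquet.writeLegacyFormat", "true")
  .getOrCreate()

// Read the incoming JSON data (hypothetical input path).
val df = spark.read.json("s3a://my-bucket/input/")

// Upsert into a Hudi table on S3 (field and table names are placeholders).
df.write
  .format("com.uber.hoodie")
  .option("hoodie.datasource.write.operation", "upsert")
  .option("hoodie.datasource.write.recordkey.field", "id")
  .option("hoodie.datasource.write.precombine.field", "ts")
  .option("hoodie.table.name", "my_table")
  .mode(SaveMode.Append)
  .save("s3a://my-bucket/hudi/my_table")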