Posts by Use*_*rty

Invalid sync error when reading an Avro file with Spark or Hive

I have an Avro file created with the Java API. While the writer was appending data to the file, the program terminated abnormally because the machine rebooted. Now, when I try to read this file with Spark/Hive, it reads some data and then throws the following error (org.apache.avro.AvroRuntimeException: java.io.IOException: Invalid sync!):

INFO DAGScheduler: ShuffleMapStage 1 (count at DataReaderSpark.java:41) failed in 7.420 s due to Job aborted due to stage failure: Task 1 in stage 1.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1.0 (TID 2, localhost, executor driver): org.apache.avro.AvroRuntimeException: java.io.IOException: Invalid sync!
        at org.apache.avro.file.DataFileStream.hasNext(DataFileStream.java:210)
        at com.databricks.spark.avro.DefaultSource$$anonfun$buildReader$1$$anon$1.hasNext(DefaultSource.scala:215)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:106)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithoutKey$(Unknown Source)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
        at …
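The "Invalid sync!" error means the reader hit a block whose sync marker is corrupted, which is expected when a writer dies mid-append without closing the file. One common salvage approach is to copy the records that are still readable into a fresh, properly closed file and discard everything from the corrupt block onward. A minimal Java sketch of that idea, assuming the Avro Java library is on the classpath (the file paths are illustrative):

```java
import org.apache.avro.file.DataFileReader;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

import java.io.File;
import java.io.IOException;

public class AvroSalvage {
    public static void main(String[] args) throws IOException {
        File in  = new File("corrupted.avro"); // illustrative path
        File out = new File("salvaged.avro");  // illustrative path

        DataFileReader<GenericRecord> reader =
                new DataFileReader<>(in, new GenericDatumReader<GenericRecord>());
        DataFileWriter<GenericRecord> writer =
                new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(reader.getSchema()));
        // Reuse the schema embedded in the damaged file's header.
        writer.create(reader.getSchema(), out);

        long copied = 0;
        try {
            // hasNext() is what throws AvroRuntimeException("Invalid sync!")
            // when it reaches the corrupt block, so stop copying there.
            while (reader.hasNext()) {
                writer.append(reader.next());
                copied++;
            }
        } catch (RuntimeException e) {
            System.err.println("Stopped at corrupt block after " + copied + " records: " + e);
        }
        writer.close(); // close() writes a valid trailer, unlike the crashed writer
        reader.close();
    }
}
```

The salvaged file can then be read normally by Spark or Hive. Recent avro-tools releases also ship a `repair` tool that automates roughly this; whether it is available depends on your Avro version, so check `java -jar avro-tools.jar` for the command list.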

hive avro spark-avro avro-tools

5 votes · 0 answers · 1,830 views

Which compression types does Parquet support?

I am using Spark to write data in Parquet format to Hadoop and Hive. I want to enable compression, but I can only find two compression types, snappy (used most of the time) and gzip. Does Parquet support any other compression codecs, such as deflate and LZO?
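Parquet itself defines more codecs than the two mentioned; depending on your Spark and Parquet versions, Spark's Parquet writer accepts `none`/`uncompressed`, `snappy`, `gzip`, `lzo`, `lz4`, `brotli`, and `zstd` (LZO and Brotli typically require extra native libraries on the cluster). A sketch in Java showing the two usual places to set the codec, with an illustrative table name and output path:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ParquetCompressionDemo {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("parquet-compression")
                .getOrCreate();

        // Session-wide default for all Parquet output.
        spark.conf().set("spark.sql.parquet.compression.codec", "gzip");

        Dataset<Row> df = spark.read().table("source_table"); // illustrative table

        // Per-write override: the option() value takes precedence
        // over the session default above.
        df.write()
          .option("compression", "zstd")
          .parquet("/warehouse/output_zstd"); // illustrative path

        spark.stop();
    }
}
```

Note that `zstd` needs a reasonably recent stack (roughly Spark 2.3+ with Parquet 1.10+), so verify the codec list in the documentation for the exact versions you run.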

compression hadoop hive apache-spark parquet

4 votes · 2 answers · 10k views