How can I write to Kafka from Spark with a changed schema without getting an exception?

Hex*_*rks 4 scala apache-kafka apache-spark parquet databricks

I am loading parquet files from Databricks into Spark:

val dataset = context.session.read.parquet(parquetPath)

Then I apply a few transformations like this one:

import org.apache.spark.sql.functions.{col, concat_ws, lit}

val df = dataset.withColumn(
  columnName,
  concat_ws("", col(columnName), lit(textToAppend)))

When I try to save it to Kafka as JSON (not back to parquet!):

import org.apache.spark.sql.functions.{lit, struct}

val withSource = df.select(
  lit("databricks").alias("source"),
  struct("*").alias("data"))

val server = "kafka.dev.server" // some url
val kafkaDf = withSource.selectExpr("to_json(struct(*)) AS value")
kafkaDf.write
  .format("kafka")
  .option("kafka.bootstrap.servers", server)
  .option("topic", topic)
  .save()

I get the following exception:

org.apache.spark.sql.execution.QueryExecutionException: Parquet column cannot be converted in file dbfs:/mnt/warehouse/part-00001-tid-4198727867000085490-1e0230e7-7ebc-4e79-9985-0a131bdabee2-4-c000.snappy.parquet. Column: [item_group_id], Expected: StringType, Found: INT32
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anonfun$prepareNextFile$1.apply(FileScanRDD.scala:310)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anonfun$prepareNextFile$1.apply(FileScanRDD.scala:287)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.sql.execution.datasources.SchemaColumnConvertNotSupportedException
    at com.databricks.sql.io.parquet.NativeColumnReader.readBatch(NativeColumnReader.java:448)
    at com.databricks.sql.io.parquet.DatabricksVectorizedParquetRecordReader.nextBatch(DatabricksVectorizedParquetRecordReader.java:330)
    at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:167)
    at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:40)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anonfun$prepareNextFile$1.apply(FileScanRDD.scala:299)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anonfun$prepareNextFile$1.apply(FileScanRDD.scala:287)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

This only happens when I try to read multiple partitions. For example, the /mnt/warehouse/ directory holds many parquet files, each representing the data for one datestamp. If I read only one of them I don't get the exception, but reading the whole directory throws the exception above.

It happens when I apply a transformation, like the one above where I change the data type of a column. How can I fix this? I am not trying to write back to parquet; I want to convert all files from the same source schema to a new schema and write them to Kafka.

Sha*_*ica 6

There seems to be an issue with the parquet files themselves. The item_group_id column does not have the same data type in every file: some files store it as a string, others as an integer. From the source code of the SchemaColumnConvertNotSupportedException we can see its description:

Exception thrown when the parquet reader finds column type mismatches.

A simple way to reproduce the problem can be found in one of the Spark tests on GitHub:

Seq(("bcd", 2)).toDF("a", "b").coalesce(1).write.mode("overwrite").parquet(s"$path/parquet")
Seq((1, "abc")).toDF("a", "b").coalesce(1).write.mode("append").parquet(s"$path/parquet")

spark.read.parquet(s"$path/parquet").collect()

Of course, this only happens when reading multiple files at once, or, as in the test above, when more data has been appended to an existing location. If a single file is read, there is no mismatch between the column's data types.
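
If it is not obvious which files hold the wrong type, each file's schema can be inspected separately. A minimal diagnostic sketch, where the file paths are placeholders and the column name is taken from the exception:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.getOrCreate()

// Placeholder paths; in practice, list the actual files under dbfs:/mnt/warehouse/
val parquetFiles = Seq(
  "dbfs:/mnt/warehouse/file_a.snappy.parquet",
  "dbfs:/mnt/warehouse/file_b.snappy.parquet")

// Print the type each file declares for the column named in the exception
parquetFiles.foreach { file =>
  val field = spark.read.parquet(file).schema.find(_.name == "item_group_id")
  println(s"$file -> ${field.map(_.dataType).getOrElse("column missing")}")
}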


The easiest way to solve the problem is to make sure that all files have the correct column types when they are written.
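
For example, if the job that produces the files is under your control, casting the offending column to one agreed type before writing keeps every file consistent. A sketch under that assumption, where sourceDf and the target path are hypothetical:

import org.apache.spark.sql.functions.col

// Cast the column to a single agreed type before writing, so every file matches
sourceDf
  .withColumn("item_group_id", col("item_group_id").cast("string"))
  .write
  .mode("overwrite")
  .parquet("dbfs:/mnt/warehouse_fixed") // hypothetical target path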

An alternative is to read all the parquet files separately, change the schemas to match, and then combine them with union. An easy way to do this is to cast every dataframe to a common schema:

// Specify the files and read as separate dataframes
val files = Seq(...)
val dfs = files.map(file => spark.read.parquet(file))

// Specify the schema (here the schema of the first file is used)
val schema = dfs.head.schema

// Create new columns with the correct names and types
val newCols = schema.map(c => col(c.name).cast(c.dataType))

// Select the new columns and merge the dataframes
val df = dfs.map(_.select(newCols: _*)).reduce(_ union _)
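
With df now built on a single consistent schema, the Kafka write from the question should no longer hit the conversion error. A sketch reusing the server and topic values from the question's snippet:

// Serialize each row as JSON and write it to Kafka, as in the question
df.selectExpr("to_json(struct(*)) AS value")
  .write
  .format("kafka")
  .option("kafka.bootstrap.servers", server)
  .option("topic", topic)
  .save()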

  • Thank you so much, that was exactly the problem: the source files had bad data. Thanks for pointing it out! (2 upvotes)