I'm trying to call partitionBy on a nested field, like this:
val rawJson = sqlContext.read.json(filename)
rawJson.write.partitionBy("data.dataDetails.name").parquet(filenameParquet)
I get the following error when I run it. I do see "name" listed as a field in the schema below. Is there a different format for specifying a nested column name?
java.lang.RuntimeException: Partition column data.dataDetails.name not found in schema StructType(StructField(name,StringType,true), StructField(time,StringType,true), StructField(data,StructType(StructField(dataDetails,StructType(StructField(name,StringType,true), StructField(id,StringType,true)),true)),true))
Here is my json file:
{
  "name": "AssetName",
  "time": "2016-06-20T11:57:19.4941368-04:00",
  "data": {
    "type": "EventData",
    "dataDetails": {
      "name": "EventName",
      "id": "1234"
    }
  }
}
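For reference, reading this file back and printing the inferred schema confirms that name is nested two levels down. This is a minimal sketch using the same sqlContext and filename; the commented output is roughly what Spark's JSON schema inference produces (fields sorted alphabetically):

val rawJson = sqlContext.read.json(filename)
rawJson.printSchema()
// root
//  |-- data: struct (nullable = true)
//  |    |-- dataDetails: struct (nullable = true)
//  |    |    |-- id: string (nullable = true)
//  |    |    |-- name: string (nullable = true)
//  |    |-- type: string (nullable = true)
//  |-- name: string (nullable = true)
//  |-- time: string (nullable = true)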
This seems to be a known issue tracked here: https://issues.apache.org/jira/browse/SPARK-18084
I hit this issue as well, and to get around it I was able to un-nest the columns on my dataset. My dataset was a little different from yours, but here's the strategy...
Original Json:
{
  "name": "AssetName",
  "time": "2016-06-20T11:57:19.4941368-04:00",
  "data": {
    "type": "EventData",
    "dataDetails": {
      "name": "EventName",
      "id": "1234"
    }
  }
}
Modified Json:
{
  "name": "AssetName",
  "time": "2016-06-20T11:57:19.4941368-04:00",
  "data_type": "EventData",
  "data_dataDetails_name": "EventName",
  "data_dataDetails_id": "1234"
}
Code to produce the modified Json:
import org.apache.spark.sql.{Column, DataFrame}
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{StructField, StructType}

def main(args: Array[String]) {
  ...
  // Expand the "data" struct into top-level columns and keep "name" and "time" as-is
  val data = df.select(children("data", df) ++ Seq(col("name"), col("time")): _*)
  data.printSchema
  data.write.partitionBy("data_dataDetails_name").format("csv").save(...)
}

// One aliased column per field of the given struct column,
// e.g. data.type -> data_type, data.dataDetails -> data_dataDetails
def children(colname: String, df: DataFrame): Array[Column] = {
  val parent = df.schema.fields.filter(_.name == colname).head
  val fields = parent.dataType match {
    case x: StructType => x.fields
    case _ => Array.empty[StructField]
  }
  fields.map(x => col(s"$colname.${x.name}").alias(s"${colname}_${x.name}"))
}
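Note that children("data", df) expands only one level, so it yields data_type plus a still-nested data_dataDetails struct rather than the data_dataDetails_name shown in the modified Json above. Here is a minimal sketch of one way to finish the job by applying the same idea recursively; flatten is a hypothetical helper name and "/tmp/out" a placeholder path:

// Hypothetical recursive helper: keep expanding struct columns until none remain,
// so data.dataDetails.name ends up as data_dataDetails_name.
def flatten(df: DataFrame): DataFrame = {
  val hasStruct = df.schema.fields.exists(_.dataType.isInstanceOf[StructType])
  if (!hasStruct) df
  else {
    val cols = df.schema.fields.flatMap {
      case StructField(name, _: StructType, _, _) => children(name, df).toSeq
      case f => Seq(col(f.name))
    }
    flatten(df.select(cols: _*))
  }
}

val flat = flatten(df)
flat.write.partitionBy("data_dataDetails_name").format("csv").save("/tmp/out") // placeholder path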
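If you only need the partition column itself and not a fully flattened document, a lighter workaround (not from the answer above, but a common approach for this class of error) is to copy just the nested field into a top-level column and partition on that. A sketch assuming the asker's rawJson and filenameParquet; event_name is a hypothetical column name:

import org.apache.spark.sql.functions.col

// Promote the nested field to a top-level column, then partition on it
val withEventName = rawJson.withColumn("event_name", col("data.dataDetails.name"))
withEventName.write.partitionBy("event_name").parquet(filenameParquet)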