I need help querying nested structures in Spark SQL using the sql method. I created a DataFrame on top of an existing RDD (dataRDD) with the following schema:
from pyspark.sql.types import (StructType, StructField,
                               LongType, StringType, IntegerType)

schema = StructType([
    StructField("m", LongType()),
    StructField("field2", StructType([
        StructField("st", StringType()),
        StructField("end", StringType()),
        StructField("dr", IntegerType()),
    ])),
])
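For reference, a row matching this schema can be written as a tuple whose second element is itself a tuple (or a Row); the values below are purely illustrative and not my real data:

from pyspark.sql import Row

# Illustrative rows only: the nested struct value may be a plain tuple or a Row
example_rows = [
    (1001, ("2015-01-01 00:00:00", "2015-01-01 00:05:00", 300)),
    (1002, Row(st="2015-01-02 10:00:00", end="2015-01-02 10:01:00", dr=60)),
]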
printSchema() returns:
root
|-- m: long (nullable = true)
|-- field2: struct (nullable = true)
| |-- st: string (nullable = true)
| |-- end: string (nullable = true)
| |-- dr: integer (nullable = true)
Creating the DataFrame from dataRDD and applying the schema works fine:
df= sqlContext.createDataFrame( dataRDD, schema )
df.registerTempTable( "logs" )
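For anyone who wants to reproduce this, the whole setup condensed into one snippet looks roughly like the following (run in a PySpark 1.x shell where sc and sqlContext already exist; the rows are made up and stand in for my real dataRDD):

from pyspark.sql.types import (StructType, StructField,
                               LongType, StringType, IntegerType)

schema = StructType([
    StructField("m", LongType()),
    StructField("field2", StructType([
        StructField("st", StringType()),
        StructField("end", StringType()),
        StructField("dr", IntegerType()),
    ])),
])

# Stand-in rows for my real dataRDD (values invented)
dataRDD = sc.parallelize([
    (1001, ("2015-01-01 00:00:00", "2015-01-01 00:05:00", 300)),
    (1002, ("2015-01-02 10:00:00", "2015-01-02 10:01:00", 60)),
])

df = sqlContext.createDataFrame(dataRDD, schema)
df.registerTempTable("logs")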
But retrieving the data does not work:
res = sqlContext.sql("SELECT m, field2.st FROM logs") # <- This fails
...org.apache.spark.sql.AnalysisException: cannot resolve 'field.st' given input columns msisdn, field2;
res = sqlContext.sql("SELECT …Run Code Online (Sandbox Code Playgroud)