My DataFrame schema looks like the following, and I want to create a case class so I can build the schema manually.
|-- _id: struct (nullable = true)
| |-- oid: string (nullable = true)
|-- message: string (nullable = true)
|-- powerData: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- current: array (nullable = true)
| | | |-- element: double (containsNull = true)
| | |-- delayStartTime: double (nullable = true)
| | |-- idSub1: string (nullable = true)
| | |-- motorNumber: integer (nullable = true)
| | |-- power: array (nullable = true)
| | | |-- element: double (containsNull = true)
I created a case class like this, but I'm not sure how to declare the StructFields inside it.
case class currentSchema(_id: StructType, message: String, powerData: Array[StructType])
When I apply this schema to my DF, I get the following error.
val dfRef = MongoSpark.load[currentSchema](sparkSessionRef)
Exception in thread "main" scala.MatchError: org.apache.spark.sql.types.StructType (of class scala.reflect.internal.Types$ClassNoArgsTypeRef)
Has anyone done this before? Any help would be appreciated.
Thanks in advance.
You will have to create a separate case class for each nested struct. Spark cannot derive an encoder for a field declared as StructType, which is what triggers the MatchError; model each struct with its own case class instead.
case class idStruct(oid: String)
case class pdStruct(current: Array[Double], delayStartTime: Double, idSub1: String, motorNumber: Int, power: Array[Double])
case class currentSchema(_id: idStruct, message: String, powerData: Array[pdStruct])
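If it helps, you can sanity-check the mapping before calling MongoSpark.load by deriving the Spark schema directly from the case class with Encoders.product and comparing it to the printed schema. This is only a minimal sketch; the object name SchemaCheck and the val name derived are made up for illustration:

import org.apache.spark.sql.Encoders

object SchemaCheck {
  // Same case classes as above, mirroring the printed schema
  case class idStruct(oid: String)
  case class pdStruct(current: Array[Double], delayStartTime: Double,
                      idSub1: String, motorNumber: Int, power: Array[Double])
  case class currentSchema(_id: idStruct, message: String, powerData: Array[pdStruct])

  def main(args: Array[String]): Unit = {
    // Derive the StructType Spark infers from the case class and print it
    // in the same tree format as df.printSchema() for easy comparison.
    val derived = Encoders.product[currentSchema].schema
    derived.printTreeString()
  }
}

One caveat: fields typed as Int or Double come out non-nullable in the derived schema, so if those values can be missing in your Mongo documents you may need Option[Int] / Option[Double] instead.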