Tags: serialization, closures, hadoop, scala, apache-spark
When I use a case class, or a class/object that extends Serializable, inside a closure, Spark throws "Task not serializable".
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.HTable
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.joda.time.format.{DateTimeFormat, DateTimeFormatter}

object WriteToHbase extends Serializable {
  def main(args: Array[String]) {
    val csvRows: RDD[Array[String]] = ...
    val dateFormatter = DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss")
    val usersRDD = csvRows.map(row => {
      new UserTable(row(0), row(1), row(2), row(9), row(10), row(11))
    })
    processUsers(sc, usersRDD, dateFormatter)
  }

  def processUsers(sc: SparkContext, usersRDD: RDD[UserTable], dateFormatter: DateTimeFormatter): Unit = {
    usersRDD.foreachPartition(part => {
      val conf = HBaseConfiguration.create()
      val table = new HTable(conf, tablename)
      part.foreach(userRow => {
        val id = userRow.id
        val date1 = dateFormatter.parseDateTime(userRow.date1)
      })
      table.flushCommits()
      table.close()
    })
  }
}
My first attempt was to use a case class:
case class UserTable(id: String, name: String, address: String, ...) extends Serializable
My second attempt was to use a regular class instead of a case class:
class UserTable(val id: String, val name: String, val address: String, ...) extends Serializable
My third attempt was to add a companion object for the class:
object UserTable extends Serializable {
  def apply(id: String, name: String, address: String, ...) = new UserTable(id, name, address, ...)
}
It turned out to be the dateFormatter; once I moved it inside the partition loop, it works now.
usersRDD.foreachPartition(part => {
  val dateFormatter = DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss")
  part.foreach(userRow => {
    val id = userRow.id
    val date1 = dateFormatter.parseDateTime(userRow.date1)
  })
})
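For context, DateTimeFormat.forPattern returns a Joda-Time DateTimeFormatter, which is not serializable, so capturing it in the task closure is what triggered the exception; creating it inside foreachPartition keeps it on the executors. A minimal alternative sketch (the Formatters object below is hypothetical, not from the original code) is to hold the formatter in a top-level object, since Scala objects are initialized per JVM rather than serialized with the task:

import org.joda.time.format.{DateTimeFormat, DateTimeFormatter}

// Hypothetical helper: a top-level object is loaded and initialized on each executor JVM,
// so the non-serializable formatter is never shipped inside a task closure.
object Formatters {
  val dateFormatter: DateTimeFormatter = DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss")
}

Inside the partition loop the call then becomes Formatters.dateFormatter.parseDateTime(userRow.date1), and dateFormatter no longer has to be passed to processUsers at all.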