sta*_*010 8 scala apache-spark apache-spark-sql
I am using Spark SQL with DataFrames. I have an input DataFrame and I would like to append (or insert) its rows to a larger DataFrame that has more columns. How do I do that?
If this were SQL, I would use INSERT INTO output SELECT ... FROM input, but I do not know how to do that with Spark SQL.
Concretely:
var input = sqlContext.createDataFrame(Seq(
  (10L, "Joe Doe", 34),
  (11L, "Jane Doe", 31),
  (12L, "Alice Jones", 25)
)).toDF("id", "name", "age")

var output = sqlContext.createDataFrame(Seq(
  (0L, "Jack Smith", 41, "yes", 1459204800L),
  (1L, "Jane Jones", 22, "no", 1459294200L),
  (2L, "Alice Smith", 31, "", 1459595700L)
)).toDF("id", "name", "age", "init", "ts")
scala> input.show()
+---+-----------+---+
| id| name|age|
+---+-----------+---+
| 10| Joe Doe| 34|
| 11| Jane Doe| 31|
| 12|Alice Jones| 25|
+---+-----------+---+
scala> input.printSchema()
root
|-- id: long (nullable = false)
|-- name: string (nullable = true)
|-- age: integer (nullable = false)
scala> output.show()
+---+-----------+---+----+----------+
| id| name|age|init| ts|
+---+-----------+---+----+----------+
| 0| Jack Smith| 41| yes|1459204800|
| 1| Jane Jones| 22| no|1459294200|
| 2|Alice Smith| 31| |1459595700|
+---+-----------+---+----+----------+
scala> output.printSchema()
root
|-- id: long (nullable = false)
|-- name: string (nullable = true)
|-- age: integer (nullable = false)
|-- init: string (nullable = true)
|-- ts: long (nullable = false)
I would like to append all rows of input to the end of output. At the same time, for those appended rows I would like to set the init column to the empty string '' and the ts column to the current timestamp, e.g. 1461883875L.
Any help would be appreciated.
zer*_*323 21
Spark DataFrames are immutable, so you cannot append/insert rows. Instead, just add the missing columns to the smaller DataFrame and use UNION ALL:
output.unionAll(input.select($"*", lit(""), current_timestamp.cast("long")))
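For reference, here is a fuller, self-contained sketch of the same idea, assuming the spark-shell session above (the value name appended and the column aliases are only illustrative). It spells out the imports and labels the added columns; note that in Spark 2.0+ unionAll is deprecated in favour of union:

import org.apache.spark.sql.functions.{lit, current_timestamp}
import sqlContext.implicits._  // for the $"..." column syntax; already in scope in spark-shell

// Add the two missing columns to `input` so it lines up positionally with `output`,
// then UNION ALL the two DataFrames into a new (immutable) result.
val appended = output.unionAll(
  input.select(
    $"id",
    $"name",
    $"age",
    lit("").as("init"),                         // empty string for `init`
    current_timestamp().cast("long").as("ts")   // current time as epoch seconds
  )
)

appended.show()

Since unionAll (like UNION ALL in SQL) matches columns by position, the resulting column names and types come from output; the aliases on the added columns are only for readability.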