Jim*_*cks 10 dataframe apache-spark apache-spark-sql
I'm trying to find the best way to convert an entire Spark DataFrame into a Scala Map collection. This is best illustrated as follows:
Starting from here (from the Spark examples):
val df = sqlContext.read.json("examples/src/main/resources/people.json")
df.show
+----+-------+
| age| name|
+----+-------+
|null|Michael|
| 30| Andy|
| 19| Justin|
+----+-------+
To a Scala collection (a Map of Maps) represented like this:
val people = Map(
Map("age" -> null, "name" -> "Michael"),
Map("age" -> 30, "name" -> "Andy"),
Map("age" -> 19, "name" -> "Justin")
)
Dav*_*fin 14
I don't think your question quite makes sense: in your outermost Map I only see you trying to populate values, but a Map needs key/value pairs at the outermost level. That being said:
val peopleArray = df.collect.map(r => Map(df.columns.zip(r.toSeq):_*))
will give you:
Array(
Map("age" -> null, "name" -> "Michael"),
Map("age" -> 30, "name" -> "Andy"),
Map("age" -> 19, "name" -> "Justin")
)
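For completeness, here is a minimal, self-contained sketch of the same collect-and-zip approach, assuming a local SparkSession and an inline sample DataFrame in place of the people.json file from the question:
import org.apache.spark.sql.SparkSession

// Assumed setup for illustration; the original example reads people.json instead.
val spark = SparkSession.builder().master("local[*]").appName("df-to-maps").getOrCreate()
import spark.implicits._

val df = Seq((Option(30), "Andy"), (Option(19), "Justin"), (Option.empty[Int], "Michael"))
  .toDF("age", "name")

// Collect the rows to the driver and zip each row's values with the column names.
// Note: collect materializes the entire DataFrame in driver memory.
val peopleArray: Array[Map[String, Any]] =
  df.collect.map(r => Map(df.columns.zip(r.toSeq): _*))

peopleArray.foreach(println)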
At that point you could do:
val people = Map(peopleArray.map(p => (p.getOrElse("name", null), p)):_*)
Which would give you:
Map(
("Michael" -> Map("age" -> null, "name" -> "Michael")),
("Andy" -> Map("age" -> 30, "name" -> "Andy")),
("Justin" -> Map("age" -> 19, "name" -> "Justin"))
)
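As a quick usage note (a sketch assuming the people map built above): the outer Map lets you look up a whole record by name and then read individual fields from the inner Map, with values typed as Any:
val andy = people("Andy")   // Map("age" -> 30, "name" -> "Andy")
val andyAge = andy("age")   // typed as Any; cast (e.g. asInstanceOf[Long]) if you need a concrete type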
I'm guessing this is really closer to what you want. If instead you want to key them on an arbitrary Long index, you can do:
val indexedPeople = Map(peopleArray.zipWithIndex.map(r => (r._2, r._1)):_*)
Which gives you:
Map(
(0 -> Map("age" -> null, "name" -> "Michael")),
(1 -> Map("age" -> 30, "name" -> "Andy")),
(2 -> Map("age" -> 19, "name" -> "Justin"))
)
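A minor stylistic variation, not from the original answer: the manual tuple flip can also be written with swap, which produces the same result:
val indexedPeople = Map(peopleArray.zipWithIndex.map(_.swap): _*)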
First, get the schema from the DataFrame:
val schemaList = dataframe.schema.map(_.name).zipWithIndex  // get the (column name, index) schema list from the dataframe
Then get the RDD from the DataFrame and map over it:
dataframe.rdd.map(row =>
  // here rec._1 is the column name and rec._2 is its index
  schemaList.map(rec => (rec._1, row(rec._2))).toMap
).collect.foreach(println)
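Put together as one runnable sketch (assuming dataframe is an existing DataFrame, e.g. the df above); as a possible alternative, Spark's Row also provides getValuesMap, which builds the per-row Map directly:
// (column name, index) pairs taken from the DataFrame's schema.
val schemaList = dataframe.schema.map(_.name).zipWithIndex

// Build one Map per row on the executors, then collect the results to the driver.
val rowMaps: Array[Map[String, Any]] = dataframe.rdd.map { row =>
  schemaList.map { case (name, idx) => name -> row(idx) }.toMap
}.collect

rowMaps.foreach(println)

// Possible alternative: rows obtained from a DataFrame carry their schema,
// so this should be equivalent.
// dataframe.rdd.map(row => row.getValuesMap[Any](row.schema.fieldNames)).collect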