Asked by Jiv*_*nut (score 4) · Tags: python, apache-spark, pyspark, spark-dataframe, graphframes
I'm trying to find the most efficient way to take the Map output of the GraphFrames `shortestPaths` function and flatten each vertex's distances map into individual rows in a new DataFrame. I've managed to do it very clumsily by pulling the distances column into a dictionary, converting that to a pandas DataFrame, and then converting back to a Spark DataFrame, but I know there must be a better way.
from graphframes import *

# Create a Vertex DataFrame with a unique ID column "id"
v = sqlContext.createDataFrame([
    ("a", "Alice", 34),
    ("b", "Bob", 36),
    ("c", "Charlie", 30),
], ["id", "name", "age"])

# Create an Edge DataFrame with "src" and "dst" columns
e = sqlContext.createDataFrame([
    ("a", "b", "friend"),
    ("b", "c", "follow"),
    ("c", "b", "follow"),
], ["src", "dst", "relationship"])

# Create a GraphFrame
g = GraphFrame(v, e)

results = g.shortestPaths(landmarks=["a", "b", "c"])
results.select("id", "distances").show()
+---+--------------------+
| id| distances|
+---+--------------------+
| a|Map(a -> 0, b -> ...|
| b| Map(b -> 0, c -> 1)|
| c| Map(c -> 0, b -> 1)|
+---+--------------------+
What I want is to take the output above and flatten the distances while keeping each id, to get something like this:
+---+---+---------+
| id| v | distance|
+---+---+---------+
| a| a | 0 |
| a| b | 1 |
| a| c | 2 |
| b| b | 0 |
| b| c | 1 |
| c| c | 0 |
| c| b | 1 |
+---+---+---------+
Thanks.
Answered by 小智 (score 5):
You can use `explode`:
>>> from pyspark.sql.functions import explode
>>> results.select("id", explode("distances").alias("v", "distance")).show()