Asked by Arv*_*ula (tags: apache-spark, apache-spark-sql)
I am trying to find a solution in Spark for grouping data by common elements in an array.
key       value
[k1,k2]   v1
[k2]      v2
[k3,k2]   v3
[k4]      v4
If any element of the keys matches, the rows must be assigned the same group id (group by common element).
Result:
key       value   GroupID
[k1,k2]   v1      G1
[k2]      v2      G1
[k3,k2]   v3      G1
[k4]      v4      G2
There have been suggestions to use Spark GraphX, but the learning curve seems too steep for a single feature.
Answer by hi-*_*zir:
Include the graphframes package (the latest supported Spark release is 2.1, but it should work with 2.2 as well; if you use a newer release you will have to build your own version patched for 2.3), replacing XXX with your Spark version and YYY with your Scala version:
spark.jars.packages graphframes:graphframes:0.5.0-sparkXXX-s_YYY
Explode the keys:
import org.apache.spark.sql.functions._

val df = Seq(
  (Seq("k1", "k2"), "v1"), (Seq("k2"), "v2"),
  (Seq("k3", "k2"), "v3"), (Seq("k4"), "v4")
).toDF("key", "value")

val edges = df.select(
  explode($"key") as "src", $"value" as "dst")
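For intuition, the explode step can be sketched in plain Python (the data mirrors the example above; this is an illustration of what the Spark `explode` produces, not the Spark API itself):

```python
# Example rows: (list of keys, value), mirroring the DataFrame above.
rows = [(["k1", "k2"], "v1"), (["k2"], "v2"),
        (["k3", "k2"], "v3"), (["k4"], "v4")]

# "Exploding" the key array yields one (key, value) pair per array
# element, just like explode($"key") emits one row per element.
edges = [(k, v) for keys, v in rows for k in keys]
# edges is [("k1","v1"), ("k2","v1"), ("k2","v2"),
#           ("k3","v3"), ("k2","v3"), ("k4","v4")]
```

Each pair acts as an edge from a key vertex to a value vertex, which is what makes the graph approach work.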
Convert to a GraphFrame:
import org.graphframes._
val gf = GraphFrame.fromEdges(edges)
Set the checkpoint directory (if not already set):
import org.apache.spark.sql.SparkSession
val path: String = ???
val spark: SparkSession = ???
spark.sparkContext.setCheckpointDir(path)
Find the connected components:
val components = gf.connectedComponents.setAlgorithm("graphx").run
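What `connectedComponents` computes here can be emulated in plain Python with a small union-find over the exploded edges (a sketch of the idea, not the graphframes implementation):

```python
def find(parent, x):
    # Path-compressing find: follow parents to the set representative.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, a, b):
    # Register unseen vertices, then merge their two components.
    parent.setdefault(a, a)
    parent.setdefault(b, b)
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

# Edges connect each key element to its value, as in the explode step.
edges = [("k1", "v1"), ("k2", "v1"), ("k2", "v2"),
         ("k3", "v3"), ("k2", "v3"), ("k4", "v4")]

parent = {}
for src, dst in edges:
    union(parent, src, dst)

# Vertices in the same connected component share a representative:
# v1, v2, v3 end up together (linked through k2), v4 stays apart.
components = {v: find(parent, v) for v in parent}
```

This is the same grouping the GraphX-backed algorithm produces, just computed on a single machine.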
Join the result with the input data:
val result = components
  .where($"id".startsWith("v"))
  .toDF("value", "group")
  .join(df, Seq("value"))
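The filter-and-join can likewise be sketched in Python: keep only the value vertices, then attach the original key arrays (the component ids below are made up for illustration):

```python
# Hypothetical component assignments (vertex id -> component id), shaped
# like the connectedComponents output: keys and values both appear.
components = [("k1", 1), ("k2", 1), ("v1", 1), ("v2", 1),
              ("k3", 1), ("v3", 1), ("k4", 2), ("v4", 2)]
df = [(["k1", "k2"], "v1"), (["k2"], "v2"),
      (["k3", "k2"], "v3"), (["k4"], "v4")]

# Keep only "value" vertices, then join back on value, mirroring
# where($"id".startsWith("v")) ... join(df, Seq("value")).
groups = {vid: comp for vid, comp in components if vid.startswith("v")}
result = [(value, groups[value], key) for key, value in df]
```

The filter matters because `connectedComponents` assigns ids to every vertex, keys included, and only the value rows are wanted in the output.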
Inspect the result:
result.show
// +-----+------------+--------+
// |value| group| key|
// +-----+------------+--------+
// | v3|489626271744|[k3, k2]|
// | v2|489626271744| [k2]|
// | v4|532575944704| [k4]|
// | v1|489626271744|[k1, k2]|
// +-----+------------+--------+
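The numeric component ids are not the G1/G2 labels the question asked for. One way to relabel them, sketched in plain Python (numbering the distinct ids smallest-first; in Spark something similar could be done with `dense_rank` over the group column):

```python
# Group column values as they come out of connectedComponents
# (ids copied from the output above, in row order v3, v2, v4, v1).
groups = [489626271744, 489626271744, 532575944704, 489626271744]

# Assign a dense label G1, G2, ... to each distinct id, smallest first.
labels = {g: f"G{i}" for i, g in enumerate(sorted(set(groups)), start=1)}
relabeled = [labels[g] for g in groups]
# relabeled is ["G1", "G1", "G2", "G1"]
```

Note that which component gets which label is arbitrary; only the partition into groups is meaningful.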