Spark: pass a list of all of a DataFrame's columns to groupBy


I need to group a DataFrame by all of its columns except "tag".

Right now I do it like this:

unionDf.groupBy("name", "email", "phone", "country").agg(collect_set("tag").alias("tags"))

Is it possible to get all of the columns (except "tag") and pass them to groupBy, without hard-coding them as "name", "email", "phone", "country" the way I do now?

I tried unionDf.groupBy(unionDf.columns), but it doesn't work.


Here is one way: compute the grouping columns as every column except "tag", then expand them as varargs to groupBy:

import org.apache.spark.sql.functions._
import spark.implicits._  // needed for toDF; assumes an active SparkSession named `spark` (as in spark-shell)

val df = Seq(
  ("a", "b@c.com", "123", "US", "ab1"),
  ("a", "b@c.com", "123", "US", "ab2"),
  ("d", "e@f.com", "456", "US", "de1")
).toDF("name", "email", "phone", "country", "tag")

// all column names except "tag"
val groupCols = df.columns.diff(Seq("tag"))

// map the names to Columns and expand them as varargs for groupBy
df.groupBy(groupCols.map(col): _*).agg(collect_set("tag").alias("tags")).show
// +----+-------+-----+-------+----------+
// |name|  email|phone|country|      tags|
// +----+-------+-----+-------+----------+
// |   d|e@f.com|  456|     US|     [de1]|
// |   a|b@c.com|  123|     US|[ab2, ab1]|
// +----+-------+-----+-------+----------+
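If you prefer the String-based overload groupBy(col1: String, cols: String*), the same result can be had without mapping to Column. A minimal sketch, reusing df and groupCols from above and assuming groupCols is non-empty:

// head/tail split matches the (String, String*) overload of groupBy
df.groupBy(groupCols.head, groupCols.tail: _*)
  .agg(collect_set("tag").alias("tags"))
  .show

This also hints at why unionDf.groupBy(unionDf.columns) fails: columns returns an Array[String], which neither groupBy overload accepts directly, and even expanded with : _* it would still include "tag".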