Recombining / concatenating DataFrame rows in Spark

Jac*_*cob 5 scala dataframe apache-spark apache-spark-sql apache-spark-ml

I have a DataFrame that looks like this:

scala> data.show
+-----+---+---------+
|label| id| features|
+-----+---+---------+
|  1.0|  1|[1.0,2.0]|
|  0.0|  2|[5.0,6.0]|
|  1.0|  1|[3.0,4.0]|
|  0.0|  2|[7.0,8.0]|
+-----+---+---------+

I would like to regroup the features based on "id", so that I can get the following:

scala> data.show
+---------+---+-----------------+
|    label| id| features        |
+---------+---+-----------------+
|  1.0,1.0|  1|[1.0,2.0,3.0,4.0]|
|  0.0,0.0|  2|[5.0,6.0,7.0,8.0]|
+---------+---+-----------------+

Here is the code I used to generate the DataFrame above:

import org.apache.spark.mllib.linalg.Vectors

val rdd = sc.parallelize(List((1.0, 1, Vectors.dense(1.0, 2.0)), (0.0, 2, Vectors.dense(5.0, 6.0)), (1.0, 1, Vectors.dense(3.0, 4.0)), (0.0, 2, Vectors.dense(7.0, 8.0))))
val data = rdd.toDF("label", "id", "features")

I have been trying different things with both RDDs and DataFrames. The most "promising" approach so far has been to filter on "id":

data.filter($"id".equalTo(1))

+-----+---+---------+
|label| id| features|
+-----+---+---------+
|  1.0|  1|[1.0,2.0]|
|  1.0|  1|[3.0,4.0]|
+-----+---+---------+

But I am now stuck on two bottlenecks:

1) How do I automate the filtering for all the distinct values that "id" can take?

The following produces an error:

data.select("id").distinct.foreach(x => data.filter($"id".equalTo(x)))
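The `foreach` above fails because its closure runs on the executors, where the outer `data` DataFrame cannot be referenced: Spark does not allow one distributed operation to be nested inside another. The intended driver-side pattern (collect the distinct keys first, then filter per key) can be sketched with plain Scala collections; the `Rec` case class and sample rows below are hypothetical stand-ins for the DataFrame:

```scala
// Plain-Scala analogue of "collect distinct ids, then filter per id".
// In Spark, the analogous first step would be collecting the distinct
// ids back to the driver before building the per-id filters.
case class Rec(label: Double, id: Int, features: Array[Double])

val rows = Seq(
  Rec(1.0, 1, Array(1.0, 2.0)),
  Rec(0.0, 2, Array(5.0, 6.0)),
  Rec(1.0, 1, Array(3.0, 4.0)),
  Rec(0.0, 2, Array(7.0, 8.0))
)

// Driver-side list of distinct keys
val distinctIds = rows.map(_.id).distinct

// One filtered group per distinct id
val byId = distinctIds.map(id => id -> rows.filter(_.id == id)).toMap
```

As the accepted answer shows, though, grouping by key makes this per-id filtering loop unnecessary in the first place.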

2) How do I concatenate the "features" that share a given "id"? I have not tried much yet, since I am still stuck on 1).

Any suggestion is more than welcome.

Note: for clarity, "label" is always the same for every occurrence of a given "id". Sorry for the confusion; a simple extension of my task would also be to group the "label"s (example updated).

zer*_*323 6

I don't believe there is an efficient way to achieve what you want, and the additional ordering requirement doesn't make the situation any better. The cleanest way I can think of is to use groupByKey like this:

import org.apache.spark.mllib.linalg.{Vectors, Vector}
import org.apache.spark.sql.functions.monotonicallyIncreasingId
import org.apache.spark.sql.Row
import org.apache.spark.rdd.RDD


val pairs: RDD[((Double, Int), (Long, Vector))] = data
  // Add row identifiers so we can keep desired order
  .withColumn("uid", monotonicallyIncreasingId)
  // Create PairwiseRDD where (label, id) is a key
  // and (row-id, vector) is a value
  .map{case Row(label: Double, id: Int, v: Vector, uid: Long) => 
    ((label, id), (uid, v))}

val rows = pairs.groupByKey.mapValues(xs => {
  val vs = xs
    .toArray
    .sortBy(_._1) // Sort by row id to keep order
    .flatMap(_._2.toDense.values) // flatmap vector values

  Vectors.dense(vs) // return concatenated vectors 

}).map{case ((label, id), v) => (label, id, v)} // Reshape

val grouped = rows.toDF("label", "id", "features")

grouped.show

// +-----+---+-----------------+
// |label| id|         features|
// +-----+---+-----------------+
// |  0.0|  2|[5.0,6.0,7.0,8.0]|
// |  1.0|  1|[1.0,2.0,3.0,4.0]|
// +-----+---+-----------------+
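The group-sort-concatenate logic of the pipeline above can be modelled on plain Scala collections, which makes it easy to verify without a cluster. This is only a local sketch: the literal pairs below assume the question's sample data, with the row ids playing the role of `monotonicallyIncreasingId`:

```scala
// key = (label, id), value = (row-id, feature array),
// mirroring the PairwiseRDD built in the answer.
val pairs = Seq(
  ((1.0, 1), (0L, Array(1.0, 2.0))),
  ((0.0, 2), (1L, Array(5.0, 6.0))),
  ((1.0, 1), (2L, Array(3.0, 4.0))),
  ((0.0, 2), (3L, Array(7.0, 8.0)))
)

val grouped = pairs
  .groupBy(_._1)                    // local stand-in for groupByKey
  .map { case ((label, id), xs) =>
    val vs = xs.map(_._2)
      .sortBy(_._1)                 // sort by row id to keep order
      .flatMap(_._2)                // concatenate the vector values
    (label, id, vs.toVector)
  }
  .toSeq
```

The per-group `sortBy` is what the extra row identifier buys: `groupByKey` alone gives no guarantee about the order in which values arrive.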

It would also be possible to use a UDAF, similar to the one I proposed for SPARK SQL replacement for mysql GROUP_CONCAT aggregation function, but it would be even less efficient than this.
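For reference, what such a GROUP_CONCAT-style aggregate would compute can be sketched with a plain `foldLeft`, the way a UDAF's update step would append each incoming vector to a buffer. The data is again a hypothetical local stand-in for the question's DataFrame:

```scala
// Local model of a GROUP_CONCAT-like aggregation over feature arrays:
// each group's arrays are folded into one growing buffer.
val input = Seq(
  (1, Array(1.0, 2.0)), (2, Array(5.0, 6.0)),
  (1, Array(3.0, 4.0)), (2, Array(7.0, 8.0))
)

val concat = input
  .groupBy(_._1)
  .map { case (id, xs) =>
    // foldLeft mimics the per-row update step of a UDAF buffer
    id -> xs.foldLeft(Vector.empty[Double])((buf, x) => buf ++ x._2)
  }
```

Note that, unlike the groupByKey version above, nothing here enforces row order beyond the encounter order of the input, which is part of why a UDAF fits this problem poorly.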