Sorting an array of structs in a Spark DataFrame

Rap*_*oth 5 scala dataframe apache-spark

Consider the following DataFrame:

case class ArrayElement(id: Long, value: Double)

import spark.implicits._   // needed for .toDF when not running in the spark-shell

val df = Seq(
  Seq(
    ArrayElement(1L, -2.0), ArrayElement(2L, 1.0), ArrayElement(0L, 0.0)
  )
).toDF("arr")

df.printSchema

root
 |-- arr: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- id: long (nullable = false)
 |    |    |-- value: double (nullable = false)

Is there a way to sort arr by value other than using a UDF?

I have seen org.apache.spark.sql.functions.sort_array. What does this method actually do when the array elements are complex? Does it sort the array by the first field (i.e. id)?

Ram*_*jan 7

The Spark function's documentation says it "sorts the input array for the given column in ascending order, according to the natural ordering of the array elements."

Before I explain, let's look at some examples of what sort_array does.
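The tables below can be reproduced with a call like the following (a minimal sketch, not necessarily the answerer's exact code; the second and third tables assume input DataFrames built the same way but with the arr values shown in their rows):

import org.apache.spark.sql.functions.{col, sort_array}

// Keep the original array next to its sorted version for comparison.
df.select(col("arr"), sort_array(col("arr")).as("sorted")).show(false)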

+----------------------------+----------------------------+
|arr                         |sorted                      |
+----------------------------+----------------------------+
|[[1,-2.0], [2,1.0], [0,0.0]]|[[0,0.0], [1,-2.0], [2,1.0]]|
+----------------------------+----------------------------+

+----------------------------+----------------------------+
|arr                         |sorted                      |
+----------------------------+----------------------------+
|[[0,-2.0], [2,1.0], [0,0.0]]|[[0,-2.0], [0,0.0], [2,1.0]]|
+----------------------------+----------------------------+

+-----------------------------+-----------------------------+
|arr                          |sorted                       |
+-----------------------------+-----------------------------+
|[[0,-2.0], [2,1.0], [-1,0.0]]|[[-1,0.0], [0,-2.0], [2,1.0]]|
+-----------------------------+-----------------------------+

So sort_array orders the structs by comparing their first field and, when the first fields are equal, falling back to the second field, doing this for every element of the array in the given column.
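This is analogous to the lexicographic ordering of tuples in plain Scala, which compares the first component and only falls back to the second on ties (a small illustration, not part of the original answer):

// Tuples are ordered component by component, just like the struct elements above:
Seq((0, -2.0), (2, 1.0), (0, 0.0)).sorted
// => List((0,-2.0), (0,0.0), (2,1.0))  -- the tie on the first component is broken by the second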

I hope this makes it clear.