Tags: apache-spark, pyspark, pyspark-sql, pyspark-dataframes
I have a PySpark dataframe:
id | column
---+------------------------
1  | [0.2, 2, 3, 4, 3, 0.5]
2  | [7, 0.3, 0.3, 8, 2]
I want to create 3 new columns:

- Column 1: contains the sum of the elements < 2
- Column 2: contains the sum of the elements > 2
- Column 3: contains the sum of the elements = 2 (sometimes I have duplicate values, so I sum them)

If no elements match, I set the value to null. Expected result:
id | column                  | column<2 | column>2 | column=2
---+-------------------------+----------+----------+---------
1  | [0.2, 2, 3, 4, 3, 0.5]  | [0.7]    | [12]     | null
2  | [7, 0.3, 0.3, 8, 2]     | [0.6]    | [15]     | [2]
Can you help me? Thanks.
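For reference, a minimal sketch to reproduce the sample frame (the array element type is assumed to be double; the column names id and column come from the question):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sample data from the question; element type assumed to be double
df = spark.createDataFrame(
    [(1, [0.2, 2.0, 3.0, 4.0, 3.0, 0.5]),
     (2, [7.0, 0.3, 0.3, 8.0, 2.0])],
    ["id", "column"],
)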
For Spark 2.4+, you can use aggregate and filter higher-order functions like this:
df.withColumn("column<2", expr("aggregate(filter(column, x -> x < 2), 0D, (x, acc) -> acc + x)")) \
  .withColumn("column>2", expr("aggregate(filter(column, x -> x > 2), 0D, (x, acc) -> acc + x)")) \
  .withColumn("column=2", expr("aggregate(filter(column, x -> x == 2), 0D, (x, acc) -> acc + x)")) \
  .show(truncate=False)
Gives:
+---+------------------------------+--------+--------+--------+
|id |column                        |column<2|column>2|column=2|
+---+------------------------------+--------+--------+--------+
|1  |[0.2, 2.0, 3.0, 4.0, 3.0, 0.5]|0.7     |10.0    |2.0     |
|2  |[7.0, 0.3, 0.3, 8.0, 2.0]     |0.6     |15.0    |2.0     |
+---+------------------------------+--------+--------+--------+
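If you also need the null behavior asked for in the question (no matching element gives null instead of 0.0), one sketch, still assuming Spark 2.4+, is to wrap each aggregate in a CASE on the size of the same filtered array:

from pyspark.sql.functions import expr

# One predicate per output column; CASE ... END yields null when nothing matches
preds = {"column<2": "x < 2", "column>2": "x > 2", "column=2": "x == 2"}

for name, pred in preds.items():
    df = df.withColumn(
        name,
        expr(f"""CASE WHEN size(filter(column, x -> {pred})) > 0
                 THEN aggregate(filter(column, x -> {pred}), 0D, (acc, x) -> acc + x)
                 END"""),
    )
df.show(truncate=False)

The sums stay scalar here; wrap the THEN branch in array(...) if you want the bracketed single-element form shown in the question.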