I have a dataframe like this:
a = spark.createDataFrame([['Alice', '2020-03-03', '1'], ['Bob', '2020-03-03', '1'], ['Bob', '2020-03-05', '2']], ['name', 'dt', 'hits'])
a.show()
+-----+----------+----+
| name| dt|hits|
+-----+----------+----+
|Alice|2020-03-03| 1|
| Bob|2020-03-03| 1|
| Bob|2020-03-05| 2|
+-----+----------+----+
I want to group by name and aggregate the dt and hits columns into a map:
+-----+----------------------------------+
| name|                               map|
+-----+----------------------------------+
|Alice|                 {'2020-03-03': 1}|
|  Bob|{'2020-03-03': 1, '2020-03-05': 2}|
+-----+----------------------------------+
But this code throws an exception:
from pyspark.sql import functions as F
a = a.groupBy(F.col('name')).agg(F.create_map(F.col('dt'), F.col('hits')))
Py4JJavaError: An error occurred while calling o2920.agg.
: org.apache.spark.sql.AnalysisException: expression '`dt`' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.;;
Aggregate [name#1329], [name#1329, map(dt#1330, hits#1331) AS map(dt, hits)#1361]
+- LogicalRDD [name#1329, dt#1330, hits#1331], false
What am I doing wrong?
create_map is not an aggregate function, which is why Spark rejects it inside agg. For Spark 2.4+, you can collect each column into an array first and use map_from_arrays:
from pyspark.sql import functions as F

# collect_list gathers each group's dt values and hits values into arrays,
# which map_from_arrays then zips into a single map column
a.groupBy("name").agg(
    F.map_from_arrays(F.collect_list("dt"), F.collect_list("hits")).alias("map")
).show(truncate=False)
#+-----+----------------------------------+
#|name |map |
#+-----+----------------------------------+
#|Bob |[2020-03-03 -> 1, 2020-03-05 -> 2]|
#|Alice|[2020-03-03 -> 1] |
#+-----+----------------------------------+
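As a side note, here is a minimal alternative sketch (also assuming Spark 2.4+) using map_from_entries over a collected array of structs. Because each key travels with its value inside one struct, the pairing is preserved, whereas two separate collect_list calls give no guarantee that the two arrays come back in matching order:

from pyspark.sql import functions as F

# Collect (dt, hits) pairs as an array of structs, then turn that array into a map.
# Bundling key and value in one struct keeps them paired per row; map_from_entries
# uses the first struct field as the key and the second as the value.
a.groupBy("name").agg(
    F.map_from_entries(
        F.collect_list(F.struct("dt", "hits"))
    ).alias("map")
).show(truncate=False)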