I'm analyzing some data with PySpark DataFrames. Suppose I have a DataFrame, df, that I'm aggregating:
df.groupBy("group")\
.agg({"money":"sum"})\
.show(100)
This gives me:
group    SUM(money#2L)
A        137461285853
B        172185566943
C        271179590646
The aggregation works fine, but I don't like the new column name SUM(money#2L). Is there a neat way to rename this column into something human-readable from within the .agg method? Maybe something more like what one would do in dplyr:
df %>% group_by(group) %>% summarise(sum_money = sum(money))
With a dataframe like the following:
from pyspark.sql.functions import avg, first

rdd = sc.parallelize(
    [
        (0, "A", 223, "201603", "PORT"),
        (0, "A", 22, "201602", "PORT"),
        (0, "A", 422, "201601", "DOCK"),
        (1, "B", 3213, "201602", "DOCK"),
        (1, "B", 3213, "201601", "PORT"),
        (2, "C", 2321, "201601", "DOCK")
    ]
)
df_data = sqlContext.createDataFrame(rdd, ["id", "type", "cost", "date", "ship"])
df_data.show()
I do a pivot on it:
df_data.groupby(df_data.id, df_data.type).pivot("date").agg(avg("cost"), first("ship")).show()
+---+----+----------------+--------------------+----------------+--------------------+----------------+--------------------+
| id|type|201601_avg(cost)|201601_first(ship)()|201602_avg(cost)|201602_first(ship)()|201603_avg(cost)|201603_first(ship)()|
+---+----+----------------+--------------------+----------------+--------------------+----------------+--------------------+
| 2| C| 2321.0| DOCK| null| null| null| null|
| 0| A| 422.0| DOCK| 22.0| PORT| 223.0| PORT|
| 1| B| 3213.0| PORT| 3213.0| DOCK| null| null|
+---+----+----------------+--------------------+----------------+--------------------+----------------+--------------------+
But I get these very complicated names for the columns. alias usually works for aggregations, but because of the …
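For reference, a minimal sketch of the alias workaround for this pivot (assuming a Spark version that prefixes each pivot value to the aggregation's alias with an underscore):

# Aliasing each aggregation so the pivoted columns come out as
# e.g. 201601_avg_cost and 201601_first_ship instead of the
# generated expression strings such as 201601_first(ship)().
df_data.groupby("id", "type") \
    .pivot("date") \
    .agg(avg("cost").alias("avg_cost"), first("ship").alias("first_ship")) \
    .show()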