
PySpark: aggregating the mode (most frequent value) over a rolling window

I have a dataframe like the one below. I want to partition it by device and order each partition by start_time. Then, for each row, I want the most frequently occurring station within a window of the 3 most recent rows (the current row plus the two before it).

from pyspark.sql import SparkSession
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

columns = ['device', 'start_time', 'station']
data = [("Python", 1, "station_1"), ("Python", 2, "station_2"), ("Python", 3, "station_1"), ("Python", 4, "station_2"), ("Python", 5, "station_2"), ("Python", 6, None)]

test_df = spark.createDataFrame(data).toDF(*columns)

# Trailing window per device: the current row and the two preceding rows, ordered by start_time
rolling_w = Window.partitionBy('device').orderBy('start_time').rowsBetween(-2, 0)
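As a quick illustration (not part of the original post, and assuming the test_df and rolling_w defined above), collecting the station values each window covers makes the trailing 3-row span visible:

import pyspark.sql.functions as F

# Each row's list holds the current station plus up to two preceding ones (nulls are dropped)
test_df.withColumn('window_vals', F.collect_list('station').over(rolling_w)).show(truncate=False)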

Desired output:

The same dataframe with an added column (named most_freq here) holding the mode of station over each trailing 3-row window. At start_time 2 the window holds one of each station, so either value is a valid mode depending on tie-breaking; station_1 is shown:

+------+----------+---------+---------+
|device|start_time|  station|most_freq|
+------+----------+---------+---------+
|Python|         1|station_1|station_1|
|Python|         2|station_2|station_1|
|Python|         3|station_1|station_1|
|Python|         4|station_2|station_2|
|Python|         5|station_2|station_2|
|Python|         6|     null|station_2|
+------+----------+---------+---------+

Since PySpark has no mode() function, I do know how to get the most common value with a static groupby, like so, …
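The snippet above was truncated in the source. For completeness, a minimal sketch of the usual static pattern (an assumption, not the original poster's code): count each (device, station) pair, rank the counts per device, and keep the top row:

import pyspark.sql.functions as F
from pyspark.sql.window import Window

# Mode per device over the whole group: count, rank by count, keep rank 1
counts = test_df.groupBy('device', 'station').count()
rank_w = Window.partitionBy('device').orderBy(F.col('count').desc())
static_mode = (counts
               .withColumn('rn', F.row_number().over(rank_w))
               .filter(F.col('rn') == 1)
               .select('device', 'station'))

For the rolling case, one workaround (again a sketch, with ties resolved arbitrarily by Python's max) is to collect each window into a list and compute the mode in a small UDF:

from pyspark.sql.types import StringType

@F.udf(StringType())
def list_mode(values):
    # collect_list already drops nulls; guard against an empty list anyway
    if not values:
        return None
    # Most frequent element; ties broken arbitrarily
    return max(set(values), key=values.count)

result = test_df.withColumn(
    'most_freq', list_mode(F.collect_list('station').over(rolling_w))
)
result.show()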

group-by apache-spark apache-spark-sql rolling-computation pyspark
