I have a dataset consisting of a timestamp column and a dollars column. I would like to find the average number of dollars per week ending at the timestamp of each row. I was initially looking at the pyspark.sql.functions.window function, but that bins the data by week.
Here is an example:
%pyspark
import datetime
from pyspark.sql import functions as F
df1 = sc.parallelize([(17, "2017-03-11T15:27:18+00:00"),
                      (13, "2017-03-11T12:27:18+00:00"),
                      (21, "2017-03-17T11:27:18+00:00")]).toDF(["dollars", "datestring"])
df2 = df1.withColumn('timestampGMT', df1.datestring.cast('timestamp'))
w = df2.groupBy(F.window("timestampGMT", "7 days")).agg(F.avg("dollars").alias('avg'))
w.select(w.window.start.cast("string").alias("start"), w.window.end.cast("string").alias("end"), "avg").collect()
This results in two records:
| start                 | end                   | avg  |
|-----------------------|-----------------------|------|
| '2017-03-16 00:00:00' | '2017-03-23 00:00:00' | 21.0 |
| '2017-03-09 00:00:00' | '2017-03-16 00:00:00' | 15.0 |
The window function bucketed the time series data rather than performing a rolling average.
Is there a way to perform a rolling average where I'd get back a weekly average for each row, with the time period ending at the row's timestampGMT?
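For reference, a range-based window frame (rather than F.window) can anchor the window at each row's own timestamp. This is a minimal sketch, not necessarily the intended solution: it assumes df2 from the example above, reads "weekly" as the 7 days (7 × 86400 seconds) ending at each row, and, having no partitionBy, will pull all rows into a single partition:

%pyspark
from pyspark.sql import functions as F
from pyspark.sql.window import Window

# rangeBetween operates on the ORDER BY expression, so order by the
# timestamp cast to epoch seconds and look back 7 days, inclusive of
# the current row.
days = lambda i: i * 86400
w = (Window.orderBy(F.col("timestampGMT").cast("long"))
           .rangeBetween(-days(7), 0))

df2 = df2.withColumn("rolling_average", F.avg("dollars").over(w))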
Edit:
Zhang's answer is close to what I want, but not exactly what I'd like to see.
Here's a better example to show what I'm trying to get at:
%pyspark
from pyspark.sql import functions as F
from pyspark.sql.window import Window

df = spark.createDataFrame([(17, "2017-03-10T15:27:18+00:00"),
                            (13, "2017-03-15T12:27:18+00:00"),
                            (25, "2017-03-18T11:27:18+00:00")],
                           ["dollars", "timestampGMT"])
df = df.withColumn('timestampGMT', df.timestampGMT.cast('timestamp'))
df = df.withColumn('rolling_average', F.avg("dollars").over(Window.partitionBy(F.window("timestampGMT", …
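Under that reading (the average over the 7 days ending at each row's timestampGMT), the output I'd expect for this example is one average per row: 17.0 for the first row, (17 + 13) / 2 = 15.0 for the second, and (13 + 25) / 2 = 19.0 for the third, since 2017-03-10 falls outside the 7-day window ending 2017-03-18:

| dollars | timestampGMT        | rolling_average |
|---------|---------------------|-----------------|
| 17      | 2017-03-10 15:27:18 | 17.0            |
| 13      | 2017-03-15 12:27:18 | 15.0            |
| 25      | 2017-03-18 11:27:18 | 19.0            |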
I have a Spark dataframe containing sales prediction data for some products in some stores over a time period. How can I calculate a rolling sum of the predictions over a window size of the next N values?
Input data:
+-----------+---------+------------+------------+---+
| ProductId | StoreId | Date | Prediction | N |
+-----------+---------+------------+------------+---+
| 1 | 100 | 2019-07-01 | 0.92 | 2 |
| 1 | 100 | 2019-07-02 | 0.62 | 2 |
| 1 | 100 | 2019-07-03 | 0.89 | 2 |
| 1 | 100 | 2019-07-04 | 0.57 | 2 |
| 2 | 200 | 2019-07-01 | 1.39 | 3 |
| 2 | 200 | 2019-07-02 …
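Since the window length N varies per row, a fixed rowsBetween frame won't work directly. One possible sketch (an assumption, requiring Spark 2.4+ for the slice and aggregate higher-order functions, and reading "next N values" as the current row plus the following rows, N rows in total) is to collect the forward-looking predictions into an array per row and sum its first N elements:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Collect the current row's prediction and everything after it
# (per product and store, in date order) into an array.
w = (Window.partitionBy("ProductId", "StoreId")
           .orderBy("Date")
           .rowsBetween(Window.currentRow, Window.unboundedFollowing))

df = df.withColumn("future_preds", F.collect_list("Prediction").over(w))

# slice() keeps the first N elements of the array; aggregate() folds
# them into a sum, using the per-row N column inside the SQL expression.
df = (df.withColumn("rolling_sum",
                    F.expr("aggregate(slice(future_preds, 1, N), "
                           "cast(0 as double), (acc, x) -> acc + x)"))
        .drop("future_preds"))

For the first row of product 1 above this would give 0.92 + 0.62 = 1.54, i.e. the sum over a window of N = 2 rows starting at the row itself.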