pyspark: count distinct over a window

Bob Swain (15 upvotes) · tags: count, distinct-values, window-functions, pyspark

I just tried doing a countDistinct over a window and got this error:

AnalysisException: u'Distinct window functions are not supported: count(distinct color#1926)

Is there a way to do a count distinct over a window in pyspark?

Here is some example code:

from pyspark.sql.window import Window    
from pyspark.sql import functions as F

#function to calculate number of seconds from number of days
days = lambda i: i * 86400

df = spark.createDataFrame([(17, "2017-03-10T15:27:18+00:00", "orange"),
                            (13, "2017-03-15T12:27:18+00:00", "red"),
                            (25, "2017-03-18T11:27:18+00:00", "red")],
                           ["dollars", "timestampGMT", "color"])

df = df.withColumn('timestampGMT', df.timestampGMT.cast('timestamp'))

#create window by casting timestamp to long (number of seconds)
w = (Window.orderBy(F.col("timestampGMT").cast('long')).rangeBetween(-days(7), 0))

df = df.withColumn('distinct_color_count_over_the_last_week', F.countDistinct("color").over(w))

df.show()

Here is the output I'd like to see:

+-------+--------------------+------+---------------------------------------+
|dollars|        timestampGMT| color|distinct_color_count_over_the_last_week|
+-------+--------------------+------+---------------------------------------+
|     17|2017-03-10 15:27:...|orange|                                      1|
|     13|2017-03-15 12:27:...|   red|                                      2|
|     25|2017-03-18 11:27:...|   red|                                      1|
+-------+--------------------+------+---------------------------------------+

Bob Swain (39 upvotes)

I found that I could use a combination of the collect_set and size functions to mimic the functionality of countDistinct over a window:

from pyspark.sql.window import Window
from pyspark.sql import functions as F

#function to calculate number of seconds from number of days
days = lambda i: i * 86400

#create some test data
df = spark.createDataFrame([(17, "2017-03-10T15:27:18+00:00", "orange"),
                            (13, "2017-03-15T12:27:18+00:00", "red"),
                            (25, "2017-03-18T11:27:18+00:00", "red")],
                           ["dollars", "timestampGMT", "color"])

#convert string timestamp to timestamp type             
df = df.withColumn('timestampGMT', df.timestampGMT.cast('timestamp'))

#create window by casting timestamp to long (number of seconds)
w = (Window.orderBy(F.col("timestampGMT").cast('long')).rangeBetween(-days(7), 0))

#use collect_set and size functions to perform countDistinct over a window
df = df.withColumn('distinct_color_count_over_the_last_week', F.size(F.collect_set("color").over(w)))

df.show()

This produces the distinct count of colors over the previous week of records:

+-------+--------------------+------+---------------------------------------+
|dollars|        timestampGMT| color|distinct_color_count_over_the_last_week|
+-------+--------------------+------+---------------------------------------+
|     17|2017-03-10 15:27:...|orange|                                      1|
|     13|2017-03-15 12:27:...|   red|                                      2|
|     25|2017-03-18 11:27:...|   red|                                      1|
+-------+--------------------+------+---------------------------------------+

  • Interesting. The workaround I've been using is to do a groupBy with countDistinct in the aggregation, then join back to the original DataFrame on the grouped columns. I wonder which approach is more efficient on a large cluster? (2 upvotes)
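For reference, a minimal sketch of that commenter's groupBy-plus-join workaround, as I understand it. Note the assumptions: it buckets rows into fixed calendar weeks with date_trunc (available in Spark 2.3+) rather than reproducing the rolling 7-day range window exactly, and the names week, weekly_counts, and df_joined are illustrative:

from pyspark.sql import functions as F

# Bucket each row into a fixed calendar week (a stand-in for the rolling window)
df_bucketed = df.withColumn('week', F.date_trunc('week', 'timestampGMT'))

# Aggregate distinct colors per bucket with countDistinct (allowed in groupBy)
weekly_counts = (df_bucketed
                 .groupBy('week')
                 .agg(F.countDistinct('color').alias('distinct_color_count')))

# Join the per-bucket counts back onto the original rows
df_joined = df_bucketed.join(weekly_counts, on='week', how='left')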

nol*_*eto (9 upvotes)

@Bob Swain's answer is nice and works! Since then, starting with Spark version 2.1, Spark offers an equivalent to countDistinct, approx_count_distinct, which is more efficient to use and, most importantly, supports counting distinct over a window.

Here is the code to replace it with:

#approx_count_distinct supports a window
df = df.withColumn('distinct_color_count_over_the_last_week', F.approx_count_distinct("color").over(w))

For columns with small cardinality, the result is supposed to be the same as countDistinct. When the dataset grows a lot, you should consider adjusting the parameter rsd, the maximum estimation error allowed, which lets you tune the precision/performance trade-off.
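As a quick sketch, the error tolerance is passed via the rsd keyword argument of approx_count_distinct (its default is 0.05); the 0.01 used here is only an illustrative value:

# Tighter rsd gives a more accurate estimate at the cost of more memory
df = df.withColumn('distinct_color_count_over_the_last_week',
                   F.approx_count_distinct("color", rsd=0.01).over(w))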

  • "The result is supposed to be the same as countDistinct" - is there any guarantee of this? If I use the default rsd = 0.05, does that mean it will return the correct result 100% of the time for cardinality < 20? (3 upvotes)