mel*_*mel 6 python apache-spark pyspark
I have a dataframe whose first rows look like this:
['station_id', 'country', 'temperature', 'time']
['12', 'usa', '22', '12:04:14']
I want to display the average temperature, in descending order, of the top 100 stations in 'france'.
What would be the best (most efficient) way to do this in pyspark?
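For reference, a minimal sketch of how such a DataFrame could be constructed (the SparkSession setup and the second sample row are assumptions, added only to make the snippet self-contained; temperature is stored as a float here, though the question prints it as a string):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("station-temps").getOrCreate()
df = spark.createDataFrame(
    [("12", "usa", 22.0, "12:04:14"),       # row from the question
     ("13", "france", 18.5, "12:05:01")],   # hypothetical extra row
    ["station_id", "country", "temperature", "time"],
)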
mto*_*oto 12
Using the Spark SQL / DataFrame API, your query translates to:
from pyspark.sql.functions import mean, desc

(df
    .filter(df["country"] == "france")                 # only French stations
    .groupBy("station_id")                             # group by station
    .agg(mean("temperature").alias("average_temp"))    # calculate the average
    .orderBy(desc("average_temp"))                     # order by average, descending
    .take(100))                                        # return the first 100 rows
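Since the answer frames this as a Spark SQL translation, the same query can also be expressed as literal SQL against a temporary view; a sketch, where the view name "stations" is an arbitrary choice:

df.createOrReplaceTempView("stations")   # register df under an arbitrary name

spark.sql("""
    SELECT station_id, AVG(temperature) AS average_temp
    FROM stations
    WHERE country = 'france'
    GROUP BY station_id
    ORDER BY average_temp DESC
    LIMIT 100
""").show(100, truncate=False)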
Using the RDD API and anonymous functions:
(df.rdd
    .filter(lambda x: x[1] == "france")                       # only French stations
    .map(lambda x: (x[0], float(x[2])))                       # select station & temp; cast, since the sample stores temps as strings
    .mapValues(lambda x: (x, 1))                              # pair each temp with a count of 1
    .reduceByKey(lambda x, y: (x[0] + y[0], x[1] + y[1]))     # sum temps & counts per station
    .mapValues(lambda x: x[0] / x[1])                         # sum / count = average
    .sortBy(lambda x: x[1], ascending=False)                  # sort by average, descending
    .take(100))                                               # return the first 100 results
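On the efficiency question: the (sum, count) pair with reduceByKey is the standard single-pass way to compute a mean on an RDD, because partial sums are combined map-side instead of shuffling every temperature to one executor the way groupByKey would. That said, the DataFrame version above is usually the faster option, since it runs through the Catalyst optimizer and avoids the overhead of evaluating Python lambdas row by row.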