Iva*_*van 23 python apache-spark pyspark
I'm trying to group a Spark DataFrame by week and, for each group, count the unique values of one column:
test.json
{"name":"Yin", "address":1111111, "date":20151122045510}
{"name":"Yin", "address":1111111, "date":20151122045501}
{"name":"Yln", "address":1111111, "date":20151122045500}
{"name":"Yun", "address":1111112, "date":20151122065832}
{"name":"Yan", "address":1111113, "date":20160101003221}
{"name":"Yin", "address":1111111, "date":20160703045231}
{"name":"Yin", "address":1111114, "date":20150419134543}
{"name":"Yen", "address":1111115, "date":20151123174302}
And the code:
import pyspark.sql.functions as func
from pyspark.sql.types import TimestampType
from datetime import datetime
df_y = sqlContext.read.json("/user/test.json")
udf_dt = func.udf(lambda x: datetime.strptime(str(x), '%Y%m%d%H%M%S'), TimestampType())
df = df_y.withColumn('datetime', udf_dt(df_y.date))
df_g = df_y.groupby(func.hour(df_y.date))
df_g.count().distinct().show()
The result I get from PySpark is:
df_y.groupby(df_y.name).count().distinct().show()
+----+-----+
|name|count|
+----+-----+
| Yan| 1|
| Yun| 1|
| Yin| 4|
| Yen| 1|
| Yln| 1|
+----+-----+
Whereas what I expect with pandas is something like this:
df = df_y.toPandas()
df.groupby('name').address.nunique()
Out[51]:
name
Yan 1
Yen 1
Yin 2
Yln 1
Yun 1
How can I get the unique elements of each group by another field, for example the address?
Iva*_*van 35
There is a way to count the distinct elements of each group using the function countDistinct:
import pyspark.sql.functions as func
from pyspark.sql.types import TimestampType
from datetime import datetime
df_y = sqlContext.read.json("/user/test.json")
udf_dt = func.udf(lambda x: datetime.strptime(str(x), '%Y%m%d%H%M%S'), TimestampType())
df = df_y.withColumn('datetime', udf_dt(df_y.date))
df_g = df_y.groupby(func.hour(df_y.date))
df_y.groupby(df_y.name).agg(func.countDistinct('address')).show()
+----+--------------+
|name|count(address)|
+----+--------------+
| Yan| 1|
| Yun| 1|
| Yin| 2|
| Yen| 1|
| Yln| 1|
+----+--------------+
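By default the aggregated column is named count(address); if a friendlier name is needed, .alias() can be chained onto the aggregate. A small sketch (the name distinct_addresses is just an example):
import pyspark.sql.functions as func
# Rename the aggregated column so it is easier to reference in later steps.
df_y.groupby(df_y.name).agg(func.countDistinct('address').alias('distinct_addresses')).show()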
The documentation is available [here](https://spark.apache.org/docs/1.6.0/api/java/org/apache/spark/sql/functions.html#countDistinct(org.apache.spark.sql.Column,%20org.apache.spark.sql.Column...)).
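To come back to the week-based grouping from the question, the parsed datetime column can be combined with weekofyear. A minimal sketch, assuming the same test.json and the df/udf_dt defined above:
import pyspark.sql.functions as func
# Group by calendar week of the parsed timestamp and count distinct addresses per week.
df.groupBy(func.weekofyear(df.datetime).alias('week')) \
  .agg(func.countDistinct('address').alias('distinct_addresses')) \
  .show()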
A concise, direct answer for grouping by field "_c1" and counting the number of distinct values in field "_c2":
import pyspark.sql.functions as F
dg = df.groupBy("_c1").agg(F.countDistinct("_c2"))
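If the data is very large and an exact count is not required, pyspark.sql.functions also provides an approximate variant, approxCountDistinct (renamed approx_count_distinct in later Spark releases), used the same way. A hedged sketch with the same hypothetical columns:
import pyspark.sql.functions as F
# Approximate distinct count (HyperLogLog-based) - cheaper than an exact count on large data.
dg_approx = df.groupBy("_c1").agg(F.approxCountDistinct("_c2"))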