I am trying to standardize the values of multiple columns in a Spark DataFrame by subtracting each column's mean and dividing by its standard deviation. Here is the code I have so far:
from pyspark.sql import Row
from pyspark.sql.functions import stddev_pop, avg
df = spark.createDataFrame([Row(A=1, B=6), Row(A=2, B=7), Row(A=3, B=8),
                            Row(A=4, B=9), Row(A=5, B=10)])
exprs = [x - (avg(x)) / stddev_pop(x) for x in df.columns]
df.select(exprs).show()
This gives me the result:
+------------------------------+------------------------------+
|(A - (avg(A) / stddev_pop(A)))|(B - (avg(B) / stddev_pop(B)))|
+------------------------------+------------------------------+
| null| null|
+------------------------------+------------------------------+
where I was hoping for:
+------------------------------+------------------------------+
|(A - (avg(A) / stddev_pop(A)))|(B - (avg(B) / stddev_pop(B)))|
+------------------------------+------------------------------+
| -1.414213562| -1.414213562|
| -0.707106781| -0.707106781|
| 0| 0|
| 0.707106781| 0.707106781|
| 1.414213562| 1.414213562|
+------------------------------+------------------------------+
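The single null row is most likely an artifact of how the expression is built on the Python side (my reading, not confirmed in the thread): each `x` in the comprehension is a plain string, so `x - (...)` falls through to `Column.__rsub__`, which wraps the string as a literal rather than a column reference, and the literal "A" cast to double is null. Because the projection contains aggregates with no groupBy, Spark runs a global aggregation and returns one row. There is also a precedence slip: the code computes `x - (avg(x) / stddev_pop(x))` rather than `(x - avg(x)) / stddev_pop(x)`. A sketch of the corrected expression, using `col` to force a column reference:

from pyspark.sql.functions import col, avg, stddev_pop

# (col(x) - avg(x)) / stddev_pop(x) is the intended standardization.
exprs = [(col(x) - avg(x)) / stddev_pop(x) for x in df.columns]

# This still cannot run as a plain select -- Spark raises an
# AnalysisException because per-row columns and aggregates cannot be
# mixed without a grouping -- which is what motivates the join below.
# df.select(exprs).show()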
I believe I could do this with the StandardScaler class from MLlib, but I would prefer to do it using only the DataFrame API where possible, if only as a learning exercise.
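For comparison, a minimal sketch of the MLlib route (assuming pyspark.ml.feature; note that StandardScaler divides by the sample standard deviation, so the values come out slightly different from the stddev_pop version):

from pyspark.ml.feature import VectorAssembler, StandardScaler

# StandardScaler operates on a single vector column, so pack the
# numeric columns into one first.
assembler = VectorAssembler(inputCols=df.columns, outputCol="features")
assembled = assembler.transform(df)

# withMean=True centers each feature; withStd=True scales by the
# sample standard deviation.
scaler = StandardScaler(inputCol="features", outputCol="scaled",
                        withMean=True, withStd=True)
scaler.fit(assembled).transform(assembled).select("scaled").show(truncate=False)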
With the help of the answer here, I came up with the following:
from pyspark.sql.functions import stddev_pop, avg, broadcast

cols = df.columns

# Compute the global mean and population stddev of every column in one
# aggregation, aliased so they can be referenced after the join.
stats = (df.groupBy().agg(
    *([stddev_pop(x).alias(x + '_stddev') for x in cols] +
      [avg(x).alias(x + '_avg') for x in cols])))

# stats is a single row, so a broadcast (cross) join attaches the
# statistics to every row of df cheaply.
df = df.join(broadcast(stats))

exprs = [(df[x] - df[x + '_avg']) / df[x + '_stddev'] for x in cols]
df.select(exprs).show()
+------------------------+------------------------+
|((A - A_avg) / A_stddev)|((B - B_avg) / B_stddev)|
+------------------------+------------------------+
| -1.414213562373095| -1.414213562373095|
| -0.7071067811865475| -0.7071067811865475|
| 0.0| 0.0|
| 0.7071067811865475| 0.7071067811865475|
| 1.414213562373095| 1.414213562373095|
+------------------------+------------------------+
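The same result can be had without the explicit join by computing the aggregates over an empty window, so every row sees the global statistics (a sketch; Window.partitionBy() with no arguments puts all rows in one window):

from pyspark.sql import Window
from pyspark.sql.functions import col, avg, stddev_pop

# With no partitioning columns, avg and stddev_pop are evaluated over
# the entire frame and repeated on every row.
w = Window.partitionBy()
exprs = [((col(x) - avg(x).over(w)) / stddev_pop(x).over(w)).alias(x)
         for x in df.columns]
df.select(exprs).show()

Since the empty window forces all rows through a single partition, the broadcast-join version above may scale better on large data.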