How can I sum multiple columns in Spark? For example, in SparkR the following code works to get the sum of one column, but if I try to get the sum of both columns in df, I get an error.
# Create SparkDataFrame
df <- createDataFrame(faithful)
# Use agg to sum total waiting times
head(agg(df, totalWaiting = sum(df$waiting)))
##This works
# Use agg to sum total of waiting and eruptions
head(agg(df, total = sum(df$waiting, df$eruptions)))
##This doesn't work
Either SparkR or PySpark code would work.
For PySpark, if you don't want to type out the columns explicitly:
from operator import add
from functools import reduce
from pyspark.sql import functions as F

# numeric_col_list holds the names of the columns to sum
new_df = df.withColumn('total', reduce(add, [F.col(x) for x in numeric_col_list]))
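A minimal usage sketch (the DataFrame and column names here are illustrative, not from the original answer):

from operator import add
from functools import reduce
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 10), (2, 20)], ["waiting", "eruptions"])

numeric_col_list = ["waiting", "eruptions"]
new_df = df.withColumn("total", reduce(add, [F.col(x) for x in numeric_col_list]))
new_df.show()
# +-------+---------+-----+
# |waiting|eruptions|total|
# +-------+---------+-----+
# |      1|       10|   11|
# |      2|       20|   22|
# +-------+---------+-----+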
You can do the following in PySpark:
>>> from pyspark.sql import functions as F
>>> df = spark.createDataFrame([("a",1,10), ("b",2,20), ("c",3,30), ("d",4,40)], ["col1", "col2", "col3"])
>>> df.groupBy("col1").agg(F.sum(df.col2+df.col3)).show()
+----+------------------+
|col1|sum((col2 + col3))|
+----+------------------+
| d| 44|
| c| 33|
| b| 22|
| a| 11|
+----+------------------+
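If you want a single grand total rather than per-group sums, the same expression works with agg directly; a minimal sketch using the same df and F as above (the alias "total" is just illustrative):

>>> df.agg(F.sum(df.col2 + df.col3).alias("total")).show()
+-----+
|total|
+-----+
|  110|
+-----+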
Looking at the signature of Spark's sum function:

org.apache.spark.sql.functions.sum(Column e)

Aggregate function: returns the sum of all values in the expression.
As you can see, sum takes only one column as input, so sum(df$waiting, df$eruptions) won't work. Since you want to sum up the numeric fields, you can use sum(df("waiting") + df("eruptions")). If you instead want to sum the values of each column individually, you can use df.agg(sum(df$waiting), sum(df$eruptions)).show().
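For completeness, a minimal PySpark sketch of both options (assuming a DataFrame df with numeric waiting and eruptions columns; the names are illustrative):

from pyspark.sql import functions as F

# Single total of the two columns added together
df.agg(F.sum(F.col("waiting") + F.col("eruptions"))).show()

# Separate total for each column
df.agg(F.sum("waiting"), F.sum("eruptions")).show()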