Spark has the SQL function percentile_approx(), and its Scala counterpart is df.stat.approxQuantile().

However, the Scala counterpart cannot be used on grouped datasets, e.g. df.groupBy("foo").stat.approxQuantile(), as answered here: https://stackoverflow.com/a/51933027.

But grouping and percentiles can be combined in SQL syntax. So I am wondering: is it possible to define a UDF from the SQL percentile_approx function and use it on a grouped dataset?
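For context, the SQL form where grouping and percentile_approx do work together looks roughly like this (a sketch; "df" as a registered temp view and the "foo"/"value" column names are placeholders, not from the original question):

```scala
// Grouping + percentile_approx combined in plain SQL syntax.
// "df" is assumed to be a registered temp view with placeholder
// columns "foo" (group key) and "value" (numeric).
df.createOrReplaceTempView("df")
spark.sql("SELECT foo, percentile_approx(value, 0.5) FROM df GROUP BY foo").show()
```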
While you cannot use approxQuantile in a UDF, and there is no Scala wrapper for percentile_approx, it is not hard to implement one yourself:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.Column
import org.apache.spark.sql.catalyst.expressions.aggregate.ApproximatePercentile

object PercentileApprox {
  def percentile_approx(col: Column, percentage: Column, accuracy: Column): Column = {
    val expr = new ApproximatePercentile(
      col.expr, percentage.expr, accuracy.expr
    ).toAggregateExpression
    new Column(expr)
  }

  def percentile_approx(col: Column, percentage: Column): Column = percentile_approx(
    col, percentage, lit(ApproximatePercentile.DEFAULT_PERCENTILE_ACCURACY)
  )
}
Example usage:
import PercentileApprox._
val df = (Seq.fill(100)("a") ++ Seq.fill(100)("b")).toDF("group").withColumn(
  "value", when($"group" === "a", randn(1) + 10).otherwise(randn(3))
)
df.groupBy($"group").agg(percentile_approx($"value", lit(0.5))).show
+-----+------------------------------------+
|group|percentile_approx(value, 0.5, 10000)|
+-----+------------------------------------+
| b| -0.06336346702250675|
| a| 9.818985618591595|
+-----+------------------------------------+
df.groupBy($"group").agg(percentile_approx($"value", typedLit(Seq(0.1, 0.25, 0.75, 0.9)))).show(false)
+-----+----------------------------------------------------------------------------------+
|group|percentile_approx(value, [0.1,0.25,0.75,0.9], 10000) |
+-----+----------------------------------------------------------------------------------+
|b |[-1.2098351202406483, -0.6640768986666159, 0.6778253126144265, 1.3255676906697658]|
|a |[8.902067202468098, 9.290417382259626, 10.41767257153993, 11.067087075488068] |
+-----+----------------------------------------------------------------------------------+
Once this is on the JVM classpath, you can also add a PySpark wrapper, using logic similar to the built-in functions.
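A minimal sketch of what such a PySpark wrapper could look like, assuming the PercentileApprox object above has been compiled and placed on the driver's JVM classpath (the py4j access path and object name are assumptions, mirroring how pyspark.sql.functions wraps its built-in Scala functions):

```python
from pyspark import SparkContext
from pyspark.sql.column import Column, _to_java_column

def percentile_approx(col, percentage, accuracy):
    """Sketch of a PySpark wrapper for the Scala PercentileApprox object.

    Assumes the compiled Scala object is on the JVM classpath; Scala
    objects expose static forwarder methods reachable through sc._jvm.
    """
    sc = SparkContext._active_spark_context
    jc = sc._jvm.PercentileApprox.percentile_approx(
        _to_java_column(col), _to_java_column(percentage), _to_java_column(accuracy)
    )
    return Column(jc)
```

The wrapper converts each PySpark Column to its underlying Java Column, calls the Scala function over py4j, and wraps the returned Java Column back into a PySpark Column, which is the same pattern the built-in function wrappers follow.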