In a recent SO post I found that using withColumn may improve the DAG when dealing with stacked/chained column expressions in combination with distinct window specifications. However, in this example withColumn actually makes the DAG worse, the opposite of the result obtained with select.
First, some test data (PySpark 2.4.4 standalone):
import pandas as pd
import numpy as np
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F
spark = SparkSession.builder.getOrCreate()
dfp = pd.DataFrame(
{
"col1": np.random.randint(0, 5, size=100),
"col2": np.random.randint(0, 5, size=100),
"col3": np.random.randint(0, 5, size=100),
"col4": np.random.randint(0, 5, size=100),
"col5": np.random.randint(0, 5, size=100),
}
)
df = spark.createDataFrame(dfp)
df.show(5)
+----+----+----+----+----+
|col1|col2|col3|col4|col5|
+----+----+----+----+----+
| 0| 3| 2| 2| 2|
| 1| 3| 3| 2| 4|
| 0| 0| 3| 3| 2|
| 3| 0| 1| 4| 4|
| 4| 0| 3| 3| 3|
+----+----+----+----+----+
only showing top 5 rows
The example is simple. It contains 2 window specifications and 4 independent column expressions based on them:
w1 = Window.partitionBy("col1").orderBy("col2")
w2 = Window.partitionBy("col3").orderBy("col4")
col_w1_1 = F.max("col5").over(w1).alias("col_w1_1")
col_w1_2 = F.sum("col5").over(w1).alias("col_w1_2")
col_w2_1 = F.max("col5").over(w2).alias("col_w2_1")
col_w2_2 = F.sum("col5").over(w2).alias("col_w2_2")
expr = [col_w1_1, col_w1_2, col_w2_1, col_w2_2]
If withColumn is used with alternating window specifications, the DAG creates unnecessary shuffles:
df.withColumn("col_w1_1", col_w1_1)\
.withColumn("col_w2_1", col_w2_1)\
.withColumn("col_w1_2", col_w1_2)\
.withColumn("col_w2_2", col_w2_2)\
.explain()
== Physical Plan ==
Window [sum(col5#92L) windowspecdefinition(col3#90L, col4#91L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS col_w2_2#147L], [col3#90L], [col4#91L ASC NULLS FIRST]
+- *(4) Sort [col3#90L ASC NULLS FIRST, col4#91L ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(col3#90L, 200)
+- Window [sum(col5#92L) windowspecdefinition(col1#88L, col2#89L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS col_w1_2#143L], [col1#88L], [col2#89L ASC NULLS FIRST]
+- *(3) Sort [col1#88L ASC NULLS FIRST, col2#89L ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(col1#88L, 200)
+- Window [max(col5#92L) windowspecdefinition(col3#90L, col4#91L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS col_w2_1#145L], [col3#90L], [col4#91L ASC NULLS FIRST]
+- *(2) Sort [col3#90L ASC NULLS FIRST, col4#91L ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(col3#90L, 200)
+- Window [max(col5#92L) windowspecdefinition(col1#88L, col2#89L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS col_w1_1#141L], [col1#88L], [col2#89L ASC NULLS FIRST]
+- *(1) Sort [col1#88L ASC NULLS FIRST, col2#89L ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(col1#88L, 200)
+- Scan ExistingRDD[col1#88L,col2#89L,col3#90L,col4#91L,col5#92L]
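One way to probe whether the alternation itself is the trigger is to group the withColumn calls by window specification. This is only a minimal sketch using the same expressions as above; whether Catalyst actually collapses the adjacent Window operators here is an assumption to be verified with explain():
# Hypothetical variant (not shown in the plans above): group the withColumn
# calls so that all expressions over the same window specification are
# adjacent. If Catalyst can collapse adjacent Window operators with identical
# specs, this plan should contain fewer exchanges than the alternating
# version -- explain() will tell.
df.withColumn("col_w1_1", col_w1_1)\
  .withColumn("col_w1_2", col_w1_2)\
  .withColumn("col_w2_1", col_w2_1)\
  .withColumn("col_w2_2", col_w2_2)\
  .explain()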
If all columns are added with select, the DAG is correct:
df.select("*", *expr).explain()
== Physical Plan ==
Window [max(col5#92L) windowspecdefinition(col3#90L, col4#91L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS col_w2_1#119L, sum(col5#92L) windowspecdefinition(col3#90L, col4#91L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS col_w2_2#121L], [col3#90L], [col4#91L ASC NULLS FIRST]
+- *(2) Sort [col3#90L ASC NULLS FIRST, col4#91L ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(col3#90L, 200)
+- Window [max(col5#92L) windowspecdefinition(col1#88L, col2#89L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS col_w1_1#115L, sum(col5#92L) windowspecdefinition(col1#88L, col2#89L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS col_w1_2#117L], [col1#88L], [col2#89L ASC NULLS FIRST]
+- *(1) Sort [col1#88L ASC NULLS FIRST, col2#89L ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(col1#88L, 200)
+- Scan ExistingRDD[col1#88L,col2#89L,col3#90L,col4#91L,col5#92L]
There is some existing information on why withColumn should be avoided, but it mostly concerns calling withColumn many times and does not address the issue of diverging DAGs (see here and here). Does anyone know why the DAG differs between withColumn and select? Spark's optimization algorithms should apply in either case and should not depend on different ways of expressing exactly the same thing.
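For completeness, a quick sanity check that both variants really do express the same data, even though the physical plans differ. This is only a sketch; it assumes DataFrame.exceptAll, which is available since Spark 2.4:
# Sanity check (sketch): both variants should return exactly the same rows.
df_wc = df.withColumn("col_w1_1", col_w1_1)\
    .withColumn("col_w2_1", col_w2_1)\
    .withColumn("col_w1_2", col_w1_2)\
    .withColumn("col_w2_2", col_w2_2)
# align column order before comparing
df_sel = df.select("*", *expr).select(*df_wc.columns)
assert df_wc.exceptAll(df_sel).count() == 0
assert df_sel.exceptAll(df_wc).count() == 0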
Thanks in advance.