Spark truncates the plan

Ale*_*ire 8 scala apache-spark apache-spark-sql

I'm running into the following problem: when I print the execution plan, I can't see all of the pushed filters.

The code being executed is:

println(df.queryExecution.executedPlan.treeString(true))

The whole plan is printed, but the PushedFilters field looks like this:

 PushedFilters: [IsNotNull(X1), IsNotNull(X2), IsNotNull(X2), IsNotNull(X3..., ReadSchema: 

As you can see, it is not printed in full. To work around this, I changed the following property in spark-defaults.conf:

spark.debug.maxToStringFields    120000

Unfortunately, that did not solve the problem.
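
For completeness, here is the equivalent programmatic setting I used (a sketch; the app name is illustrative):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("plan-debug")  // illustrative name
  .config("spark.debug.maxToStringFields", "120000")
  .getOrCreate()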

Any suggestions on how to get around this?

ofo*_*ofo 5

This is currently hardcoded [12] to a maximum of 100 characters as of Spark 3.0.1, but it was fixed recently with the newly introduced configuration key spark.sql.maxMetadataStringLength, which defaults to 100.
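
A minimal sketch of using the new key, assuming Spark 3.1+ (where it was introduced) and an active session named spark; the value 1000 is an arbitrary example. Unlike spark.debug.maxToStringFields, which caps how many fields of a sequence are rendered before "... N more fields", this key caps the length of each metadata string such as PushedFilters:

// Raise the per-entry metadata limit (default 100 characters),
// then reprint the plan.
spark.conf.set("spark.sql.maxMetadataStringLength", "1000")
println(df.queryExecution.executedPlan.treeString(true))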


Nee*_*nad 1

You can use df.explain(true), which will output the whole plan:

== Parsed Logical Plan ==
'SerializeFromObject [validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, x), IntegerType) AS x#67, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 1, y), IntegerType) AS y#68, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 2, z), IntegerType) AS z#69]
+- 'MapElements <function1>, interface org.apache.spark.sql.Row, [StructField(x,IntegerType,false), StructField(y,IntegerType,false), StructField(z,IntegerType,false)], obj#66: org.apache.spark.sql.Row
   +- 'DeserializeToObject unresolveddeserializer(createexternalrow(getcolumnbyordinal(0, IntegerType), getcolumnbyordinal(1, IntegerType), getcolumnbyordinal(2, IntegerType), StructField(x,IntegerType,false), StructField(y,IntegerType,false), StructField(z,IntegerType,false))), obj#65: org.apache.spark.sql.Row
      +- Filter isnull(y#9)
         +- Filter (x#8 = 0)
            +- Project [_1#4 AS x#8, _2#5 AS y#9, _3#6 AS z#10]
               +- SerializeFromObject [assertnotnull(assertnotnull(input[0, scala.Tuple3, true]))._1 AS _1#4, assertnotnull(assertnotnull(input[0, scala.Tuple3, true]))._2 AS _2#5, assertnotnull(assertnotnull(input[0, scala.Tuple3, true]))._3 AS _3#6]
                  +- ExternalRDD [obj#3]

== Analyzed Logical Plan ==
x: int, y: int, z: int
SerializeFromObject [validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, x), IntegerType) AS x#67, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 1, y), IntegerType) AS y#68, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 2, z), IntegerType) AS z#69]
+- MapElements <function1>, interface org.apache.spark.sql.Row, [StructField(x,IntegerType,false), StructField(y,IntegerType,false), StructField(z,IntegerType,false)], obj#66: org.apache.spark.sql.Row
   +- DeserializeToObject createexternalrow(x#8, y#9, z#10, StructField(x,IntegerType,false), StructField(y,IntegerType,false), StructField(z,IntegerType,false)), obj#65: org.apache.spark.sql.Row
      +- Filter isnull(y#9)
         +- Filter (x#8 = 0)
            +- Project [_1#4 AS x#8, _2#5 AS y#9, _3#6 AS z#10]
               +- SerializeFromObject [assertnotnull(assertnotnull(input[0, scala.Tuple3, true]))._1 AS _1#4, assertnotnull(assertnotnull(input[0, scala.Tuple3, true]))._2 AS _2#5, assertnotnull(assertnotnull(input[0, scala.Tuple3, true]))._3 AS _3#6]
                  +- ExternalRDD [obj#3]

== Optimized Logical Plan ==
SerializeFromObject [validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, x), IntegerType) AS x#67, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 1, y), IntegerType) AS y#68, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 2, z), IntegerType) AS z#69]
+- MapElements <function1>, interface org.apache.spark.sql.Row, [StructField(x,IntegerType,false), StructField(y,IntegerType,false), StructField(z,IntegerType,false)], obj#66: org.apache.spark.sql.Row
   +- DeserializeToObject createexternalrow(x#8, y#9, z#10, StructField(x,IntegerType,false), StructField(y,IntegerType,false), StructField(z,IntegerType,false)), obj#65: org.apache.spark.sql.Row
      +- LocalRelation <empty>, [x#8, y#9, z#10]

== Physical Plan ==
*(1) SerializeFromObject [validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, x), IntegerType) AS x#67, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 1, y), IntegerType) AS y#68, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 2, z), IntegerType) AS z#69]
+- *(1) MapElements <function1>, obj#66: org.apache.spark.sql.Row
   +- *(1) DeserializeToObject createexternalrow(x#8, y#9, z#10, StructField(x,IntegerType,false), StructField(y,IntegerType,false), StructField(z,IntegerType,false)), obj#65: org.apache.spark.sql.Row
      +- LocalTableScan <empty>, [x#8, y#9, z#10]
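In Spark 3.0+, explain also accepts a mode string; a short sketch (note that the formatted output may still be subject to the metadata truncation discussed above):

df.explain("extended")   // all four plans, same as explain(true)
df.explain("formatted")  // physical plan outline plus per-node details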

  • Because this doesn't print all of the pushed filters either. Also, I don't see any pushed filters in your physical plan. (6 upvotes)