I've been investigating how Spark stores statistics (min/max) in Parquet and how it uses that information for query optimization. I have a few questions. First the setup: Spark 2.1.0, and below I create a 1000-row DataFrame with one long column and one string column. The two writes are sorted by different columns, however.
scala> spark.sql("select id, cast(id as string) text from range(1000)").sort("id").write.parquet("/secret/spark21-sortById")
scala> spark.sql("select id, cast(id as string) text from range(1000)").sort("Text").write.parquet("/secret/spark21-sortByText")
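To check whether Spark actually hands these predicates to the Parquet reader, one way (a sketch, assuming the output path from the write above) is to read the files back with a filter and inspect the physical plan:

```scala
// Sketch: read the sorted output back and look at which predicates Spark
// pushes down to the Parquet data source (path assumed from the write above).
val df = spark.read.parquet("/secret/spark21-sortById")
df.filter("id < 5").explain()
// The FileScan line of the physical plan lists the pushed predicates, e.g.
// PushedFilters: [IsNotNull(id), LessThan(id,5)]. Whether row groups are
// then skipped using the min/max statistics happens inside parquet-mr,
// not in Spark itself.
```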
I added some code to parquet-tools to print out the statistics, and inspected the generated Parquet files:
hadoop jar parquet-tools-1.9.1-SNAPSHOT.jar meta /secret/spark21-sortById/part-00000-39f7ac12-6038-46ee-b5c3-d7a5a06e4425.snappy.parquet
file: file:/secret/spark21-sortById/part-00000-39f7ac12-6038-46ee-b5c3-d7a5a06e4425.snappy.parquet
creator: parquet-mr version 1.8.1 (build 4aba4dae7bb0d4edbcf7923ae1339f28fd3f7fcf)
extra: org.apache.spark.sql.parquet.row.metadata = {"type":"struct","fields":[{"name":"id","type":"long","nullable":false,"metadata":{}},{"name":"text","type":"string","nullable":false,"metadata":{}}]}
file schema: spark_schema
--------------------------------------------------------------------------------
id: REQUIRED INT64 R:0 D:0
text: REQUIRED BINARY O:UTF8 R:0 D:0
row group 1: RC:5 TS:133 OFFSET:4
--------------------------------------------------------------------------------
id: INT64 SNAPPY DO:0 FPO:4 SZ:71/81/1.14 VC:5 ENC:PLAIN,BIT_PACKED STA:[min: 0, max: 4, num_nulls: 0]
text: BINARY SNAPPY DO:0 FPO:75 SZ:53/52/0.98 …