How to set spark.sql.parquet.output.committer.class in pyspark

KFB 5 · python apache-spark parquet pyspark pyspark-sql

I'm trying to set spark.sql.parquet.output.committer.class, and nothing I do seems to make the setting take effect.

I'm trying to have many threads write to the same output folder, which should work with org.apache.spark.sql.parquet.DirectParquetOutputCommitter because it doesn't use the _temporary folder. I'm getting the following error, which is how I know it isn't working:

Caused by: java.io.FileNotFoundException: File hdfs://path/to/stuff/_temporary/0/task_201606281757_0048_m_000029/some_dir does not exist.
        at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:795)
        at org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:106)
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:853)
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:849)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:849)
        at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:382)
        at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:384)
        at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:326)
        at org.apache.parquet.hadoop.ParquetOutputCommitter.commitJob(ParquetOutputCommitter.java:46)
        at org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitJob(WriterContainer.scala:230)
        at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:151)

Note the call to org.apache.parquet.hadoop.ParquetOutputCommitter.commitJob, the default class.
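One way to sanity-check which value the running job actually sees is to read the key back from the JVM-side Hadoop configuration (a diagnostic sketch; from Python, Configuration.get returns None if the key was never set at that layer):

    # Diagnostic sketch: read the committer setting back from the
    # JVM-side Hadoop configuration used for the write.
    committer = sc._jsc.hadoopConfiguration().get(
        "spark.sql.parquet.output.committer.class")
    print("committer class in Hadoop conf:", committer)  # None if unset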

Based on other SO answers and searches, I've tried the following (a sketch of the in-code attempts follows the list):

  1. sc._jsc.hadoopConfiguration().set(key, val) (this works for settings like parquet.enable.summary-metadata)
  2. dataframe.write.option(key, val).parquet
  3. Adding --conf "spark.hadoop.spark.sql.parquet.output.committer.class=org.apache.spark.sql.parquet.DirectParquetOutputCommitter" to the spark-submit call
  4. Adding --conf "spark.sql.parquet.output.committer.class"="org.apache.spark.sql.parquet.DirectParquetOutputCommitter" to the spark-submit call
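
For reference, here is what the first two attempts look like in PySpark (a sketch; df and the output path are placeholders):

    # Sketch of attempts 1 and 2 from the list above.
    committer = "org.apache.spark.sql.parquet.DirectParquetOutputCommitter"

    # Attempt 1: set the key on the JVM-side Hadoop configuration. This
    # mechanism works for keys like parquet.enable.summary-metadata, but
    # the committer setting still doesn't take effect.
    sc._jsc.hadoopConfiguration().set(
        "spark.sql.parquet.output.committer.class", committer)

    # Attempt 2: pass the key as a DataFrameWriter option.
    df.write.option(
        "spark.sql.parquet.output.committer.class", committer
    ).parquet("hdfs://path/to/output")  # placeholder path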

That's everything I've been able to find, and nothing works. It doesn't look hard to do in Scala, but it seems impossible in Python.

KFB 4

The approach from this comment definitely worked for me:

16/06/28 18:49:59 INFO ParquetRelation: Using user defined output committer for Parquet: org.apache.spark.sql.execution.datasources.parquet.DirectParquetOutputCommitter
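Note the fully qualified name in that log line: the committer lives under org.apache.spark.sql.execution.datasources.parquet, not org.apache.spark.sql.parquet as in my attempts above. As a sketch of how that could be wired up (assuming the hadoopConfiguration route from attempt 1, just with the package path the log reports):

    # Sketch: same mechanism as attempt 1 above, but with the fully
    # qualified class name that the INFO line reports.
    sc._jsc.hadoopConfiguration().set(
        "spark.sql.parquet.output.committer.class",
        "org.apache.spark.sql.execution.datasources.parquet"
        ".DirectParquetOutputCommitter")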

That log line had been lost in the flood of output Spark produces, and it was unrelated to the error I was seeing. In any case, it's all moot now, because DirectParquetOutputCommitter has been removed from Spark.