How to use CompressionCodec in Hadoop

Piy*_*sal 2 java compression hadoop mapreduce

I am doing the following to compress the output file from my reducer:

OutputStream out = ipFs.create( new Path( opDir + "/" + fileName ) );
CompressionCodec codec = new GzipCodec(); 
OutputStream cs = codec.createOutputStream( out );  // "line 3" - the NPE is thrown here
BufferedWriter cout = new BufferedWriter( new OutputStreamWriter( cs ) );
cout.write( ... )

But I get a NullPointerException on the third line (the codec.createOutputStream call):

java.lang.NullPointerException
    at org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:63)
    at org.apache.hadoop.io.compress.GzipCodec.createOutputStream(GzipCodec.java:92)
    at myFile$myReduce.reduce(myFile.java:354)

I have also followed the related JIRA issue.

Could you suggest what I am doing wrong?

Chr*_*ite 8

If you want to use compression outside of the standard OutputFormat handling, you should use a CompressionCodecFactory (as detailed in @linker's answer):

CompressionCodecFactory ccf = new CompressionCodecFactory(conf);
CompressionCodec codec = ccf.getCodecByClassName(GzipCodec.class.getName());
OutputStream compressedOutputStream = codec.createOutputStream(outputStream);
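
For context, here is a minimal, self-contained sketch of how this could look when writing a compressed side file from a reducer (the class name, method name, and path are illustrative, not from the original post). The key difference is that the factory instantiates the codec through ReflectionUtils, which injects the job Configuration into it; a bare new GzipCodec() is never configured, so ZlibFactory.isNativeZlibLoaded() dereferences a null conf, which matches the NPE in the stack trace above:

import java.io.BufferedWriter;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.GzipCodec;

// Illustrative helper: writes a gzip-compressed file using a codec obtained
// from CompressionCodecFactory, so the codec carries the job Configuration.
public class CompressedSideOutput {

    public static void write(Configuration conf, Path outPath, String data) throws Exception {
        FileSystem fs = outPath.getFileSystem(conf);

        // The factory creates the codec via ReflectionUtils.newInstance, which also
        // calls setConf() on it -- the step a direct "new GzipCodec()" skips.
        CompressionCodecFactory ccf = new CompressionCodecFactory(conf);
        CompressionCodec codec = ccf.getCodecByClassName(GzipCodec.class.getName());

        OutputStream out = fs.create(outPath);            // raw HDFS output stream
        OutputStream cs = codec.createOutputStream(out);  // gzip-wrapped stream
        BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(cs));
        try {
            writer.write(data);
        } finally {
            writer.close(); // closing the writer flushes and closes the wrapped streams
        }
    }
}

Inside a Reducer the Configuration would typically come from context.getConfiguration(); alternatively, ReflectionUtils.newInstance(GzipCodec.class, conf) yields the same properly configured codec instance without going through the factory.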