Tags: hadoop, out-of-memory, hadoop-streaming
I am running this command:
hadoop jar hadoop-streaming.jar -D stream.tmpdir=/tmp -input "<input dir>" -output "<output dir>" -mapper "grep 20151026" -reducer "wc -l"
The <input dir> directory contains many avro files.
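For context, the mapper/reducer pair just counts input lines containing 20151026. Assuming the files were plain text (avro containers are binary, so grep would only match their readable fragments), the job is conceptually equivalent to this local pipeline:
hdfs dfs -cat "<input dir>"/* | grep 20151026 | wc -l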
When I run it, I get this error:
线程"main"中的异常java.lang.OutOfMemoryError:在org.apache.hadoop.hdfs.protocol.DatanodeID上的org.apache.hadoop.hdfs.protocol.DatanodeID.updateXferAddrAndInvalidateHashCode(DatanodeID.java:287)中超出了GC开销限制. (DatanodeID.java:91)在org.apache.hadoop.hdfs.protocol.DatanodeInfo.(DatanodeInfo.java:136)在org.apache.hadoop.hdfs.protocol.DatanodeInfo.(DatanodeInfo.java:122)在有机apache.hadoop.hdfs.protocolPB.PBHelper.convert在org.apache.hadoop.hdfs.protocolPB(PBHelper.java:633)在org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:793). PBHelper.convertLocatedBlock(PBHelper.java:1252)在org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1270)在org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java: 1413)在org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1524)在org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1533)在org.apache.hadoop .hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getL 在sun.reflect.GeneratedMethodAccessor3.invoke(未知来源)的sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)at java.lang.reflect.Method.invoke(Method.java: 601)atg.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)位于com.sun.proxy的org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)位于org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1969)的org.apache.hadoop.hdfs.DistributedFileSystem $ DirListingIterator.hasNextNoFilter(DistributedFileSystem.java:888)处的$ Proxy15.getListing(未知来源)at org.apache.hadoop.hdfs.DistributedFileSystem $ DirListingIterator.hasNext(DistributedFileSystem.java:863)位于org.apache.hadoop.mapred.FileInputFormat的org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:267). org.apache.hadoop.mapred.FileIn中的listStatus(FileInputFormat.java:228)org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:624)的putFormat.getSplits(FileInputFormat.java:313)org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:616)at org .apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)在org.apache.hadoop.mapreduce.Job $ 10.run(Job.java:1296)在org.apache.hadoop.mapreduce.Job $ 10.run (Job.java:1293)位于javax.security.auth.Subject.doAs的java.security.AccessController.doPrivileged(Native Method)(Subject.java:415)
How can I fix this issue?
It took me a while, but I found the solution.
Prefixing the command with HADOOP_CLIENT_OPTS="-Xmx1024M" solves the problem. The stack trace shows that the OutOfMemoryError is thrown in the client JVM while it lists the input directory to compute splits (FileInputFormat.listStatus during job submission), and HADOOP_CLIENT_OPTS sets the heap size for exactly that client-side JVM.
The final command line is:
HADOOP_CLIENT_OPTS="-Xmx1024M" hadoop jar hadoop-streaming.jar -D stream.tmpdir=/tmp -input "<input dir>" -output "<output dir>" -mapper "grep 20151026" -reducer "wc -l"
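Equivalently, the variable can be exported once for the shell session instead of being prefixed to each command; a minimal sketch (1024M is simply the value that worked here, so size it to your client machine):
export HADOOP_CLIENT_OPTS="-Xmx1024M"
hadoop jar hadoop-streaming.jar -D stream.tmpdir=/tmp -input "<input dir>" -output "<output dir>" -mapper "grep 20151026" -reducer "wc -l"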