Hadoop error in shuffle in fetcher: Exceeded MAX_FAILED_UNIQUE_FETCHES

nik*_*aNS 5 hadoop mapreduce

I am new to Hadoop. I set up a Kerberos-secured Hadoop cluster (a master and one slave) on VirtualBox, and I am trying to run the 'pi' job from the Hadoop examples. The job aborts with the error Exceeded MAX_FAILED_UNIQUE_FETCHES. I searched for this error, but none of the solutions posted online seem to apply to my case; perhaps I am missing something obvious. I even removed the slave from the etc/hadoop/slaves file to see whether the job would run on the master alone, but it fails with the same error. The log is below. I am running this on 64-bit Ubuntu 14.04 VMs. Any help is appreciated.

montauk@montauk-vmaster:/usr/local/hadoop$ sudo -u yarn bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar pi 2 10
Number of Maps  = 2
Samples per Map = 10
OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/06/05 12:04:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Starting Job
14/06/05 12:04:49 INFO client.RMProxy: Connecting to ResourceManager at /192.168.0.29:8040
14/06/05 12:04:50 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 17 for yarn on 192.168.0.29:54310
14/06/05 12:04:50 INFO security.TokenCache: Got dt for hdfs://192.168.0.29:54310; Kind: HDFS_DELEGATION_TOKEN, Service: 192.168.0.29:54310, Ident: (HDFS_DELEGATION_TOKEN token 17 for yarn)
14/06/05 12:04:50 INFO input.FileInputFormat: Total input paths to process : 2
14/06/05 12:04:51 INFO mapreduce.JobSubmitter: number of splits:2
14/06/05 12:04:51 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1401975262053_0007
14/06/05 12:04:51 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: 192.168.0.29:54310, Ident: (HDFS_DELEGATION_TOKEN token 17 for yarn)
14/06/05 12:04:53 INFO impl.YarnClientImpl: Submitted application application_1401975262053_0007
14/06/05 12:04:53 INFO mapreduce.Job: The url to track the job: http://montauk-vmaster:8088/proxy/application_1401975262053_0007/
14/06/05 12:04:53 INFO mapreduce.Job: Running job: job_1401975262053_0007
14/06/05 12:05:29 INFO mapreduce.Job: Job job_1401975262053_0007 running in uber mode : false
14/06/05 12:05:29 INFO mapreduce.Job:  map 0% reduce 0%
14/06/05 12:06:04 INFO mapreduce.Job:  map 50% reduce 0%
14/06/05 12:06:06 INFO mapreduce.Job:  map 100% reduce 0%
14/06/05 12:06:34 INFO mapreduce.Job:  map 100% reduce 100%
14/06/05 12:06:34 INFO mapreduce.Job: Task Id : attempt_1401975262053_0007_r_000000_0, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#4
    at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
    at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:323)
    at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:245)
    at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:347)
    at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:165)

Jia*_*Liu 4

I ran into the same problem as you when I installed CDH 5.1.0 with Kerberos security from a tarball. The solutions Google turned up pointed to insufficient memory, but I don't think that was my case, since my input was tiny (52 KB).

After several days of digging, I found the root cause in this link.

To summarize, the fix from that link is:

  1. Add the following property to yarn-site.xml, even though it is the default value in yarn-default.xml:

    <property>
      <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

  2. Remove the yarn.nodemanager.local-dirs property so the default under /tmp is used, then run:

    mkdir -p /tmp/hadoop-yarn/nm-local-dir
    chown yarn:yarn /tmp/hadoop-yarn/nm-local-dir
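For reference, here is a minimal yarn-site.xml sketch showing how step 1 fits alongside the aux-service declaration itself. Both property names are standard Hadoop 2.x settings; whether you need any other properties depends on your cluster, so treat this as an illustrative fragment, not a complete config:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Enable the MapReduce shuffle auxiliary service on every NodeManager -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <!-- Spell out the handler class explicitly, even though this is
       already the default value in yarn-default.xml -->
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
```

After editing, restart the NodeManagers so the shuffle service is re-registered.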

The problem can be summarized as:

once the yarn.nodemanager.local-dirs property is set, the yarn.nodemanager.aux-services.mapreduce_shuffle.class property from yarn-default.xml no longer takes effect.

I have not found the underlying reason for that behavior either.