Hadoop Streaming job failure error in Python

db4*_*b42 21 python hadoop mapreduce

Following this guide, I ran the sample exercise successfully, but my own MapReduce job fails. The log file shows the following error:
ERROR streaming.StreamJob: Job not Successful!
10/12/16 17:13:38 INFO streaming.StreamJob: killJob...
Streaming Job Failed!

java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 2
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:311)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:545)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:132)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
at org.apache.hadoop.mapred.Child.main(Child.java:170)

Mapper.py

import sys

i = 0  # line number of the current input line; used as the tweet id

for line in sys.stdin:
    i += 1
    count = {}
    # count occurrences of each word within this line
    for word in line.strip().split():
        count[word] = count.get(word, 0) + 1
    # emit one "word <tab> line_id:weight" record per distinct word
    for word, weight in count.items():
        print '%s\t%s:%s' % (word, str(i), str(weight))
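For reference, a minimal local check of the mapper's output format (a sketch with a made-up input line, assuming mapper.py sits in the current directory and runs under the python interpreter on your PATH):

import subprocess

# Feed one sample line to the mapper on stdin, exactly as Hadoop Streaming would.
proc = subprocess.Popen(["python", "mapper.py"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = proc.communicate(b"the cat saw the dog\n")
print(out.decode())
# Expected records (dict iteration order may vary):
#   the   1:2
#   cat   1:1
#   saw   1:1
#   dog   1:1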

Reducer.py

import sys

o_tweet = "2323"  # sentinel for "previous key"; any value that can't be a real word
id_list = []      # (tweet_id, weight) pairs collected for the current word
for line in sys.stdin:
    tweet, tw = line.strip().split()
    #print tweet, o_tweet, tweet_id, id_list
    tweet_id, w = tw.split(':')
    w = int(w)
    if tweet == o_tweet:
        # same word as the previous record: pair this tweet with every one seen so far
        for i, wt in id_list:
            print '%s:%s\t%s' % (tweet_id, i, str(w + wt))
        id_list.append((tweet_id, w))
    else:
        # a new word has started (input arrives sorted by key): reset the running list
        id_list = [(tweet_id, w)]
        o_tweet = tweet
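Note that this reducer relies on Hadoop's shuffle phase delivering records sorted by key, so all lines for the same word arrive contiguously. You can reproduce the whole map-sort-reduce flow locally (a sketch, assuming both scripts are in the current directory and run under the python on your PATH):

import subprocess

sample = b"the cat sat\nthe dog barked\n"

# map: one "word <tab> line_id:weight" record per distinct word per line
mapped = subprocess.Popen(["python", "mapper.py"],
                          stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE).communicate(sample)[0]

# shuffle: Hadoop sorts by key before the reduce phase; sorted() stands in for it
shuffled = b"".join(sorted(mapped.splitlines(True)))

# reduce: pair up the line ids that share a word
reduced = subprocess.Popen(["python", "reducer.py"],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE).communicate(shuffled)[0]
print(reduced.decode())
# e.g. prints "2:1\t2": lines 1 and 2 share the word "the"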

[edit] The command used to run the job:

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop jar contrib/streaming/hadoop-0.20.0-streaming.jar -file /home/hadoop/mapper.py -mapper /home/hadoop/mapper.py -file /home/hadoop/reducer.py -reducer /home/hadoop/reducer.py -input my-input/* -output my-output

The input is an arbitrary sequence of random sentences.

Thanks,

Joe*_*ein 21

Your -mapper and -reducer should be just the script names:

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop jar contrib/streaming/hadoop-0.20.0-streaming.jar -file /home/hadoop/mapper.py -mapper mapper.py -file /home/hadoop/reducer.py -reducer reducer.py -input my-input/* -output my-output

That is because when the job runs, your scripts are shipped into the job's folder on HDFS, and the attempt task executes with that folder as its working directory ".", so the bare script name resolves correctly. (FYI: if you ever want to ship another file, such as a lookup table, you can open it in Python as if it were in the same directory as your script while the script runs inside the M/R job; see the sketch below.)
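A minimal sketch of that pattern, assuming a hypothetical lookup.txt shipped alongside the scripts with an extra -file /home/hadoop/lookup.txt option:

# Inside mapper.py or reducer.py: lookup.txt was shipped with -file, so it sits
# in the task's working directory and can be opened by its bare name.
lookup = {}
with open('lookup.txt') as f:   # resolved relative to "." on the task node
    for line in f:
        key, value = line.strip().split('\t')
        lookup[key] = value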

Also make sure you have run chmod a+x mapper.py and chmod a+x reducer.py.

  • Try adding #!/usr/bin/env python to the top of the Python scripts; you should then be able to run them from the command line with cat data.file | ./mapper.py | sort | ./reducer.py. Without "#!/usr/bin/env python" at the top of the file, that won't work. (8 upvotes)

Mar*_*n W 15

Try adding

 #!/usr/bin/env python

to the top of your script.

Or, invoke the interpreter explicitly, which sidesteps the shebang and execute-permission requirements:

-mapper 'python m.py' -reducer 'python r.py'