Pydoop on Amazon EMR

jld*_*ont 8 python hadoop amazon-web-services amazon-emr

How can I use Pydoop on Amazon EMR?

I have tried Googling the topic to no avail: is this even possible?

Nat*_*ert 8

I finally got this working. Everything happens on the master node, so SSH into it as the user hadoop.
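For example, from your local machine (the key file and the master node's DNS name below are placeholders for your own cluster):

ssh -i ~/mykey.pem hadoop@ec2-xx-xx-xx-xx.compute-1.amazonaws.com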

You need a few packages:

sudo easy_install argparse importlib
sudo apt-get update
sudo apt-get install libboost-python-dev

Build everything:

wget http://apache.mirrors.pair.com/hadoop/common/hadoop-0.20.205.0/hadoop-0.20.205.0.tar.gz
wget http://sourceforge.net/projects/pydoop/files/Pydoop-0.6/pydoop-0.6.0.tar.gz
tar xvf hadoop-0.20.205.0.tar.gz
tar xvf pydoop-0.6.0.tar.gz

export JAVA_HOME=/usr/lib/jvm/java-6-sun 
export JVM_ARCH=64 # I assume that 32 works for 32-bit systems
export HADOOP_HOME=/home/hadoop
export HADOOP_CPP_SRC=/home/hadoop/hadoop-0.20.205.0/src/c++/
export HADOOP_VERSION=0.20.205
export HDFS_LINK=/home/hadoop/hadoop-0.20.205.0/src/c++/libhdfs/

cd ~/hadoop-0.20.205.0/src/c++/libhdfs
sh ./configure
make
make install
cd ../install
tar cvfz ~/libhdfs.tar.gz lib
sudo tar xvf ~/libhdfs.tar.gz -C /usr

cd ~/pydoop-0.6.0
python setup.py bdist
cp dist/pydoop-0.6.0.linux-x86_64.tar.gz ~/
sudo tar xvf ~/pydoop-0.6.0.linux-x86_64.tar.gz -C /

Save both tarballs; in the future you can skip the build and install with just the following (I still need to figure out how to do this as a bootstrap action on a multi-node cluster; one possible sketch follows the commands below):

sudo tar xvf ~/libhdfs.tar.gz -C /usr
sudo tar xvf ~/pydoop-0.6.0.linux-x86_64.tar.gz -C /
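The bootstrap-action part is still an open question, but here is a rough, untested sketch of what one might look like, assuming the two tarballs were uploaded to the same bucket used elsewhere in this answer:

#!/bin/bash
# bootstrap-pydoop.sh -- hypothetical EMR bootstrap action, run on every node
set -e
sudo apt-get update
sudo apt-get -y install libboost-python-dev
hadoop fs -copyToLocal s3://<my bucket>/libhdfs.tar.gz /tmp/libhdfs.tar.gz
hadoop fs -copyToLocal s3://<my bucket>/pydoop-0.6.0.linux-x86_64.tar.gz /tmp/pydoop.tar.gz
sudo tar xzf /tmp/libhdfs.tar.gz -C /usr
sudo tar xzf /tmp/pydoop.tar.gz -C /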

Then I could run the example program, which uses the full Hadoop API (after fixing a bug in its constructor: the super() call needs to be super(WordCountMapper, self)).

#!/usr/bin/python

import pydoop.pipes as pp

class WordCountMapper(pp.Mapper):

  def __init__(self, context):
    # the constructor fix mentioned above: super() must name WordCountMapper
    super(WordCountMapper, self).__init__(context)
    context.setStatus("initializing")
    self.input_words = context.getCounter("WORDCOUNT", "INPUT_WORDS")

  def map(self, context):
    # emit each word of the input line with a count of "1"
    words = context.getInputValue().split()
    for w in words:
      context.emit(w, "1")
    context.incrementCounter(self.input_words, len(words))

class WordCountReducer(pp.Reducer):

  def reduce(self, context):
    # sum up the per-word counts
    s = 0
    while context.nextValue():
      s += int(context.getInputValue())
    context.emit(context.getInputKey(), str(s))

pp.runTask(pp.Factory(WordCountMapper, WordCountReducer))

I uploaded that program to a bucket and named it run (one way to copy it up is sketched after the conf.xml below). Then I used the following conf.xml:

<?xml version="1.0"?>
<configuration>

<property>
  <name>hadoop.pipes.executable</name>
  <value>s3://<my bucket>/run</value>
</property>

<property>
  <name>mapred.job.name</name>
  <value>myjobname</value>
</property>

<property>
  <name>hadoop.pipes.java.recordreader</name>
  <value>true</value>
</property>

<property>
  <name>hadoop.pipes.java.recordwriter</name>
  <value>true</value>
</property>

</configuration>
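To get the script into the bucket from the master node, something along these lines should work (wordcount.py stands for whatever the local script is called, and the bucket name is a placeholder; s3cmd or the AWS console would do just as well):

hadoop fs -put wordcount.py s3://<my bucket>/run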

Finally, I used the following command line:

hadoop pipes -conf conf.xml -input s3://elasticmapreduce/samples/wordcount/input -output s3://tmp.nou/asdf
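Once the job completes, the output can be inspected directly from the master node; the reducer output typically lands in files named part-00000 and so on:

hadoop fs -ls s3://tmp.nou/asdf
hadoop fs -cat s3://tmp.nou/asdf/part-00000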