DistCp from local Hadoop to Amazon S3

Moh*_*med 5 hadoop amazon-s3

I am trying to use distcp to copy a folder from my local Hadoop cluster (CDH4) to my Amazon S3 bucket.

I am using the following command:

hadoop distcp -log /tmp/distcplog-s3/ hdfs://nameserv1/tmp/data/sampledata  s3n://hdfsbackup/

hdfsbackup is the name of my Amazon S3 bucket.

DistCp fails with an UnknownHostException:

13/05/31 11:22:33 INFO tools.DistCp: srcPaths=[hdfs://nameserv1/tmp/data/sampledata]
13/05/31 11:22:33 INFO tools.DistCp: destPath=s3n://hdfsbackup/
        No encryption was performed by peer.
        No encryption was performed by peer.
13/05/31 11:22:35 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 54 for hadoopuser on ha-hdfs:nameserv1
13/05/31 11:22:35 INFO security.TokenCache: Got dt for hdfs://nameserv1; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameserv1, Ident: (HDFS_DELEGATION_TOKEN token 54 for hadoopuser)
        No encryption was performed by peer.
java.lang.IllegalArgumentException: java.net.UnknownHostException: hdfsbackup
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:414)
    at org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:295)
    at org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:282)
    at org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:503)
    at org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:487)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:130)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:111)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:85)
    at org.apache.hadoop.tools.DistCp.setup(DistCp.java:1046)
    at org.apache.hadoop.tools.DistCp.copy(DistCp.java:666)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)
Caused by: java.net.UnknownHostException: hdfsbackup
    ... 14 more

I have configured the AWS access key ID and secret in core-site.xml on all nodes:

<!-- Amazon S3 -->
<property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>MY-ID</value>
</property>

<property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>MY-SECRET</value>
</property>


<!-- Amazon S3N -->
<property>
    <name>fs.s3n.awsAccessKeyId</name>
    <value>MY-ID</value>
</property>

<property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>MY-SECRET</value>
</property>
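The same credentials can also be passed on the command line through Hadoop's generic -D options instead of (or in addition to) core-site.xml; this is just another way to set the same two properties, with placeholder values:

hadoop distcp \
    -Dfs.s3n.awsAccessKeyId=MY-ID \
    -Dfs.s3n.awsSecretAccessKey=MY-SECRET \
    hdfs://nameserv1/tmp/data/sampledata  s3n://hdfsbackup/

Either way, the job fails with the same exception, so the credentials do not appear to be the problem.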

I can copy files out of HDFS with the fs -cp command without any problem. The following command successfully copies the HDFS folder to S3:

hadoop fs -cp hdfs://nameserv1/tmp/data/sampledata  s3n://hdfsbackup/

I know there is an S3-optimized distcp (s3distcp) available, but I don't want to use it because it does not support the update/overwrite options. A sketch of the incremental usage I want to keep is shown below.
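For reference, this is roughly how I intend to run incremental backups with plain distcp (same paths as above; -update copies only files that are missing or have changed at the destination, while -overwrite replaces everything unconditionally):

hadoop distcp -update hdfs://nameserv1/tmp/data/sampledata  s3n://hdfsbackup/sampledata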

Cha*_*guy 2

It looks like you are using Kerberos security, and unfortunately Map/Reduce jobs currently cannot access Amazon S3 when Kerberos is enabled. You can find more details in MAPREDUCE-4548.


There is actually a patch that fixes this, but it is not yet part of any Hadoop distribution, so if you are in a position to modify and build Hadoop from source, this is what it changes:

Index: core/org/apache/hadoop/security/SecurityUtil.java
===================================================================
--- core/org/apache/hadoop/security/SecurityUtil.java   (révision 1305278)
+++ core/org/apache/hadoop/security/SecurityUtil.java   (copie de travail)
@@ -313,6 +313,9 @@
     if (authority == null || authority.isEmpty()) {
       return null;
     }
+    if (uri.getScheme().equals("s3n") || uri.getScheme().equals("s3")) {
+      return null;
+    }
     InetSocketAddress addr = NetUtils.createSocketAddr(authority, defPort);
     return buildTokenService(addr).toString();
   }

The ticket was last updated a few days ago, so hopefully it will be officially patched soon.


A simpler workaround is to disable Kerberos altogether, although that may not be an option in your environment; a sketch of the relevant setting follows.
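If disabling Kerberos is acceptable, the switch is the authentication mode in core-site.xml (a minimal sketch; on CDH the property is usually managed by Cloudera Manager rather than edited by hand, and a full rollback of Kerberos involves more than this one setting):

<!-- Fall back to simple (non-Kerberos) authentication -->
<property>
    <name>hadoop.security.authentication</name>
    <value>simple</value>
</property>

<property>
    <name>hadoop.security.authorization</name>
    <value>false</value>
</property>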


I have also seen reports that this can work when the bucket is named like a domain name, but I haven't tried it, and even if it does work it sounds like a hack.
