I created an AWS IAM role named "my-role", specifying EC2 as the trusted entity, i.e. using the following trust relationship policy document:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
The role has the following policy attached:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:GetBucketAcl",
"s3:GetBucketCORS",
"s3:GetBucketLocation",
"s3:GetBucketLogging",
"s3:GetBucketNotification",
"s3:GetBucketPolicy",
"s3:GetBucketRequestPayment",
"s3:GetBucketTagging",
"s3:GetBucketVersioning",
"s3:GetBucketWebsite",
"s3:GetLifecycleConfiguration",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectTorrent",
"s3:GetObjectVersion",
"s3:GetObjectVersionAcl",
"s3:GetObjectVersionTorrent",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:ListBucketVersions",
"s3:ListMultipartUploadParts",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutObjectVersionAcl",
"s3:RestoreObject"
],
"Resource": [
"arn:aws:s3:::my-bucket/*"
]
}
]
}
I launch an EC2 instance (Amazon Linux 2014.09.1) from the command line using the AWS CLI, specifying "my-role" as the instance profile, and everything works fine. I verify that the instance effectively assumes "my-role" by querying the instance metadata:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ returns my-role, and
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my-role returns the temporary credentials associated with "my-role". An example of such a credentials-retrieval response is:
{
"Code" : "Success",
"LastUpdated" : "2015-01-19T10:37:35Z",
"Type" : "AWS-HMAC",
"AccessKeyId" : "an-access-key-id",
"SecretAccessKey" : "a-secret-access-key",
"Token" : "a-token",
"Expiration" : "2015-01-19T16:47:09Z"
}
Running aws s3 ls s3://my-bucket/ correctly returns a listing of the first-level contents of "my-bucket". (The AWS CLI is installed and configured by default when launching this AMI, and the EC2 instance and the S3 bucket are in the same AWS account.) On such an instance I run/installed a Tomcat7 server and container, on which I deployed a J2EE 1.7 servlet without any problems.
That servlet is supposed to download a file from the S3 bucket to the local file system, specifically s3://my-bucket/custom-path/file.tar.gz, using the Hadoop Java API. (Note that I have tried the hadoop-common artifacts 2.4.x, 2.5.x and 2.6.x without any positive result. Below I will post the exceptions I get when using 2.5.x.)
Within the servlet, I retrieve fresh credentials from the instance-metadata URL mentioned above and use them to configure my Hadoop Java API instance:
...
Path path = new Path("s3n://my-bucket/");
Configuration conf = new Configuration();
conf.set("fs.defaultFS", path.toString());
conf.set("fs.s3n.awsAccessKeyId", myAwsAccessKeyId);
conf.set("fs.s3n.awsSecretAccessKey", myAwsSecretAccessKey);
conf.set("fs.s3n.awsSessionToken", mySessionToken);
...
Obviously, myAwsAccessKeyId, myAwsSecretAccessKey and mySessionToken are Java variables that I previously set to the actual values. Then, I effectively obtain a FileSystem instance using:
FileSystem fs = path.getFileSystem(conf);
I am able to retrieve all of the configuration associated with the FileSystem (fs.getConf().get(key-name)) and verify that it is configured as assumed.
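(For context, populating myAwsAccessKeyId, myAwsSecretAccessKey and mySessionToken inside the servlet from the instance-metadata endpoint could look roughly like the sketch below. This is not the question's actual code; it uses only the JDK, and the URL and JSON field names are taken from the metadata response shown earlier.)

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class InstanceMetadataCredentials {

    // Role-specific credentials document, as queried with curl earlier.
    private static final String CREDS_URL =
        "http://169.254.169.254/latest/meta-data/iam/security-credentials/my-role";

    // Downloads the JSON credentials document from the instance-metadata service.
    static String fetchCredentialsJson() throws Exception {
        StringBuilder json = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(CREDS_URL).openStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                json.append(line);
            }
        }
        return json.toString();
    }

    // Naive extraction of one string field, e.g. "AccessKeyId", "SecretAccessKey", "Token".
    static String field(String json, String name) {
        Matcher m = Pattern.compile("\"" + name + "\"\\s*:\\s*\"([^\"]+)\"").matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) throws Exception {
        String json = fetchCredentialsJson();
        // These three values are what the question then passes into the Hadoop Configuration.
        String myAwsAccessKeyId = field(json, "AccessKeyId");
        String myAwsSecretAccessKey = field(json, "SecretAccessKey");
        String mySessionToken = field(json, "Token");
        System.out.println("Retrieved temporary credentials for access key " + myAwsAccessKeyId);
    }
}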
What I am unable to do is download s3://my-bucket/custom-path/file.tar.gz using:
...
fs.copyToLocalFile(false, new Path(path.toString()+"custom-path/file.tar.gz"), outputLocalPath);
...
If I use hadoop-common 2.5.x I get an IOException:
org.apache.hadoop.security.AccessControlException: Permission denied: s3n://my-bucket/custom-path/file.tar.gz
    at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:449)
    at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:427)
    at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411)
    at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:181)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at org.apache.hadoop.fs.s3native.$Proxy12.retrieveMetadata(Unknown Source)
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:467)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1968)
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1937)
    ...
If I use hadoop-common 2.4.x, I get a NullPointerException:
java.lang.NullPointerException
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:433)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1968)
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1937)
    ...
Just for the record, if I do not set any AWS credentials, I get:
AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties (respectively).
As a final note, from the same instance I also tried running
<hadoop-dir>/bin/hadoop fs -cp s3n://<aws-access-key-id>:<aws-secret-access-key>@my-bucket/custom-path/file.tar.gz .
and once again I got an NPE:
Fatal internal error
java.lang.NullPointerException
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem.listStatus(NativeS3FileSystem.java:479)
    at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:268)
    at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
    at org.apache.hadoop.fs.shell.Ls.processPathArgument(Ls.java:96)
    at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
    at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
    at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
Sorry for the long post, I just tried to be as detailed as possible. Thanks for any help here.
You are using STS/temporary AWS credentials; these do not appear to be currently supported by the s3 or s3n FileSystem implementations in Hadoop.
AWS STS/temporary credentials include not only an (access key, secret key) pair but also a session token. The Hadoop s3 and s3n FileSystem(s) do not (yet) support the inclusion of a session token (i.e. your fs.s3n.awsSessionToken configuration is not supported and is ignored by the s3n FileSystem).
From the AmazonS3 - Hadoop Wiki (note that there is no mention of any fs.s3.awsSessionToken):
Configuring to use s3/s3n file systems

Edit your core-site.xml file to include your S3 keys:

<property>
  <name>fs.s3.awsAccessKeyId</name>
  <value>ID</value>
</property>

<property>
  <name>fs.s3.awsSecretAccessKey</name>
  <value>SECRET</value>
</property>
If you look at S3Credentials.java in apache/hadoop on github.com, you will notice that the representation of the S3 credentials has no notion of a session token at all.
A patch was submitted to address this limitation (see details here); however, it has not been integrated yet.
The new s3a FileSystem added in Hadoop 2.6.0 is worth exploring. It claims to support IAM role-based authentication (i.e. you should not have to explicitly specify keys at all).
The Hadoop JIRA ticket describes how to configure the s3a FileSystem:
From https://issues.apache.org/jira/browse/HADOOP-10400:
fs.s3a.access.key - your AWS access key ID (omit for role authentication)
fs.s3a.secret.key - your AWS secret key (omit for role authentication)
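If that claim holds for your Hadoop version, the snippet from the question could be reworked along these lines. This is a minimal, untested sketch: it assumes the hadoop-aws artifact (and its AWS SDK dependency) is on the classpath, and that outputLocalPath is the same local Path used in the question.

...
Path path = new Path("s3a://my-bucket/");
Configuration conf = new Configuration();
conf.set("fs.defaultFS", path.toString());
// No fs.s3a.access.key / fs.s3a.secret.key set here: the intent is to let the
// s3a FileSystem fall back to the instance's IAM role credentials.
FileSystem fs = path.getFileSystem(conf);
fs.copyToLocalFile(false, new Path(path.toString() + "custom-path/file.tar.gz"), outputLocalPath);
...

Whether role-based authentication works end-to-end with your 2.6.0 setup still has to be verified, but this route avoids the session-token limitation of s3n entirely.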