How to recursively upload a folder to AWS S3 using Ansible

ano*_*eek 4 amazon amazon-s3 amazon-web-services ansible ansible-playbook

I'm using Ansible to deploy my application. I've reached the point where I want to upload my Grunt-built assets to a newly created bucket. Here is what I did: {{hostvars.localhost.public_bucket}} is the bucket name, and {{client}}/{{version_id}}/assets/admin is the path to the folder containing nested subfolders and the assets to upload:

- s3:
    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
    bucket: "{{hostvars.localhost.public_bucket}}"
    object: "{{client}}/{{version_id}}/assets/admin"
    src: "{{trunk}}/public/assets/admin"
    mode: put

Here is the error message:

   fatal: [x.y.z.t]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "s3"}, "module_stderr": "", "module_stdout": "\r\nTraceback (most recent call last):\r\n  File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 2868, in <module>\r\n    main()\r\n  File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 561, in main\r\n    upload_s3file(module, s3, bucket, obj, src, expiry, metadata, encrypt, headers)\r\n  File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 307, in upload_s3file\r\n    key.set_contents_from_filename(src, encrypt_key=encrypt, headers=headers)\r\n  File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1358, in set_contents_from_filename\r\n    with open(filename, 'rb') as fp:\r\nIOError: [Errno 21] Is a directory: '/home/abcd/efgh/public/assets/admin'\r\n", "msg": "MODULE FAILURE", "parsed": false}

I went through the documentation but could not find a recursive option for the Ansible s3 module. Is this a bug, or am I missing something?

toa*_*oza 9

Since Ansible 2.3, you can use s3_sync:

- name: basic upload
  s3_sync:
    bucket: tedder
    file_root: roles/s3/files/

Note: if you are using a non-default region, you should set region explicitly, otherwise you will get a somewhat vague error: An error occurred (400) when calling the HeadObject operation: Bad Request

Here is a complete playbook matching what you were trying above:

- hosts: localhost
  vars:
    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"    
    bucket: "{{hostvars.localhost.public_bucket}}"
  tasks:
  - name: Upload files
    s3_sync:
      aws_access_key: '{{aws_access_key}}'
      aws_secret_key: '{{aws_secret_key}}'
      bucket: '{{bucket}}'
      file_root: "{{trunk}}/public/assets/admin"
      key_prefix: "{{client}}/{{version_id}}/assets/admin"
      permission: public-read
      region: eu-central-1

Notes:

  1. You can drop region; I only added it to illustrate the point above.
  2. I added explicit keys just for the example. You can (and probably should) use environment variables instead; see the sketch after the quoted docs below:

From the documentation:

The following environment variables can be used, in decreasing order of precedence, if parameters are not set in the module: AWS_URL or EC2_URL, AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY or EC2_ACCESS_KEY, AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY or EC2_SECRET_KEY, AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN, AWS_REGION or EC2_REGION
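
For example, with AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY exported in the environment Ansible runs in, the same task works without the explicit key parameters. A minimal sketch, reusing the variables from the playbook above:

- name: Upload files (credentials picked up from the environment)
  s3_sync:
    # no aws_access_key/aws_secret_key here; the module falls back to
    # the environment variables listed in the quoted docs
    bucket: "{{ hostvars.localhost.public_bucket }}"
    file_root: "{{ trunk }}/public/assets/admin"
    key_prefix: "{{ client }}/{{ version_id }}/assets/admin"
    permission: public-read
    region: eu-central-1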


Abd*_*ebi 2

Since you are using Ansible, it looks like you want something idempotent, but Ansible does not yet support recursive S3 directory uploads, so you should probably use the AWS CLI for a job like this:

command: "aws s3 cp {{client}}/{{version_id}}/assets/admin s3://{{hostvars.localhost.public_bucket}}/ --recursive"
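
A slightly fuller task sketch along the same lines, assuming the AWS CLI is installed on the host running the task and that the local assets live under {{trunk}}/public/assets/admin as in the question. aws s3 sync is used here instead of cp --recursive because it only uploads new or changed files, which gets a bit closer to idempotent behaviour:

- name: Sync assets to S3 with the AWS CLI
  command: >
    aws s3 sync
    {{ trunk }}/public/assets/admin
    s3://{{ hostvars.localhost.public_bucket }}/{{ client }}/{{ version_id }}/assets/admin
  environment:
    # the CLI reads its credentials from these variables
    AWS_ACCESS_KEY_ID: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    AWS_SECRET_ACCESS_KEY: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"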