How to upload a folder to aws s3 recursively using ansible
I'm using Ansible to deploy my application. I've reached the point where I want to upload my grunted assets to a newly created bucket. Here is what I did: {{hostvars.localhost.public_bucket}} is the bucket name, and {{client}}/{{version_id}}/assets/admin is the path to a folder containing multi-level subfolders and assets to upload:
- s3:
    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
    bucket: "{{hostvars.localhost.public_bucket}}"
    object: "{{client}}/{{version_id}}/assets/admin"
    src: "{{trunk}}/public/assets/admin"
    mode: put
The error message is:
fatal: [x.y.z.t]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "s3"}, "module_stderr": "", "module_stdout": "\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 2868, in <module>\r\n main()\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 561, in main\r\n upload_s3file(module, s3, bucket, obj, src, expiry, metadata, encrypt, headers)\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 307, in upload_s3file\r\n key.set_contents_from_filename(src, encrypt_key=encrypt, headers=headers)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1358, in set_contents_from_filename\r\n with open(filename, 'rb') as fp:\r\nIOError: [Errno 21] Is a directory: '/home/abcd/efgh/public/assets/admin'\r\n", "msg": "MODULE FAILURE", "parsed": false}
I looked through the documentation but found no recursive option for the Ansible s3 module. Is this a bug, or am I missing something?
The Ansible s3 module does not support directory uploads or any kind of recursion. For this task, I would suggest shelling out to a command-line tool such as s3cmd or the AWS CLI; check the following syntax (which uses the AWS CLI):
command: "aws s3 cp {{client}}/{{version_id}}/assets/admin s3://{{hostvars.localhost.public_bucket}}/ --recursive"
Since you are using Ansible, it looks like you want something idempotent, but Ansible doesn't support s3 directory uploads or any recursion yet, so you should probably use the AWS CLI to do the job, like this:
command: "aws s3 cp {{client}}/{{version_id}}/assets/admin s3://{{hostvars.localhost.public_bucket}}/ --recursive"
I was able to accomplish this with the s3 module by iterating over the output of a directory listing of the files I wanted to upload. The little inline Python script, run via the command module, formats the paths of the files in the directory as a JSON list:
- name: upload things
  hosts: localhost
  connection: local

  tasks:
    - name: Get all the files in the directory i want to upload, formatted as a json list
      command: python -c 'import os, json; print json.dumps([os.path.join(dp, f)[2:] for dp, dn, fn in os.walk(os.path.expanduser(".")) for f in fn])'
      args:
        chdir: ../../styles/img
      register: static_files_cmd

    - s3:
        bucket: "{{ bucket_name }}"
        mode: put
        object: "{{ item }}"
        src: "../../styles/img/{{ item }}"
        permission: "public-read"
      with_items: "{{ static_files_cmd.stdout|from_json }}"
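One caveat: the inline one-liner above is Python 2 (print as a statement). On a control machine with Python 3, print must be called as a function; a sketch of the same listing task adjusted accordingly:

- name: Get all the files in the directory i want to upload, formatted as a json list
  # os.path.join(dp, f)[2:] strips the leading "./" from each relative path
  command: python3 -c 'import os, json; print(json.dumps([os.path.join(dp, f)[2:] for dp, dn, fn in os.walk(".") for f in fn]))'
  args:
    chdir: ../../styles/img
  register: static_files_cmd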
As of Ansible 2.3, you can use s3_sync:
- name: basic upload
  s3_sync:
    bucket: tedder
    file_root: roles/s3/files/
Note: if you are using a non-default region, you should set region explicitly, otherwise you will get a somewhat obscure error: An error occurred (400) when calling the HeadObject operation: Bad Request
Here is a complete playbook matching what you were trying to do above:
- hosts: localhost
  vars:
    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
    bucket: "{{hostvars.localhost.public_bucket}}"
  tasks:
    - name: Upload files
      s3_sync:
        aws_access_key: '{{aws_access_key}}'
        aws_secret_key: '{{aws_secret_key}}'
        bucket: '{{bucket}}'
        file_root: "{{trunk}}/public/assets/admin"
        key_prefix: "{{client}}/{{version_id}}/assets/admin"
        permission: public-read
        region: eu-central-1
Notes:
- You could probably remove region; I only added it to illustrate my point above.
- I added the keys explicitly here. You can (and probably should) use environment variables for this instead:
From the docs:
If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence AWS_URL or EC2_URL, AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY or EC2_ACCESS_KEY, AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY or EC2_SECRET_KEY, AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN, AWS_REGION or EC2_REGION
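As an example, here is a sketch of the same play relying purely on the environment (assuming AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION are exported in the shell that runs ansible-playbook):

- hosts: localhost
  tasks:
    - name: Upload files (credentials and region picked up from the environment)
      s3_sync:
        bucket: "{{ hostvars.localhost.public_bucket }}"
        file_root: "{{ trunk }}/public/assets/admin"
        key_prefix: "{{ client }}/{{ version_id }}/assets/admin"
        permission: public-read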