Upload entire Bitbucket repo to S3 using Bitbucket Pipeline
I'm using Bitbucket Pipelines. I want it to push the entire contents of my repo (which is very small) to S3. I don't want to zip it up, push it to S3 and then unzip it there. I just want it to take the existing file/folder structure of my Bitbucket repo and push that to S3.
What should the yaml file and .py file look like to accomplish this?
Here is the current yaml file:
image: python:3.5.1

pipelines:
  branches:
    master:
      - step:
          script:
            # - apt-get update # required to install zip
            # - apt-get install -y zip # required if you want to zip repository objects
            - pip install boto3==1.3.0 # required for s3_upload.py
            # the first argument is the name of the existing S3 bucket to upload the artefact to
            # the second argument is the artefact to be uploaded
            # the third argument is the bucket key
            # html files
            - python s3_upload.py my-bucket-name html/index_template.html html/index_template.html # run the deployment script
            # Example command line parameters. Replace with your values
            #- python s3_upload.py bb-s3-upload SampleApp_Linux.zip SampleApp_Linux # run the deployment script
Here is my current python:
from __future__ import print_function
import os
import sys
import argparse
import boto3
from botocore.exceptions import ClientError


def upload_to_s3(bucket, artefact, bucket_key):
    """
    Uploads an artefact to Amazon S3
    """
    try:
        client = boto3.client('s3')
    except ClientError as err:
        print("Failed to create boto3 client.\n" + str(err))
        return False
    try:
        client.put_object(
            Body=open(artefact, 'rb'),
            Bucket=bucket,
            Key=bucket_key
        )
    except ClientError as err:
        print("Failed to upload artefact to S3.\n" + str(err))
        return False
    except IOError as err:
        print("Failed to access artefact in this directory.\n" + str(err))
        return False
    return True


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("bucket", help="Name of the existing S3 bucket")
    parser.add_argument("artefact", help="Name of the artefact to be uploaded to S3")
    parser.add_argument("bucket_key", help="Name of the S3 Bucket key")
    args = parser.parse_args()

    if not upload_to_s3(args.bucket, args.artefact, args.bucket_key):
        sys.exit(1)


if __name__ == "__main__":
    main()
This requires me to list every file in the repo as a separate command in the yaml file. I just want it to grab everything and upload it to S3.
You could switch to using the Docker image https://hub.docker.com/r/abesiyo/s3/ instead. It runs well.
bitbucket-pipelines.yml
image: abesiyo/s3

pipelines:
  default:
    - step:
        script:
          - s3 --region "us-east-1" rm s3://<bucket name>
          - s3 --region "us-east-1" sync . s3://<bucket name>
Also set these environment variables on Bitbucket Pipelines:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
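If you would rather stay on a plain Python image than pull that Docker image, a roughly equivalent sketch (my own variant, not part of this answer) uses the AWS CLI's sync command with the same two environment variables:

image: python:3.5.1

pipelines:
  default:
    - step:
        script:
          - pip install awscli  # assumption: the AWS CLI is not preinstalled in this image
          # mirror the clone into the bucket; --delete removes objects that no longer exist locally
          - aws s3 sync . s3://<bucket name> --delete --exclude ".git/*" --region us-east-1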
I figured it out myself. Here is the python file, 's3_upload.py':
from __future__ import print_function
import os
import sys
import argparse
import boto3
#import zipfile
from botocore.exceptions import ClientError


def upload_to_s3(bucket, artefact, is_folder, bucket_key):
    try:
        client = boto3.client('s3')
    except ClientError as err:
        print("Failed to create boto3 client.\n" + str(err))
        return False
    if is_folder == 'true':
        for root, dirs, files in os.walk(artefact, topdown=False):
            print('Walking it')
            for file in files:
                #add a check like this if you just want certain file types uploaded
                #if file.endswith('.js'):
                try:
                    print(file)
                    client.upload_file(os.path.join(root, file), bucket, os.path.join(root, file))
                except ClientError as err:
                    print("Failed to upload artefact to S3.\n" + str(err))
                    return False
                except IOError as err:
                    print("Failed to access artefact in this directory.\n" + str(err))
                    return False
                #else:
                #    print('Skipping file:' + file)
    else:
        print('Uploading file ' + artefact)
        client.upload_file(artefact, bucket, bucket_key)
    return True


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("bucket", help="Name of the existing S3 bucket")
    parser.add_argument("artefact", help="Name of the artefact to be uploaded to S3")
    parser.add_argument("is_folder", help="True if it's the name of a folder")
    parser.add_argument("bucket_key", help="Name of file in bucket")
    args = parser.parse_args()

    if not upload_to_s3(args.bucket, args.artefact, args.is_folder, args.bucket_key):
        sys.exit(1)


if __name__ == "__main__":
    main()
And here is the bitbucket-pipelines.yml file:
---
image: python:3.5.1

pipelines:
  branches:
    dev:
      - step:
          script:
            - pip install boto3==1.4.1 # required for s3_upload.py
            - pip install requests
            # the first argument is the name of the existing S3 bucket to upload the artefact to
            # the second argument is the artefact to be uploaded
            # the third argument is if the artefact is a folder
            # the fourth argument is the bucket_key to use
            - python s3_emptyBucket.py dev-slz-processor-repo
            - python s3_upload.py dev-slz-processor-repo lambda true lambda
            - python s3_upload.py dev-slz-processor-repo node_modules true node_modules
            - python s3_upload.py dev-slz-processor-repo config.dev.json false config.json
    stage:
      - step:
          script:
            - pip install boto3==1.3.0 # required for s3_upload.py
            - python s3_emptyBucket.py staging-slz-processor-repo
            - python s3_upload.py staging-slz-processor-repo lambda true lambda
            - python s3_upload.py staging-slz-processor-repo node_modules true node_modules
            - python s3_upload.py staging-slz-processor-repo config.staging.json false config.json
    master:
      - step:
          script:
            - pip install boto3==1.3.0 # required for s3_upload.py
            - python s3_emptyBucket.py prod-slz-processor-repo
            - python s3_upload.py prod-slz-processor-repo lambda true lambda
            - python s3_upload.py prod-slz-processor-repo node_modules true node_modules
            - python s3_upload.py prod-slz-processor-repo config.prod.json false config.json
Taking the dev branch as an example, it grabs everything in the "lambda" folder, walks that folder's entire structure, and uploads every item it finds to the dev-slz-processor-repo bucket.
Finally, here is a small helper, 's3_emptyBucket', for removing all objects from the bucket before the new ones are uploaded:
from __future__ import print_function
import os
import sys
import argparse
import boto3
#import zipfile
from botocore.exceptions import ClientError


def empty_bucket(bucket):
    try:
        resource = boto3.resource('s3')
    except ClientError as err:
        print("Failed to create boto3 resource.\n" + str(err))
        return False
    print("Removing all objects from bucket: " + bucket)
    resource.Bucket(bucket).objects.delete()
    return True


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("bucket", help="Name of the existing S3 bucket to empty")
    args = parser.parse_args()

    if not empty_bucket(args.bucket):
        sys.exit(1)


if __name__ == "__main__":
    main()
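For the original goal of pushing the whole repository in a single command rather than listing folders one by one in the yaml, the same walk-and-upload idea can be collapsed into one script. This is only a minimal sketch under a few assumptions (it runs from the repo root, skips the .git directory, and takes credentials from the usual AWS environment variables); it is not part of the scripts above:

from __future__ import print_function
import os
import sys
import boto3


def upload_repo_to_s3(bucket, root_dir="."):
    """Walk root_dir and upload every file, keeping the repo's folder structure as the S3 key."""
    client = boto3.client('s3')
    for root, dirs, files in os.walk(root_dir):
        dirs[:] = [d for d in dirs if d != '.git']  # don't push git metadata (assumption)
        for name in files:
            path = os.path.join(root, name)
            # key mirrors the relative path, e.g. html/index_template.html
            key = os.path.relpath(path, root_dir).replace(os.sep, '/')
            print('Uploading ' + key)
            client.upload_file(path, bucket, key)


if __name__ == "__main__":
    # usage: python upload_repo.py my-bucket-name
    upload_repo_to_s3(sys.argv[1])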
For deploying a static website to Amazon S3 I have this bitbucket-pipelines.yml configuration file:
image: attensee/s3_website

pipelines:
  default:
    - step:
        script:
          - s3_website push
I'm using the attensee/s3_website Docker image because it has the fantastic s3_website tool installed. The configuration file for s3_website (s3_website.yml) [create this file in the root of your Bitbucket repository] looks like this:
s3_id: <%= ENV['S3_ID'] %>
s3_secret: <%= ENV['S3_SECRET'] %>
s3_bucket: bitbucket-pipelines
site: .
We have to define the environment variables S3_ID and S3_SECRET in the environment variables section of the Bitbucket Pipelines settings.
Thanks to https://www.savjee.be/2016/06/Deploying-website-to-ftp-or-amazon-s3-with-BitBucket-Pipelines/ for the solution.
Atlassian now offers "Pipes" to simplify configuration of some common tasks. There is also one for S3 upload.
No need to specify a different image type:
image: node:8

pipelines:
  branches:
    master:
      - step:
          script:
            - pipe: atlassian/aws-s3-deploy:0.2.1
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: "us-east-1"
                S3_BUCKET: "your.bucket.name"
                LOCAL_PATH: "dist"
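The example above deploys a dist folder; for the original question (pushing the whole repo) pointing LOCAL_PATH at the clone root should be enough. That is an assumption about how the pipe treats LOCAL_PATH, not something verified here:

            - pipe: atlassian/aws-s3-deploy:0.2.1
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: "us-east-1"
                S3_BUCKET: "your.bucket.name"
                LOCAL_PATH: "."  # assumption: the pipe syncs the repository root as-is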