Python Boto3 AWS Multipart Upload Syntax
I have successfully authenticated with AWS and can upload files using the 'put_object' method on a Bucket object. Now I would like to use the multipart API to handle large files. I found the accepted answer in this question:
But when I try to implement it, I get an "unknown method" error. What am I doing wrong? My code follows. Thanks!
## Get an AWS Session
self.awsSession = Session(aws_access_key_id=accessKey,
                          aws_secret_access_key=secretKey,
                          aws_session_token=session_token,
                          region_name=region_type)
...
# Upload the file to S3
s3 = self.awsSession.resource('s3')
s3.Bucket('prodbucket').put_object(Key=fileToUpload, Body=data)  # WORKS
#s3.Bucket('prodbucket').upload_file(dataFileName, 'prodbucket', fileToUpload)  # DOESNT WORK
#s3.upload_file(dataFileName, 'prodbucket', fileToUpload)  # DOESNT WORK
The upload_file method has not been ported over to the bucket resource yet. For now you will need to use the client object directly to do this:
client = self.awsSession.client('s3')
client.upload_file(...)
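Under the hood, client.upload_file performs a multipart upload automatically once the file exceeds a size threshold: it splits the file into parts (each at least 5 MiB, S3's minimum for non-final parts), uploads them separately, and then combines them. The sketch below illustrates just that chunking step in pure Python, without any actual S3 calls; the part size constant mirrors S3's documented minimum, and the in-memory file is a stand-in for a real large file.

```python
import io

# S3 requires every part except the last to be at least 5 MiB.
MIN_PART_SIZE = 5 * 1024 * 1024

def split_into_parts(fileobj, part_size=MIN_PART_SIZE):
    """Yield (part_number, chunk) pairs, the way a multipart upload
    would number and size its parts. Part numbers start at 1."""
    part_number = 1
    while True:
        chunk = fileobj.read(part_size)
        if not chunk:
            break
        yield part_number, chunk
        part_number += 1

# A 12 MiB in-memory file standing in for a large upload:
data = io.BytesIO(b"x" * (12 * 1024 * 1024))
parts = list(split_into_parts(data))

# 12 MiB splits into two full 5 MiB parts plus a 2 MiB final part.
print([(n, len(c)) for n, c in parts])
```

With boto3's high-level transfer methods you never manage these parts yourself; the threshold and part size are tunable via a TransferConfig passed to upload_file if the defaults don't suit your files.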
The Libcloud S3 wrapper transparently handles all the splitting and uploading of the parts for you.
Use the upload_object_via_stream method:
from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver

# Path to a very large file you want to upload
FILE_PATH = '/home/user/myfile.tar.gz'

cls = get_driver(Provider.S3)
driver = cls('api key', 'api secret key')

container = driver.get_container(container_name='my-backups-12345')

# This method blocks until all the parts have been uploaded.
extra = {'content_type': 'application/octet-stream'}
with open(FILE_PATH, 'rb') as iterator:
    obj = driver.upload_object_via_stream(iterator=iterator,
                                          container=container,
                                          object_name='backup.tar.gz',
                                          extra=extra)
For the official documentation on the S3 multipart functionality, see the AWS Official Blog.