Boto 3 is much slower than boto 2 for reading many small objects from an S3 bucket
I've noticed that boto3 takes roughly three times as long as boto2 to read the same objects from an S3 bucket. The Python script below illustrates the problem. My environment is Ubuntu 18.04, Python 3.7.9, boto 2.49.0, and boto3 1.16.63.
The script reads 1,000 objects from an S3 bucket using 20 threads. It takes 5–6 seconds with boto2, but 18–19 seconds with boto3.
I've tried varying the number of threads, and I've tried setting max_concurrency in the boto3 file transfer configuration. Neither seems to make any difference.
Can anyone explain why boto3 is so slow, or how to make it faster?
#!/usr/bin/python -u
"""
This script compares the performance of boto2 and boto3 for reading 1,000 small objects from an S3 bucket.
You'll need to change the value of BUCKET_NAME to the name of a bucket to which the script has read/write access.
"""
import boto
import boto3
from tempfile import NamedTemporaryFile
from threading import Thread
import time

BUCKET_NAME = 'deleteme-steve'

bucket2 = boto.connect_s3().get_bucket(BUCKET_NAME)
s3_boto3 = boto3.client('s3')

# Create 1,000 test objects in an S3 bucket. Once the objects exist, this code can be commented out.
with NamedTemporaryFile(mode='wt') as ntf:
    ntf.write('This is a test')
    ntf.flush()
    for i in range(1000):
        s3_boto3.upload_file(ntf.name, BUCKET_NAME, 'test{}'.format(i))

def read2(i):
    for j in range(50 * i, 50 * (i + 1)):
        k = bucket2.get_key('test{}'.format(j))
        with NamedTemporaryFile() as ntf:
            k.get_contents_to_file(ntf)

def read3(i):
    for j in range(50 * i, 50 * (i + 1)):
        with NamedTemporaryFile() as ntf:
            s3_boto3.download_fileobj(BUCKET_NAME, 'test{}'.format(j), ntf)

for boto_version in [2, 3]:
    threads = []
    start_time = time.time()
    for i in range(20):
        t = Thread(target=read2 if boto_version == 2 else read3, args=(i,))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    print('boto {}: {} seconds'.format(boto_version, time.time() - start_time))
It turned out that the boto3 slowness occurred when using Python 2 (which is no longer supported), not Python 3. With Python 3, boto2 and boto3 were roughly equal in speed in my tests.