How to do two requests in parallel
I have the following code, which requests some data from Amazon's API:
params = {'Operation': 'GetRequesterStatistic', 'Statistic': 'NumberHITsAssignable', 'TimePeriod': 'LifeToDate'}
response = self.conn.make_request(action=None, params=params, path='/', verb='GET')
data['ActiveHITs'] = self.conn._process_response(response).LongValue
params = {'Operation': 'GetRequesterStatistic', 'Statistic': 'NumberAssignmentsPending', 'TimePeriod': 'LifeToDate'}
response = self.conn.make_request(action=None, params=params, path='/', verb='GET')
data['PendingAssignments'] = self.conn._process_response(response).LongValue
Each of these requests spends about 1s waiting for Amazon to return data. How can I run the two of them in parallel, so that (ideally) it takes 1s to run instead of 2s?
You can use multiprocessing.Pool to parallelize the requests:
from multiprocessing import Pool

class Foo:
    def __fetch(self, statistic):
        # build and send the request for a single statistic
        params = {
            'Operation': 'GetRequesterStatistic',
            'Statistic': statistic,
            'TimePeriod': 'LifeToDate'
        }
        response = self.conn.make_request(
            action=None, params=params, path='/', verb='GET'
        )
        return self.conn._process_response(response).LongValue

    def get_stats(self):
        # each statistic is fetched by its own worker, so the requests overlap
        pool = Pool()
        results = pool.map(self.__fetch, [
            'NumberHITsAssignable', 'NumberAssignmentsPending'
        ])
        data['ActiveHITs'], data['PendingAssignments'] = results
This has the nice property of parallelizing any number of requests. By default one worker is created per CPU core; you can change that by passing an argument to Pool.
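For example, since only two requests are made here, the pool can be sized explicitly, and because the work is network-bound rather than CPU-bound, the thread-backed multiprocessing.pool.ThreadPool (same map interface as Pool, and no need to pickle self.conn into worker processes) works just as well. Below is a minimal, self-contained sketch that only illustrates the timing: the fetch function fakes the ~1s Amazon call with time.sleep and returns a dummy value instead of .LongValue.

from multiprocessing.pool import ThreadPool  # thread-based pool with the same API as Pool
import time

def fetch(statistic):
    # stand-in for make_request/_process_response: pretend each call takes ~1s
    time.sleep(1)
    return len(statistic)  # dummy value in place of .LongValue

start = time.time()
pool = ThreadPool(processes=2)  # two workers, one per request
try:
    active, pending = pool.map(fetch, [
        'NumberHITsAssignable', 'NumberAssignmentsPending'
    ])
finally:
    pool.close()
    pool.join()

print(active, pending)
print('took %.1fs' % (time.time() - start))  # roughly 1s rather than 2s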