How to iterate over a dictionary - n key-value pairs at a time
I have a very large dictionary with thousands of elements. I need to execute a function with this dictionary as an argument. Instead of passing the entire dictionary in a single call, I want to execute the function in batches - with x key-value pairs of the dictionary at a time.
I'm currently doing the following:
mydict = ##some large hash
x = ##batch size

def some_func(data):
    ##do something on data

temp = {}
for key, value in mydict.iteritems():
    if len(temp) != 0 and len(temp) % x == 0:
        some_func(temp)
        temp = {}
        temp[key] = value
    else:
        temp[key] = value
if temp != {}:
    some_func(temp)
This feels hacky to me. I'm wondering whether there is an elegant/better way of doing this.
I often use this little utility:
import itertools

def chunked(it, size):
    it = iter(it)
    while True:
        p = tuple(itertools.islice(it, size))
        if not p:
            break
        yield p
For your use case:
for chunk in chunked(big_dict.iteritems(), batch_size):
    func(chunk)
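On Python 3, `dict.iteritems()` no longer exists; a minimal sketch of the same utility ported to Python 3, batching a small example dict (the sample data here is illustrative, not from the question):

```python
from itertools import islice

def chunked(it, size):
    """Yield tuples of up to `size` items from any iterable."""
    it = iter(it)
    while True:
        p = tuple(islice(it, size))
        if not p:
            break
        yield p

# Batch a dict's items three at a time; each chunk is a tuple of (key, value) pairs.
big_dict = {i: i * i for i in range(7)}
batches = [dict(chunk) for chunk in chunked(big_dict.items(), 3)]
print(batches)  # three dicts of sizes 3, 3 and 1
```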
Here are two solutions adapted from my earlier answers.
You could simply get the list of items from the dictionary and create new dicts from slices of that list. This is not optimal, though, because it does a lot of copying of that huge dictionary.
def chunks(dictionary, size):
    items = dictionary.items()
    return (dict(items[i:i+size]) for i in range(0, len(items), size))
Alternatively, you could use some functions from the itertools module to generate (yield) new sub-dictionaries as you loop. This is similar to @georg's answer, just using a for loop.
from itertools import chain, islice

def chunks(dictionary, size):
    iterator = dictionary.iteritems()
    for first in iterator:
        yield dict(chain([first], islice(iterator, size - 1)))
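The same generator works on Python 3 by swapping `iteritems()` for an iterator over `items()`; a sketch, using the example dict from the usage section below:

```python
from itertools import chain, islice

def chunks(dictionary, size):
    """Yield sub-dicts of at most `size` items, without building one big list."""
    iterator = iter(dictionary.items())
    for first in iterator:
        # `first` starts each chunk; islice pulls up to size - 1 more pairs.
        yield dict(chain([first], islice(iterator, size - 1)))

mydict = {i + 1: chr(i + 65) for i in range(26)}  # {1: 'A', ..., 26: 'Z'}
sizes = [len(d) for d in chunks(mydict, 10)]
print(sizes)  # [10, 10, 6]
```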
Example usage, for either version:
mydict = {i+1: chr(i+65) for i in range(26)}
for sub_d in chunks(mydict, 10):
    some_func(sub_d)
# Python 2: izip_longest pads the last group with a sentinel object.
from itertools import izip_longest

_marker = object()  # private sentinel used as padding

def chunked(iterable, n):
    """Break an iterable into lists of a given length::

        >>> list(chunked([1, 2, 3, 4, 5, 6, 7], 3))
        [[1, 2, 3], [4, 5, 6], [7]]

    If the length of ``iterable`` is not evenly divisible by ``n``, the last
    returned list will be shorter.

    This is useful for splitting up a computation on a large number of keys
    into batches, to be pickled and sent off to worker processes. One example
    is operations on rows in MySQL, which does not implement server-side
    cursors properly and would otherwise load the entire dataset into RAM on
    the client.
    """
    # Doesn't seem to run into any number-of-args limits.
    for group in (list(g) for g in izip_longest(*[iter(iterable)] * n,
                                                fillvalue=_marker)):
        if group[-1] is _marker:
            # If this is the last group, shuck off the padding:
            del group[group.index(_marker):]
        yield group
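For reference, the padding trick above carries over to Python 3 with `itertools.zip_longest`; a sketch, with `_marker` defined as a plain `object()` sentinel:

```python
from itertools import zip_longest

_marker = object()  # private sentinel used to spot padding in the last group

def chunked(iterable, n):
    """Break an iterable into lists of length n; the last list may be shorter."""
    for group in (list(g) for g in zip_longest(*[iter(iterable)] * n,
                                               fillvalue=_marker)):
        if group[-1] is _marker:
            # Last group: strip off the sentinel padding.
            del group[group.index(_marker):]
        yield group

print(list(chunked([1, 2, 3, 4, 5, 6, 7], 3)))  # [[1, 2, 3], [4, 5, 6], [7]]
```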