Can I speed up an iterable class when I know its length in advance?

PEP 424 mentions in its "Rationale" section:

Being able to pre-allocate lists based on the expected size, as estimated by __length_hint__, can be a significant optimization. CPython has been observed to run some code faster than PyPy, purely because of this optimization being present.
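To make the mechanism concrete: operator.length_hint first tries __len__, then falls back to __length_hint__, and finally to a supplied default. A minimal sketch (the Hinted class is purely illustrative and not part of the experiments below):

from operator import length_hint

class Hinted(object):
    # Illustrative class: no __len__, but an (estimated) __length_hint__.
    def __iter__(self):
        return iter(())
    def __length_hint__(self):
        return 42  # an estimate, not a guarantee

print(length_hint([1, 2, 3]))    # 3  -> taken from __len__
print(length_hint(Hinted()))     # 42 -> falls back to __length_hint__
print(length_hint(object(), 7))  # 7  -> no hint available, default is used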

So I asked myself the question I'm now asking here: is it possible to speed up an iterable class that wraps an iterator, when its "length" can be correctly predicted from that knowledge?

My conclusion from the two experiments I did (the second one after receiving feedback from @TerryJanReedy):

There can be a significant (up to 50%) optimization in simple cases with long iterables, but in absolute terms it becomes negligible as soon as something more complicated is done with the items, or as soon as the iterable is very short.

Setup

I implemented one class that simply iterates over some iterator, and another map-like class that applies a function to each item. Both classes come in two variants: one without __length_hint__ and one that implements it.

I chose Cython to keep the Python overhead as small as possible:

from operator import length_hint

cdef class MyIter(object):
    cdef object it

    def __init__(self, iterable):
        self.it = iter(iterable)

    def __iter__(self):
        return self

    def __next__(self):
        return next(self.it)

cdef class MyIter2(object):
    cdef object it

    def __init__(self, iterable):
        self.it = iter(iterable)

    def __iter__(self):
        return self

    def __next__(self):
        return next(self.it)

    # --- This method is new ---
    def __length_hint__(self):
        return length_hint(self.it)

# Map-like classes

cdef class MyMap(object):
    cdef object func
    cdef object it

    def __init__(self, func, iterable):
        self.it = iter(iterable)
        self.func = func

    def __iter__(self):
        return self

    def __next__(self):
        return self.func(next(self.it))

cdef class MyMap2(object):
    cdef object func
    cdef object it

    def __init__(self, func, iterable):
        self.it = iter(iterable)
        self.func = func

    def __iter__(self):
        return self

    def __next__(self):
        return self.func(next(self.it))

    # --- This method is new ---
    def __length_hint__(self):
        return length_hint(self.it)
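The original post does not show the build step. As an assumption (not part of the original setup), one minimal way to compile these classes is a small setup script for a file named myiter.pyx:

# setup.py -- minimal build sketch; "myiter.pyx" is an assumed file name.
# Build in place with:  python setup.py build_ext --inplace
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("myiter.pyx"))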

Timing

I did the timings with Python 3.5 and IPython's %timeit command:

import random

lengths1 = []
timing1 = []
timing2 = []

lengths2 = []
timing3 = []
timing4 = []

for _ in range(30):
    i = random.randint(1, 1000000)
    lengths1.append(i)
    lst = list(range(i))

    res1 = %timeit -o list(MyIter(lst))
    timing1.append(res1)
    res2 = %timeit -o list(MyIter2(lst))
    timing2.append(res2)

    i = random.randint(1, 100000)  # factor 10 less items
    lengths2.append(i)
    lst = list(range(i))

    res3 = %timeit -o list(MyMap(float, lst))
    timing3.append(res3)
    res4 = %timeit -o list(MyMap2(float, lst))
    timing4.append(res4)
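For reference, the differences discussed below can be derived from the collected results like this — a minimal sketch, assuming the .best attribute (fastest run, in seconds) of IPython's TimeitResult objects is used:

# Assumed post-processing of the TimeitResult objects collected above;
# res.best is the fastest measured run in seconds.
diff = [t1.best - t2.best for t1, t2 in zip(timing1, timing2)]
rel_diff = [100 * (t1.best - t2.best) / t1.best
            for t1, t2 in zip(timing1, timing2)]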

The results for the time difference (timing1 - timing2) and the relative time difference (100 * (timing1 - timing2) / timing1):

MyIter

This shows a significant optimization (up to 50%) for long iterables.

MyMap

So the variant with __length_hint__ is sometimes faster, but not by what I would call a significant amount.

Generator/iterator terminology confusion aside, the __length_hint__ method is a very minor optimization that I would use only in special cases. I wrote a simple little test of my own:

class Range:

    def __init__(self, n):
        self._n = n
        self._i = 0

    def __iter__(self):
        return self

    def __next__(self):
        i = self._i
        if i >= self._n:
            raise StopIteration
        self._i += 1
        return i

class RangeWithHint(Range):

    def __length_hint__(self):
        return self._n

If this is used to build a list of values, the advantage of pre-allocating the list only becomes measurable for very large lists of around a million elements, and even then it is very small:

Python 3.6.0 (v3.6.0:41df79263a11, Dec 23 2016, 08:06:12) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> timeit("xs = list(Range(1000000))", "from __main__ import Range", number=10)
5.068971888250076
>>> timeit("xs = list(RangeWithHint(1000000))", "from __main__ import RangeWithHint", number=10)
4.7962311912107225
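A quick way to convince yourself that list() really consults the hint is to instrument it; NoisyRange is a hypothetical subclass added here only for illustration:

class NoisyRange(RangeWithHint):
    # Hypothetical subclass: report whenever the hint is requested.
    def __length_hint__(self):
        print("__length_hint__ called")
        return super().__length_hint__()

print(list(NoisyRange(3)))  # prints the message, then [0, 1, 2]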

The point: Python is already very, very fast at reallocating lists as they grow. Don't assume __length_hint__ will give you a big speedup.