Ordered reduce for multiple functions in Python
Ordered list reduction
I need to reduce some lists where, depending on the element types, the speed and implementation of the binary operation vary, i.e. a large speed-up can be gained by first reducing certain pairs with specific functions.
For example foo(a[0], bar(a[1], a[2]))
may be much slower than bar(foo(a[0], a[1]), a[2])
but in this case gives the same result.
I already have the code that produces an optimal ordering in the form of a list of tuples (pair_index, binary_function)
. I am struggling to implement an efficient function to perform the reduction, ideally one that returns a new partial function which can then be reused on lists with the same type ordering but different values.
Simple and slow(?) solution
Here is my naive solution, involving a for loop, deletion of elements, and closing over the (pair_index, binary_function)
list to return a 'precomputed' function.
def ordered_reduce(a, pair_indexes, binary_functions, precompute=False):
    """
    a: list to reduce, length n
    pair_indexes: order of pairs to reduce, length (n-1)
    binary_functions: functions to use for each reduction, length (n-1)
    """
    def ord_red_func(x):
        y = list(x)  # copy so as not to eat up
        for p, f in zip(pair_indexes, binary_functions):
            b = f(y[p], y[p+1])
            # Replace pair
            del y[p]
            y[p] = b
        return y[0]
    return ord_red_func if precompute else ord_red_func(a)
>>> foos = (lambda a, b: a - b, lambda a, b: a + b, lambda a, b: a * b)
>>> ordered_reduce([1, 2, 3, 4], (2, 1, 0), foos)
1
>>> 1 * (2 + (3-4))
1
And how the precomputation works:
>>> foo = ordered_reduce(None, (0, 1, 0), foos, precompute=True)
>>> foo([1, 2, 3, 4])
-7
>>> (1 - 2) * (3 + 4)
-7
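To make the evaluation order explicit, here is a small traced run of the same logic (trace_ordered_reduce is a helper of my own, not part of the question; it just prints the intermediate list after each step):

```python
def trace_ordered_reduce(x, pair_indexes, binary_functions):
    """Same loop as ord_red_func, but printing each intermediate list."""
    y = list(x)
    for p, f in zip(pair_indexes, binary_functions):
        b = f(y[p], y[p + 1])
        del y[p]   # drop y[p] ...
        y[p] = b   # ... and overwrite what was y[p+1] with the result
        print(y)
    return y[0]

foos = (lambda a, b: a - b, lambda a, b: a + b, lambda a, b: a * b)
result = trace_ordered_reduce([1, 2, 3, 4], (0, 1, 0), foos)
# prints [-1, 3, 4], then [-1, 7], then [-7]
print(result)  # -7, i.e. (1 - 2) * (3 + 4)
```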
However, it involves copying the entire list, and (hence?) it is also slow. Is there a better/standard way of doing this?
(Edit:) Some timings:
from operator import add
from functools import reduce
from itertools import repeat
from random import random
r = 100000
xs = [random() for _ in range(r)]
# slightly trivial choices of pairs and functions, to replicate reduce
ps = [0]*(r-1)
fs = repeat(add)
foo = ordered_reduce(None, ps, fs, precompute=True)
>>> %timeit reduce(add, xs)
100 loops, best of 3: 3.59 ms per loop
>>> %timeit foo(xs)
1 loop, best of 3: 1.44 s per loop
This is something of a worst case, and slightly cheating, as reduce does not accept an iterable of functions, but a function which does (but without the ordering) is still pretty fast:
def multi_reduce(fs, xs):
    xs = iter(xs)
    x = next(xs)
    for f, nx in zip(fs, xs):
        x = f(x, nx)
    return x
>>> %timeit multi_reduce(fs, xs)
100 loops, best of 3: 8.71 ms per loop
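Part of the gap is that with p = 0, `del y[0]` shifts the entire remaining list on every iteration, so the whole loop is quadratic in the list length. A rough way to see this (my own check, not from the original post) is to time the core of ord_red_func at two sizes:

```python
import timeit
from operator import add
from random import random

def fold_with_del(xs):
    # the body of ord_red_func specialised to p == 0 everywhere
    y = list(xs)
    while len(y) > 1:
        b = add(y[0], y[1])
        del y[0]   # O(len(y)): every element after index 0 is shifted
        y[0] = b
    return y[0]

for n in (5000, 10000):
    data = [random() for _ in range(n)]
    t = timeit.timeit(lambda: fold_with_del(data), number=3)
    print(n, t)  # doubling n roughly quadruples the time
```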
(EDIT2:) Just for fun, the performance of a massively cheating 'compiled' version, which gives some idea of how much total overhead is occurring.
from numba import jit

@jit(nopython=True)
def numba_sum(xs):
    y = 0
    for x in xs:
        y += x
    return y
>>> %timeit numba_sum(xs)
1000 loops, best of 3: 1.46 ms per loop
Seeing this question, I immediately thought of reverse Polish notation (RPN). While it may not be the best approach, it still gives a significant speed-up in this case.
My second thought was that you might get an equivalent result by simply reordering the sequence xs
appropriately to get rid of del y[p]
. (Arguably, the best performance would be achieved if the whole reduce procedure were written in C. But that's a different story.)
Reverse Polish notation
If you are not familiar with RPN, please read the short explanation in the Wikipedia article. Basically, all operations can be written down without parentheses. For example, (1-2)*(3+4)
in RPN is 1 2 - 3 4 + *
, while 1-(2*(3+4))
becomes 1 2 3 4 + * -
.
Here is a simple implementation of an RPN parser. I separated the list of objects from the RPN sequence, so that the same sequence can be used directly for different lists.
def rpn(arr, seq):
    '''
    Reverse Polish Notation algorithm
    (this version works only for binary operators)
    arr: array of objects
    seq: rpn sequence containing indices of objects from arr and functions
    '''
    stack = []
    for x in seq:
        if isinstance(x, int):
            # it's an object: push it onto the stack
            stack.append(arr[x])
        else:
            # it's a function: pop two objects, apply the function,
            # push the result back onto the stack
            b = stack.pop()
            # a = stack.pop()
            # stack.append(x(a, b))
            ## shortcut:
            stack[-1] = x(stack[-1], b)
    return stack.pop()
Usage example:
# Say we have an array
arr = [100, 210, 42, 13]
# and want to calculate
#     (100 - 210) * (42 + 13)
# It translates to RPN:
#     100 210 - 42 13 + *
# or
#     arr[0] arr[1] - arr[2] arr[3] + *
# So we apply (subtract, add, multiply being any suitable binary
# functions, e.g. np.subtract, np.add, np.multiply):
rpn(arr, [0, 1, subtract, 2, 3, add, multiply])
To apply RPN to your case, you either need to generate rpn sequences from scratch or to convert your (pair_indexes, binary_functions)
into them. I haven't thought about a converter, but it certainly can be done.
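For what it's worth, such a converter can be sketched by simulating ordered_reduce symbolically: keep one RPN fragment per remaining element, and combining positions p and p+1 concatenates their fragments and appends the function (this sketch is my addition, not part of the original answer):

```python
from operator import sub, add, mul

def to_rpn(n, pair_indexes, binary_functions):
    """Convert ordered_reduce's (pair_indexes, binary_functions)
    into an RPN sequence usable with rpn()."""
    # frags[i] is the RPN fragment that evaluates the i-th remaining element
    frags = [[i] for i in range(n)]
    for p, f in zip(pair_indexes, binary_functions):
        # reducing the pair at position p merges the two fragments
        frags[p] = frags[p] + frags[p + 1] + [f]
        del frags[p + 1]
    return frags[0]

seq = to_rpn(4, (0, 1, 0), (sub, add, mul))
# seq == [0, 1, sub, 2, 3, add, mul], i.e. (a[0]-a[1]) * (a[2]+a[3])
```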
Tests
Your original test comes first:
r = 100000
xs = [random() for _ in range(r)]
ps = [0]*(r-1)
fs = repeat(add)
foo = ordered_reduce(None, ps, fs, precompute=True)
rpn_seq = [0] + [x for i, f in zip(range(1,r), repeat(add)) for x in (i,f)]
rpn_seq2 = list(range(r)) + list(repeat(add,r-1))
# Here rpn_seq denotes (_ + (_ + (_ +( ... )...))))
# and rpn_seq2 denotes ((...( ... _)+ _) + _).
# Obviously, they are not equivalent but with 'add' they yield the same result.
%timeit reduce(add, xs)
100 loops, best of 3: 7.37 ms per loop
%timeit foo(xs)
1 loops, best of 3: 1.71 s per loop
%timeit rpn(xs, rpn_seq)
10 loops, best of 3: 79.5 ms per loop
%timeit rpn(xs, rpn_seq2)
10 loops, best of 3: 73 ms per loop
# Pure numpy just out of curiosity:
%timeit np.sum(np.asarray(xs))
100 loops, best of 3: 3.84 ms per loop
xs_np = np.asarray(xs)
%timeit np.sum(xs_np)
The slowest run took 4.52 times longer than the fastest. This could mean that an intermediate result is being cached
10000 loops, best of 3: 48.5 µs per loop
So, rpn
is 10 times slower than reduce
, but about 20 times faster than ordered_reduce
.
Now, let's try something more complicated: adding and multiplying matrices alternately. I need a special function to test reduce
with.
add_or_dot_b = 1
def add_or_dot(x, y):
    '''calls 'add' and 'np.dot' alternately'''
    global add_or_dot_b
    if add_or_dot_b:
        out = x + y
    else:
        out = np.dot(x, y)
    add_or_dot_b = 1 - add_or_dot_b
    # normalizing out to avoid `inf` in results
    return out / np.max(out)
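As an aside, the same alternation can be written without module-level state by closing over an itertools.cycle (a variant of mine, not from the original answer):

```python
import numpy as np
from itertools import cycle

def make_add_or_dot():
    """Return an add_or_dot-style function whose alternation state
    lives in a closure instead of a global variable."""
    ops = cycle([np.add, np.dot])
    def add_or_dot(x, y):
        out = next(ops)(x, y)
        return out / np.max(out)  # normalize to avoid inf in long chains
    return add_or_dot

f = make_add_or_dot()
```

Each call to make_add_or_dot yields an independent counter, so reruns of a benchmark start from 'add' again instead of inheriting stale global state.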
r = 100001 # +1 for convenience
# (we apply an even number of functions)
xs = [np.random.rand(2,2) for _ in range(r)]
ps = [0]*(r-1)
fs = repeat(add_or_dot)
foo = ordered_reduce(None, ps, fs, precompute=True)
rpn_seq = [0] + [x for i, f in zip(range(1,r), repeat(add_or_dot)) for x in (i,f)]
%timeit reduce(add_or_dot, xs)
1 loops, best of 3: 894 ms per loop
%timeit foo(xs)
1 loops, best of 3: 2.72 s per loop
%timeit rpn(xs, rpn_seq)
1 loops, best of 3: 1.17 s per loop
Here, rpn
is about 25% slower than reduce
, and more than 2 times faster than ordered_reduce
.