Vectorize integration of pandas.DataFrame
I have a DataFrame of force-displacement data. The displacement array has been set as the DataFrame index, and the columns are my various force curves for different tests.
How do I calculate the work done (i.e. "the area under the curve")?
I've looked at numpy.trapz, which seems to do what I need, but I think that I can avoid looping over each column like this:
import numpy as np
import pandas as pd
forces = pd.read_csv(...)
work_done = {}
for col in forces.columns:
    work_done[col] = np.trapz(forces.loc[col], forces.index))
I was hoping to create a new DataFrame of the areas under the curves rather than a dict, and thought that DataFrame.apply() or something else might be appropriate, but I don't know where to start looking.
In short:
- Can I avoid the looping?
- Can I create a DataFrame of work done directly?
Thanks in advance for your help.
You can vectorize this by passing the whole DataFrame to np.trapz and specifying the axis= argument, e.g.:
import numpy as np
import pandas as pd
# some random input data
gen = np.random.RandomState(0)
x = gen.randn(100, 10)
names = [chr(97 + i) for i in range(10)]
forces = pd.DataFrame(x, columns=names)
# vectorized version
wrk = np.trapz(forces, x=forces.index, axis=0)
work_done = pd.DataFrame(wrk[None, :], columns=forces.columns)
# non-vectorized version for comparison
work_done2 = {}
for col in forces.columns:
    work_done2.update({col: np.trapz(forces.loc[:, col], forces.index)})
These give the following output:
from pprint import pprint
pprint(work_done.T)
# 0
# a -24.331560
# b -10.347663
# c 4.662212
# d -12.536040
# e -10.276861
# f 3.406740
# g -3.712674
# h -9.508454
# i -1.044931
# j 15.165782
pprint(work_done2)
# {'a': -24.331559643023006,
# 'b': -10.347663159421426,
# 'c': 4.6622123535050459,
# 'd': -12.536039649161403,
# 'e': -10.276861220217308,
# 'f': 3.4067399176289994,
# 'g': -3.7126739591045541,
# 'h': -9.5084536839888187,
# 'i': -1.0449311137294459,
# 'j': 15.165781517623724}
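As an aside, if a one-row DataFrame feels awkward, the same vectorized result can be kept as a pandas.Series keyed by column name; a minimal sketch reusing the wrk array from above:
# one integral per test column, labelled by the original column names
work_done_series = pd.Series(wrk, index=forces.columns)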
Your original example also has a couple of other problems: col is a column name rather than a row index, so it needs to index the second dimension of your dataframe (i.e. .loc[:, col] rather than .loc[col]). Also, the last line has an extra trailing parenthesis.
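For reference, the original loop with those two fixes applied would look something like this (a sketch, assuming forces is the force-displacement DataFrame from the question and np is numpy as imported there):
work_done = {}
for col in forces.columns:
    # select the column with .loc[:, col] and drop the stray trailing ")"
    work_done[col] = np.trapz(forces.loc[:, col], forces.index)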
Edit:
You could also generate the output DataFrame directly by .applying np.trapz to each column, e.g.:
work_done = forces.apply(np.trapz, axis=0, args=(forces.index,))
However, this isn't really 'proper' vectorization - you are still calling np.trapz separately on each column. You can see this by comparing the speed of the .apply version against calling np.trapz directly:
In [1]: %timeit forces.apply(np.trapz, axis=0, args=(forces.index,))
1000 loops, best of 3: 582 µs per loop
In [2]: %timeit np.trapz(forces, x=forces.index, axis=0)
The slowest run took 6.04 times longer than the fastest. This could mean that an
intermediate result is being cached
10000 loops, best of 3: 53.4 µs per loop
That's not an entirely fair comparison, since the second version excludes the extra time taken to construct the DataFrame from the output numpy array, but this should still be smaller than the time taken to perform the actual integration.
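For completeness, a fairer timing would include the DataFrame construction in the vectorized version; something along these lines (output omitted, since the numbers will depend on your machine):
In [3]: %timeit pd.DataFrame(np.trapz(forces, x=forces.index, axis=0)[None, :], columns=forces.columns)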
Here is how to get the cumulative integral along the columns of a DataFrame using the trapezoidal rule. Alternatively, the following creates a pandas.Series method for choosing between the trapezoidal, Simpson's, or Romberg's rule (source):
import pandas as pd
from scipy import integrate
import numpy as np
#%% Setup Functions
def integrate_method(self, how='trapz', unit='s'):
    '''Numerically integrate the time series.

    @param how: the method to use (trapz by default)
    @return

    Available methods:
     * trapz - trapezoidal
     * cumtrapz - cumulative trapezoidal
     * simps - Simpson's rule
     * romb - Romberg's rule

    See http://docs.scipy.org/doc/scipy/reference/integrate.html for details on each method,
    or the source code:
    https://github.com/scipy/scipy/blob/master/scipy/integrate/quadrature.py
    '''
    available_rules = set(['trapz', 'cumtrapz', 'simps', 'romb'])
    if how in available_rules:
        rule = integrate.__getattribute__(how)
    else:
        print('Unsupported integration rule: %s' % (how))
        print('Expecting one of these sample-based integration rules: %s' % (str(list(available_rules))))
        raise AttributeError
    if how == 'cumtrapz':
        result = rule(self.values)
        result = np.insert(result, 0, 0, axis=0)
    else:
        result = rule(self.values)
    return result
pd.Series.integrate = integrate_method
#%% Setup (random) data
gen = np.random.RandomState(0)
x = gen.randn(100, 10)
names = [chr(97 + i) for i in range(10)]
df = pd.DataFrame(x, columns=names)
#%% Cumulative Integral
df_cumulative_integral = df.apply(lambda x: x.integrate('cumtrapz'))
df_integral = df.apply(lambda x: x.integrate('trapz'))
df_do_they_match = df_cumulative_integral.tail(1).round(3) == df_integral.round(3)
if df_do_they_match.all().all():
    print("Trapz produces the last row of cumtrapz")