Split a dataframe by a cumulative sum threshold
print(df)

   A   B
0  0  10
1  1  30
2  2  50
3  3  20
4  4  10
5  5  30

I want to split it into chunks using a threshold of 50 on the cumulative sum of column B, so the desired output is:

   A   B
0  0  10
1  1  30

   A   B
2  2  50

   A   B
3  3  20
4  4  10
5  5  30
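For reference, here is a minimal construction of the sample frame (a sketch: the question does not show how df was built, so the values are simply taken from the printed output above):

import pandas as pd

# sample frame matching the printed output above
df = pd.DataFrame({'A': range(6), 'B': [10, 30, 50, 20, 10, 30]})
print(df)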
You can use pd.cut on the cumulative sum of column B:
th = 50
# find the cumulative sum of B
cumsum = df.B.cumsum()
# create the bins with spacing of th (threshold)
bins = list(range(0, cumsum.max() + 1, th))
# group by (split by) the bins
groups = pd.cut(cumsum, bins)
for key, group in df.groupby(groups):
    print(group)
    print()
Output:
   A   B
0  0  10
1  1  30

   A   B
2  2  50

   A   B
3  3  20
4  4  10
5  5  30
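Note that the bin list above stops at cumsum.max(), so if the total is not an exact multiple of th the trailing rows fall outside the last bin, get a NaN label from pd.cut, and are silently dropped by groupby. A small sketch of an equivalent grouping that avoids building a bin list, assuming B holds positive integers and reusing the df from above:

th = 50
# (cumsum - 1) // th reproduces the right-closed bins (0, th], (th, 2*th], ...
# without an explicit bin list, so a trailing partial chunk is kept
groups = (df.B.cumsum() - 1) // th
for _, group in df.groupby(groups):
    print(group)
    print()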
Here is an approach that uses numba to speed up our for loop. We check when the running total reaches the limit, then reset it and assign a new group:
from numba import njit
import numpy as np

@njit
def cumsum_reset(array, limit):
    total = 0
    counter = 0
    groups = np.empty(array.shape[0])
    for idx, i in enumerate(array):
        total += i
        # start a new group when the running total reaches the limit,
        # or when the previous value was exactly the limit
        if total >= limit or array[idx-1] == limit:
            counter += 1
            groups[idx] = counter
            total = 0
        else:
            groups[idx] = counter
    return groups
grps = cumsum_reset(df['B'].to_numpy(), 50)
for _, grp in df.groupby(grps):
    print(grp, '\n')
Output:
   A   B
0  0  10
1  1  30

   A   B
2  2  50

   A   B
3  3  20
4  4  10
5  5  30
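If you need the chunks as separate dataframes rather than printed output, the same groupby can be collected into a list (a small usage sketch; chunks is just a name introduced here):

# collect each group into its own dataframe
chunks = [grp for _, grp in df.groupby(grps)]
len(chunks)  # -> 3 for the sample frame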
Timings:
# create dataframe of 600k rows
dfbig = pd.concat([df]*100000, ignore_index=True)
dfbig.shape
(600000, 2)
# Erfan
%%timeit
cumsum_reset(dfbig['B'].to_numpy(), 50)
4.25 ms ± 46.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# Daniel Mesejo
def daniel_mesejo(th, column):
    cumsum = column.cumsum()
    bins = list(range(0, cumsum.max() + 1, th))
    groups = pd.cut(cumsum, bins)
    return groups
%%timeit
daniel_mesejo(50, dfbig['B'])
10.3 s ± 2.17 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
Conclusion: the numba for loop is roughly 2400x faster (10.3 s vs 4.25 ms per loop).
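%%timeit is IPython cell magic; outside a notebook a rough version of the same comparison can be done with the standard timeit module (a sketch only: absolute numbers vary by machine, and the numba function is called once beforehand so JIT compilation is not measured):

import timeit

b = dfbig['B'].to_numpy()
cumsum_reset(b, 50)  # warm up the numba JIT

numba_time = timeit.timeit(lambda: cumsum_reset(b, 50), number=100) / 100
cut_time = timeit.timeit(lambda: daniel_mesejo(50, dfbig['B']), number=1)

print(f'numba:  {numba_time * 1e3:.2f} ms per loop')
print(f'pd.cut: {cut_time:.2f} s per loop')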