Efficient count distinct across columns of a DataFrame, grouped by rows

For each row of a DataFrame, what is the fastest way (within the bounds of normal pythonicity) to count the distinct values across columns of the same dtype?

Details: I have a DataFrame of categorical outcomes by subject (in rows) by day (in columns), similar to something generated by the following.

import numpy as np
import pandas as pd

def genSampleData(custCount, dayCount, discreteChoices):
    """generate example dataset"""
    np.random.seed(123)
    return pd.concat([
               pd.DataFrame({'custId': np.arange(1, int(custCount) + 1)}),
               pd.DataFrame(
                   columns=['day%d' % x for x in range(1, int(dayCount) + 1)],
                   data=np.random.choice(a=np.array(discreteChoices),
                                         size=(int(custCount), int(dayCount))))
               ], axis=1)

For example, if this dataset told us which drink each customer ordered on each visit to a store, I would like to know the count of distinct drinks per customer.

# notional discrete choice outcome          
drinkOptions, drinkIndex = np.unique(['coffee','tea','juice','soda','water'], 
                                     return_inverse=True) 

# integer-coded discrete choice outcomes
d = genSampleData(2,3, drinkIndex)
d
#   custId  day1  day2  day3
#0       1     1     4     1
#1       2     3     2     1

# Count distinct choices per subject -- this is what I want to do efficiently on larger DF
d.iloc[:,1:].apply(lambda x: len(np.unique(x)), axis=1)
#0    2
#1    3

# Note: I have coded the choices as `int` rather than `str` to speed up comparisons.
# To reconstruct the choice names, we could do:
# d.iloc[:,1:] = drinkOptions[d.iloc[:,1:]]

What I have tried: The datasets in this use case will have many more subjects than days (example testDf below), so I have tried to find the most efficient row-wise operation:

testDf = genSampleData(100000,3, drinkIndex)

#---- Original attempts ----
%timeit -n20 testDf.iloc[:,1:].apply(lambda x: x.nunique(), axis=1)
# I didn't wait for this to finish -- something more than 5 seconds per loop
%timeit -n20 testDf.iloc[:,1:].apply(lambda x: len(x.unique()), axis=1)
# Also too slow
%timeit -n20 testDf.iloc[:,1:].apply(lambda x: len(np.unique(x)), axis=1)
#20 loops, best of 3: 2.07 s per loop

To improve on my initial attempts, we note that pandas.DataFrame.apply() accepts the argument:

If raw=True the passed function will receive ndarray objects instead. If you are just applying a NumPy reduction function this will achieve much better performance

This did cut the runtime by more than half:

%timeit -n20 testDf.iloc[:,1:].apply(lambda x: len(np.unique(x)), axis=1, raw=True)
#20 loops, best of 3: 721 ms per loop *best so far*

It surprised me that a pure numpy solution, which seems to be equivalent to the above with raw=True, was actually a bit slower:

%timeit -n20 np.apply_along_axis(lambda x: len(np.unique(x)), axis=1, arr = testDf.iloc[:,1:].values)
#20 loops, best of 3: 1.04 s per loop

Finally, I also tried transposing the data in order to count distinct down the columns, which I thought might be more efficient (at least for DataFrame.apply()), but there doesn't seem to be a meaningful difference.

%timeit -n20 testDf.iloc[:,1:].T.apply(lambda x: len(np.unique(x)), raw=True)
#20 loops, best of 3: 712 ms per loop *best so far*
%timeit -n20 np.apply_along_axis(lambda x: len(np.unique(x)), axis=0, arr = testDf.iloc[:,1:].values.T)
# 20 loops, best of 3: 1.13 s per loop

My best solution so far is the odd combination of df.apply with len(np.unique()), but what else should I try?

Using pandas.melt with DataFrame.groupby and SeriesGroupBy.nunique seems to blow the other solutions away:

%timeit -n20 pd.melt(testDf, id_vars ='custId').groupby('custId').value.nunique()
#20 loops, best of 3: 67.3 ms per loop
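For intuition: melt reshapes the wide table into long format (one custId/variable/value row per original cell), so nunique runs once per custId group instead of through a Python-level lambda per row. On the small example d from the question, the output follows from the data shown above:

pd.melt(d, id_vars='custId').groupby('custId').value.nunique()
# custId
# 1    2
# 2    3
# Name: value, dtype: int64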

You don't need custId. I'd stack, then groupby:

testDf.iloc[:, 1:].stack().groupby(level=0).nunique()
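stack keeps the original row index as the first level of its MultiIndex, so groupby(level=0) groups by customer row. On the small example d this gives the expected counts:

d.iloc[:, 1:].stack().groupby(level=0).nunique()
# 0    2
# 1    3
# dtype: int64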

My understanding is that nunique is optimized for large Series. Here you have only 3 days. Comparing each column against the others seems to be faster:

testDf = genSampleData(100000,3, drinkIndex)
days = testDf.columns[1:]

%timeit testDf.iloc[:, 1:].stack().groupby(level=0).nunique()
10 loops, best of 3: 46.8 ms per loop

%timeit pd.melt(testDf, id_vars ='custId').groupby('custId').value.nunique()
10 loops, best of 3: 47.6 ms per loop

%%timeit
testDf['nunique'] = 1
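# col2 contributes a new distinct value for a row only if it matches
# none of the earlier day columns in that row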
for col1, col2 in zip(days, days[1:]):
    testDf['nunique'] += ~(testDf[[col2]].values == testDf.loc[:, 'day1':col1].values).any(axis=1)
100 loops, best of 3: 3.83 ms per loop
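As a quick correctness check (not a benchmark), the same pairwise comparison reproduces the expected counts on the small d from the question:

days = d.columns[1:]
d['nunique'] = 1
for col1, col2 in zip(days, days[1:]):
    d['nunique'] += ~(d[[col2]].values == d.loc[:, 'day1':col1].values).any(axis=1)
d['nunique'].tolist()
# [2, 3]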

Of course it loses its edge as you add more columns. For different numbers of columns (same order: stack().groupby(), pd.melt().groupby(), and the loop):

10 columns: 143ms, 161ms, 30.9ms
50 columns: 749ms, 968ms, 635ms
100 columns: 1.52s, 2.11s, 2.33s
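For wider frames, another fully vectorized option (a sketch, not among the timed solutions above; it assumes integer-coded choices with no NaNs, as in testDf) is to sort each row and count the boundaries between runs of equal values, which keeps the work in a fixed number of numpy operations regardless of column count:

import numpy as np

def row_nunique(a):
    """Count distinct values per row of a 2-D array."""
    s = np.sort(a, axis=1)                              # sort within each row
    return (np.diff(s, axis=1) != 0).sum(axis=1) + 1    # count run boundaries, +1 for the first run

row_nunique(testDf.iloc[:, 1:].values)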