How to tackle inconsistent results while using pandas rolling correlation?

First, let me say that to reproduce the problem I need a large amount of data, and that is part of the problem: I cannot predict when the anomalies will show up. Anyway, the data is too large (~13k rows, 2 columns) to paste into the question, so I have added a pastebin link at the end of the question.


For the past few days I have been running into a peculiar problem with pandas.core.window.rolling.Rolling.corr. I have a dataset on which I am trying to calculate rolling correlations. Here is the problem:

While calculating rolling (window_size=100) correlations between two columns (a and b), some indices (one such index is 12981) give near-zero values (of the order of 1e-10), where they should ideally return nan or inf (because all values in one column are constant). However, if I calculate the standalone correlation for that index (i.e. over the last 100 rows of data, including the said index), or run the rolling calculation on a smaller number of rows (e.g. 300 or 1000 as opposed to 13k), I get the correct result (i.e. nan or inf).

Expectation:

>>> df = pd.read_csv('sample_corr_data.csv') # link at the end,  ## columns = ['a', 'b']
>>> df.a.tail(100).value_counts()

 0.000000    86
-0.000029     3
 0.000029     3
-0.000029     2
 0.000029     2
-0.000029     2
 0.000029     2
Name: a, dtype: int64

>>> df.b.tail(100).value_counts()     # all 100 values are same
 
6.0    100
Name: b, dtype: int64

>>> df.a.tail(100).corr(df.b.tail(100))
nan                                      # expected, because column 'b' has same value throughout

# Made sure of this using,
# 1. np.corrcoef, because pandas uses this internally to calculate pearson moments
>>> np.corrcoef(df.a.tail(100), df.b.tail(100))[0, 1]
nan

# 2. using custom function
>>> def pearson(a, b):
        n = a.size
        num = n*np.nansum(a*b) - np.nansum(a)*np.nansum(b)
        den = (n*np.nansum(a**2) - np.nansum(a)**2)*(n*np.nansum(b**2) - np.nansum(b)**2)
        # guard: return nan when the denominator is zero or the product is not finite
        return num/np.sqrt(den) if den * np.isfinite(den*num) else np.nan

>>> pearson(df.a.tail(100), df.b.tail(100))
nan

Now, the reality:

>>> df.a.rolling(100).corr(df.b).tail(3)
 
12979    7.761921e-07
12980    5.460717e-07
12981    2.755881e-10                    # This should have been NaN/inf !!

## Furthermore!!

>>> debug = df.tail(300)
>>> debug.a.rolling(100).corr(debug.b).tail(3)

12979    7.761921e-07
12980    5.460717e-07
12981            -inf                    # Got -inf, fine
dtype: float64

>>> debug = df.tail(3000)
>>> debug.a.rolling(100).corr(debug.b).tail(3)
 
12979    7.761921e-07
12980    5.460717e-07
12981             inf                     # Got +inf, still acceptable
dtype: float64

This keeps happening up until 9369 rows:

>>> debug = df.tail(9369)
>>> debug.a.rolling(100).corr(debug.b).tail(3)

12979    7.761921e-07
12980    5.460717e-07
12981             inf
dtype: float64

# then
>>> debug = df.tail(9370)
>>> debug.a.rolling(100).corr(debug.b).tail(3)

12979    7.761921e-07
12980    5.460717e-07
12981    4.719615e-10                    # SPOOKY ACTION IN DISTANCE!!!
dtype: float64

>>> debug = df.tail(10000)
>>> debug.a.rolling(100).corr(debug.b).tail(3)
 
12979    7.761921e-07
12980    5.460717e-07
12981    1.198994e-10                    # SPOOKY ACTION IN DISTANCE!!!    
dtype: float64

Current workaround:

>>> df.a.rolling(100).apply(lambda x: x.corr(df.b.reindex(x.index))).tail(3)   # PREDICTABLY, VERY SLOW!

12979    7.761921e-07
12980    5.460717e-07
12981             NaN
Name: a, dtype: float64

# again this checks out using other methods,
>>> df.a.rolling(100).apply(lambda x: np.corrcoef(x, df.b.reindex(x.index))[0, 1]).tail(3)
 
12979    7.761921e-07
12980    5.460717e-07
12981             NaN
Name: a, dtype: float64

>>> df.a.rolling(100).apply(lambda x: pearson(x, df.b.reindex(x.index))).tail(3)

12979    7.761921e-07
12980    5.460717e-07
12981             NaN
Name: a, dtype: float64
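
A faster alternative (a sketch only, not thoroughly tested): keep the fast built-in rolling correlation, and invalidate windows whose rolling variance is numerically zero. The threshold is data-dependent; masking column 'a' the same way would need a far smaller one, since its legitimate rolling variances are themselves of order 1e-9 (its values are around ±0.000029):

>>> rc = df.a.rolling(100).corr(df.b)          # fast built-in rolling corr
>>> flat_b = df.b.rolling(100).var() < 1e-8    # 1e-8 is an arbitrary, data-dependent threshold
>>> rc[flat_b] = np.nan                        # invalidate windows where 'b' is numerically constant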

As far as I understand, the result of series.rolling(n).corr(other_series) should match the output of the following:

>>> def rolling_corr(series, other_series, n=100):
        return pd.Series(
            [np.nan]*(n-1)
            + [series[i-n:i].corr(other_series[i-n:i])
               for i in range(n, series.size+1)]
        )

>>> rolling_corr(df.a, df.b).tail(3)

12979    7.761921e-07
12980    5.460717e-07
12981             NaN

At first I thought this was a floating-point arithmetic issue (because initially, in some cases, I could tackle it by rounding column 'a' to 5 decimal places, or by casting to float32), but in that case the issue would be present regardless of the number of samples used. So there must be some problem with rolling itself, or at least rolling gives rise to the floating-point issue depending on the size of the data. I checked the source code of rolling.corr, but could not find anything that would explain such an inconsistency. Now I am worried about how much of my past code is plagued by this issue.
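
For concreteness, the partial mitigations mentioned above would look roughly like this (a sketch of what is described, not code from the original post):

>>> df.a.round(5).rolling(100).corr(df.b)                              # round 'a' to 5 decimal places
>>> df.a.astype('float32').rolling(100).corr(df.b.astype('float32'))   # or cast to float32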

What is the reason behind this, and how do I tackle it? If this happens because pandas prefers speed over accuracy (as has been suggested), does that mean I can never reliably use pandas.rolling operations on a large sample? How would I know the size at which this inconsistency appears?


sample_corr_data.csv: https://pastebin.com/jXXHSv3r

Note: different OSes return different values at the above index, but all of them are finite and close to 0.

What happens if you replace the sums in the Pearson formula with rolling sums?


def rolling_pearson(a, b, n):
    # rolling sums of every term in the closed-form Pearson formula
    a_sum = a.rolling(n).sum()
    b_sum = b.rolling(n).sum()
    ab_sum = (a*b).rolling(n).sum()
    aa_sum = (a**2).rolling(n).sum()
    bb_sum = (b**2).rolling(n).sum()

    num = n*ab_sum - a_sum*b_sum
    den = (n*aa_sum - a_sum**2) * (n*bb_sum - b_sum**2)
    return num / den**0.5

rolling_pearson(df.a, df.b, 100)

             ...     
12977    1.109077e-06
12978    9.555249e-07
12979    7.761921e-07
12980    5.460717e-07
12981             inf
Length: 12982, dtype: float64

Why does this happen?

In order to answer this question, I needed to check the implementation. The variance of the last 100 samples of b is indeed zero, and the rolling correlation is computed as a.cov(b) / (a.var() * b.var())**0.5.
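
This decomposition can be reproduced with pandas' own rolling covariance and variances (my own check, built from the formula above, not code from the post):

n = 100
cov_ab = df.a.rolling(n).cov(df.b)
var_a = df.a.rolling(n).var()
var_b = df.b.rolling(n).var()
manual_corr = cov_ab / (var_a * var_b)**0.5   # mirrors rolling corr

When var_b underflows to a tiny positive number instead of exactly zero, this quotient becomes a finite but meaningless value rather than NaN.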

After some searching I found the rolling variance implementation here; the method it uses is Welford's online algorithm. This algorithm is nice because you can add a single sample using only one multiplication (the same as the approaches based on cumulative sums), and each update needs only a single division by the integer count. Here it is, rewritten in Python:

def welford_add(existingAggregate, newValue):
    # add one sample to the running (count, mean, M2) aggregate
    if pd.isna(newValue):
        return existingAggregate
    (count, mean, M2) = existingAggregate
    count += 1
    delta = newValue - mean
    mean += delta / count
    delta2 = newValue - mean
    M2 += delta * delta2
    return (count, mean, M2)

def welford_remove(existingAggregate, newValue):
    # remove one sample from the running aggregate (inverse of welford_add)
    if pd.isna(newValue):
        return existingAggregate
    (count, mean, M2) = existingAggregate
    count -= 1
    delta = newValue - mean
    mean -= delta / count
    delta2 = newValue - mean
    M2 -= delta * delta2
    return (count, mean, M2)

def finalize(existingAggregate):
    # turn the aggregate into (mean, population variance, sample variance)
    (count, mean, M2) = existingAggregate
    (mean, variance, sampleVariance) = (mean,
            M2 / count if count > 0 else None,
            M2 / (count - 1) if count > 1 else None)
    return (mean, variance, sampleVariance)

In the pandas implementation they mention Kahan summation, which matters for getting better precision in the additions, but the results are not improved by it (I have not checked whether it is implemented correctly).
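
For reference, here is a minimal sketch of Kahan (compensated) summation; this is only the textbook algorithm, not pandas' actual code:

def kahan_sum(values):
    total = 0.0
    compensation = 0.0          # running compensation for lost low-order bits
    for v in values:
        y = v - compensation    # apply the error carried over from the last step
        t = total + y
        compensation = (t - total) - y   # the part of y that did not survive the add
        total = t
    return total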

Applying the Welford algorithm with n = 100:

n = 100
s = (0, 0, 0)
for i in range(len(df.b)):
    if i >= n:
        s = welford_remove(s, df.b[i-n])   # drop the sample leaving the window
    s = welford_add(s, df.b[i])            # add the sample entering the window
finalize(s)

gives

(6.000000000000152, 4.7853099260919405e-12, 4.8336463899918594e-12)

while df.b.rolling(100).var() gives

0                 NaN
1                 NaN
2                 NaN
3                 NaN
4                 NaN
             ...     
12977    6.206061e-01
12978    4.703030e-01
12979    3.167677e-01
12980    1.600000e-01
12981    6.487273e-12
Name: b, Length: 12982, dtype: float64

The error of 6.4e-12 is slightly higher than the 4.83e-12 obtained by applying the Welford method directly.

On the other hand, (df.b**2).rolling(n).sum() - df.b.rolling(n).sum()**2/n gives 0.0 for the last entry, because for a window of constant values both terms agree exactly in floating point, so the subtraction cancels completely:

0          NaN
1          NaN
2          NaN
3          NaN
4          NaN
         ...  
12977    61.44
12978    46.56
12979    31.36
12980    15.84
12981     0.00
Name: b, Length: 12982, dtype: float64
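
A tiny standalone check of why the subtraction cancels exactly here (an illustration reduced to a single constant window):

import numpy as np

b_win = np.full(100, 6.0)                  # the final window: 100 copies of 6.0
(b_win**2).sum() - b_win.sum()**2 / 100    # 0.0 -- both terms are exactly 3600.0

Every intermediate value (36, 600, 360000) is exactly representable in floating point, so the two terms agree bit for bit and the difference is exactly zero, while Welford's incremental updates leave a small residue behind.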

I hope this explanation satisfies you :)