Computing the first principal component of sklearn's PCA

I have the following code which successfully computes the largest principal component of my data:

import pandas as pd
from sklearn.decomposition import PCA

lst = ['date', 'MA(1,9)', 'MA(1,12)', 'MA(2,9)', 'MA(2,12)', 'MA(3,9)', 'MA(3,12)', 'MOM(9)', 'MOM(12)', 'VOL(1,9)', 'VOL(1,12)', 'VOL(2,9)', 'VOL(2,12)', 'VOL(3,9)', 'VOL(3,12)']
df = pd.read_excel(filename, sheet_name='daily', header=0, names=lst)
df = df.set_index('date')
df = df.loc[start_date:end_date]
pca = PCA()
pca = pca.fit(df)
print(pca.components_)
#print(pca.explained_variance_[0])
df = pd.DataFrame(pca.transform(df), columns=['PCA%i' % i for i in range(14)], index=df.index)

Is there a way to get the first principal component without computing it myself? (Does sklearn have some attribute for this that I can't find?)

My data:

            MA(1,9)  MA(1,12)  MA(2,9)  MA(2,12)  MA(3,9)  MA(3,12)  MOM(9)  \
date                                                                          
1990-06-08        1         1        1         1        1         1       1   
1990-06-11        1         1        1         1        1         1       1   
1990-06-12        1         1        1         1        1         1       1   
1990-06-13        1         1        1         1        1         1       1   
1990-06-14        1         1        1         1        1         1       1   

            MOM(12)  VOL(1,9)  VOL(1,12)  VOL(2,9)  VOL(2,12)  VOL(3,9)  \
date                                                                      
1990-06-08        1         1          0         1          1         1   
1990-06-11        1         1          1         1          1         1   
1990-06-12        1         0          0         1          1         1   
1990-06-13        1         0          0         1          1         1   
1990-06-14        1         0          0         1          1         1   

            VOL(3,12)  
date                   
1990-06-08          1  
1990-06-11          1  
1990-06-12          1  
1990-06-13          1  
1990-06-14          1  

Output:

                 PCA0      PCA1      PCA2      PCA3      PCA4      PCA5  \
date                                                                     
1990-06-08 -0.707212  0.834228  0.511333  0.104279 -0.055340 -0.117740   
1990-06-11 -0.685396  1.224009 -0.059560 -0.038864 -0.011676 -0.031021   
1990-06-12 -0.737770  0.445458  1.083377  0.237313 -0.075061  0.012465   
1990-06-13 -0.737770  0.445458  1.083377  0.237313 -0.075061  0.012465   
1990-06-14 -0.737770  0.445458  1.083377  0.237313 -0.075061  0.012465   
1990-06-15 -0.715954  0.835239  0.512485  0.094170 -0.031397  0.099184   
1990-06-18 -0.715954  0.835239  0.512485  0.094170 -0.031397  0.099184   
1990-06-19 -0.702743 -0.024860  0.185254 -0.976475 -0.028151  0.090701     
...              ...       ...       ...       ...       ...       ...    
2015-05-01 -0.636410 -0.440222 -1.139295 -0.229937  0.088941 -0.055738   
2015-05-04 -0.636410 -0.440222 -1.139295 -0.229937  0.088941 -0.055738   

                PCA6      PCA7      PCA8      PCA9     PCA10     PCA11  \
date                                                                     
1990-06-08 -0.050111  0.000652  0.062524  0.066524 -0.683963  0.097497   
1990-06-11 -0.053740  0.013313  0.008949 -0.006157  0.002628 -0.010517   
1990-06-12 -0.039659 -0.029781  0.009185 -0.026395 -0.006305 -0.019026   
1990-07-19 -0.053740  0.013313  0.008949 -0.006157  0.002628 -0.010517   
1990-07-20 -0.078581  0.056345  0.386847  0.056035 -0.044696  0.013128   
...              ...       ...       ...       ...       ...       ...   
2015-05-01  0.066707  0.018254  0.009552  0.002706  0.008036  0.000745   
2015-05-04  0.066707  0.018254  0.009552  0.002706  0.008036  0.000745   

               PCA12     PCA13  
date                            
1990-06-08  0.013466 -0.020638  
...              ...       ...  
2015-05-04  0.001502  0.004461  

The above is the output from the updated code, but it seems to be the wrong output. The "first principal component" is defined as:

this transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to the preceding components.

Is simply grabbing the first column of the PCA output the same as the process defined above?

You can always use PCA().fit_transform(df)[:, 0], which will give you each row's value on the first PC axis. (Note that fit_transform returns a NumPy array, not a DataFrame, so .iloc does not apply.)
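As a minimal sketch (with a random binary matrix standing in for the signal data, since I don't have your spreadsheet), grabbing the scores on the first PC axis looks like this:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Hypothetical stand-in data: a binary matrix like the original signals
rng = np.random.default_rng(0)
df = pd.DataFrame((rng.random((100, 14)) > 0.5).astype(int))

scores = PCA().fit_transform(df)   # ndarray of shape (n_samples, n_features)
first_pc = scores[:, 0]            # each row's projection onto the first PC
print(first_pc.shape)              # (100,)
```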

The PCA object has a member components_ that holds the components after fit() has been called.

From the docs:

components_ : array, shape (n_components, n_features)

Principal axes in feature space, representing the directions of maximum variance in the data. The components are sorted by explained_variance_.

Example:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

np.random.seed(42)

df = pd.DataFrame(np.concatenate([np.random.rand(50, 5), np.random.rand(50, 5) + 1]))

pca = PCA(n_components=2).fit(df)

print(pca.components_)

Output: the two components in feature space

[[-0.43227251 -0.47497776 -0.41079902 -0.47411737 -0.44044691]
 [ 0.41214174 -0.54429826 -0.55429329  0.34990399  0.32280758]]

Explanation:

As stated in the docs, these vectors are sorted by explained_variance_. This means that by taking the first vector, pca.components_[0], you get the vector along which the data have the highest variance (given by pca.explained_variance_[0]).
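This ordering is easy to check directly; a quick sketch with synthetic data of my own (not your data):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# Six features with increasingly large variance
X = rng.normal(size=(200, 6)) * np.arange(1, 7)

pca = PCA().fit(X)
# explained_variance_ comes back in descending order,
# so components_[0] is the direction of maximum variance
print(pca.explained_variance_)
assert np.all(np.diff(pca.explained_variance_) <= 0)
```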


This can be visualized. As you can see in the code above, we want to find the two components with the highest variance (PCA(n_components=2)). By further calling pca.transform(df), we project the data onto these components. This results in a matrix of shape (n_samples, n_components) - which also means we can plot it.
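The shape claim can be verified in isolation; a small sketch with random stand-in data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((100, 5))

pca = PCA(n_components=2).fit(X)
t = pca.transform(X)
print(t.shape)   # (100, 2), i.e. (n_samples, n_components)
```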

We can also transform the vectors given by pca.components_ in order to view the two components in the lower-dimensional space. To make the plot more meaningful, I first normalize the transformed components to length 1, and further scale them by the variance they explain in order to emphasize their importance.

t = pca.transform(df)
ax = plt.figure().gca()
ax.scatter(t[:,0], t[:,1], s=5)

transf_components = pca.transform(pca.components_)

for i, (var, c) in enumerate(zip(pca.explained_variance_, transf_components)):
    # The scaling of the transformed components for the purpose of visualization
    c = var * (c / np.linalg.norm(c))    
    ax.arrow(0, 0, c[0], c[1], head_width=0.06, head_length=0.08, fc='r', ec='r')
    ax.annotate('Comp. {0}'.format(i+1), xy=c+.08)

plt.show()

Which gives:


Special update:

After the discussion in the comments: you might want to take a look at FactorAnalysis (see also):

Note that df is now a matrix with binary values (just like your original data)

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import FactorAnalysis

np.random.seed(42)

n_features = 20

# After 50 samples we "change the behavior"
df = pd.DataFrame(1*np.concatenate([np.random.rand(50, n_features) > .25, 
                                    np.random.rand(50, n_features) > .75]))

# I chose n_components here totally arbitrarily (< n_features) ..

fa = FactorAnalysis(n_components=5).fit(df)
t = fa.transform(df)

ax = plt.figure().gca()
ax.plot(t[:,0])
ax.axvline(50, color='r', linestyle='--', alpha=.5) 

Output: