Why doesn't the PCA image resemble the original image at all?

I'm trying to implement PCA without any dimensionality-reduction library. I took the code from the O'Reilly Computer Vision book and ran it on the sample lenna picture:

    from PIL import Image
    import numpy as np
    from skimage import color, io
    import matplotlib.pyplot as plt

    def pca(X):
        num_data, dim = X.shape

        # center the data
        mean_X = X.mean(axis=0)
        X = X - mean_X

        if dim > num_data:
            # PCA compact trick
            M = np.dot(X, X.T)  # covariance matrix
            e, U = np.linalg.eigh(M)  # calculate eigenvalues and eigenvectors
            tmp = np.dot(X.T, U).T
            V = tmp[::-1]  # reverse since the last eigenvectors are the ones we want
            S = np.sqrt(e)[::-1]  # reverse since the last eigenvalues are in increasing order
            for i in range(V.shape[1]):
                V[:, i] /= S
        else:
            # normal PCA, SVD method
            U, S, V = np.linalg.svd(X)
            V = V[:num_data]  # only makes sense to return the first num_data
        return V, S, mean_X

    img = color.rgb2gray(io.imread(r'D:\lenna.png'))
    x, y, z = pca(img)
    plt.imshow(x)
    plt.show()

But the plotted PCA image looks nothing like the original. As far as I know, PCA reduces the image's dimensionality, but the result should still somewhat resemble the original image, just with less detail. What's wrong with the code?

Well, there is nothing wrong with your code per se, but if I understand what you're actually trying to do, you're just not displaying the right thing!

Here is what I would write for your problem:

    def pca(X, number_of_pcs):
        num_data, dim = X.shape

        mean_X = X.mean(axis=0)
        X = X - mean_X

        if dim > num_data:
            # PCA compact trick
            M = np.dot(X, X.T)  # covariance matrix
            e, U = np.linalg.eigh(M)  # calculate eigenvalues and eigenvectors
            tmp = np.dot(X.T, U).T
            V = tmp[::-1]  # reverse since the last eigenvectors are the ones we want
            S = np.sqrt(e)[::-1]  # reverse since the last eigenvalues are in increasing order
            for i in range(V.shape[1]):
                V[:, i] /= S

            return V, S, mean_X

        else:
            # normal PCA, SVD method
            U, S, V = np.linalg.svd(X, full_matrices=False)

            # reconstruct the image using U, S and V
            # otherwise you're just outputting the eigenvectors of X.X^T
            V = V.T
            S = np.diag(S)
            X_hat = np.dot(U[:, :number_of_pcs], np.dot(S[:number_of_pcs, :number_of_pcs], V[:, :number_of_pcs].T))

            return X_hat, S, mean_X

The change here is that we reconstruct the image from a given number of eigenvectors (determined by number_of_pcs), rather than returning the eigenvectors themselves.
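To see what this truncated reconstruction does, here is a small sketch on a random matrix (a stand-in for the image array; the shapes and seed are arbitrary) showing that the rank-k approximation built from np.linalg.svd gets closer to X as the number of components grows, and reproduces X exactly at full rank:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 30))  # toy stand-in for the image

def rank_k(X, k):
    # same reconstruction as in pca(): keep the first k singular triplets
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

errors = [np.linalg.norm(X - rank_k(X, k)) for k in (1, 5, 20)]
print(errors[0] > errors[1] > errors[2])  # True: more components -> smaller error
print(np.allclose(rank_k(X, 20), X))     # True: full rank reproduces X exactly
```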

Something to keep in mind: in np.linalg.svd, the columns of U are the eigenvectors of X.X^T.
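This is easy to check numerically: if X = U·S·V^T, then each column u_i of U satisfies (X·X^T)·u_i = s_i²·u_i. A quick sketch on a random matrix (shapes chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 9))
U, S, Vt = np.linalg.svd(X, full_matrices=False)

M = X @ X.T  # the matrix whose eigenvectors the columns of U should be
# each column u_i satisfies M @ u_i = s_i^2 * u_i
ok = all(np.allclose(M @ U[:, i], (S[i] ** 2) * U[:, i]) for i in range(len(S)))
print(ok)  # True
```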

Doing so, we get the following results (shown here using 1 and 10 principal components):


    X_hat, S, mean_X = pca(img, 1)
    plt.imshow(X_hat)

    X_hat, S, mean_X = pca(img, 10)
    plt.imshow(X_hat)

PS: note that the picture is not displayed in grayscale because of matplotlib.pyplot's default colormap, but that is a very minor issue.
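If you do want grayscale output, a one-line fix is to pass an explicit colormap to imshow (sketched here with a random stand-in array, since the reconstructed image lives elsewhere, and a headless backend so it runs without a display):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, only needed for scripts without a display
import matplotlib.pyplot as plt

X_hat = np.random.rand(8, 8)  # stand-in for the reconstructed image
im = plt.imshow(X_hat, cmap='gray')
print(im.get_cmap().name)  # gray
```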