SVM - How can I vectorize a kernelized Gram matrix?
I implemented a support vector machine in Python using the cvxopt QP solver. I need to compute a Gram matrix of two sets of vectors, applying a kernel function to each pair of elements. I implemented it correctly with for loops, but that strategy is computationally expensive. I would like to vectorize the code.
Example:
Here is what I wrote:
K = np.array([kernel(X[i], X[j], poly=poly_kernel)
              for j in range(m)
              for i in range(m)]).reshape((m, m))
How can I vectorize the code above, without for loops, so that it produces the same result faster?
The kernel function computes a Gaussian kernel.
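As an aside: if the intent is the textbook Gaussian (RBF) kernel exp(-||a - b||^2 / sigma^2), its Gram matrix vectorizes with the standard norm expansion ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x·y. A minimal sketch (the function name `rbf_gram` is my own, not from the code below):

```python
import numpy as np

def rbf_gram(X, sigma=0.5):
    # ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y, computed for all pairs at once
    sq = np.einsum('ij,ij->i', X, X)               # squared row norms, shape (m,)
    D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # pairwise squared distances
    return np.exp(-np.maximum(D2, 0) / sigma ** 2) # clamp tiny negatives from rounding

X = np.random.default_rng(0).normal(size=(5, 2))
K = rbf_gram(X)   # (5, 5), symmetric, ones on the diagonal
```

Note that the snippet below computes something slightly different, exp(-||(a - b)^2|| / sigma^2), i.e. the norm of the elementwise *squared* differences; the accepted vectorization further down reproduces that exact quantity.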
Here is a quick explanation of an SVM with the kernel trick. The second page explains the problem.
Here is my full code for context.
Edit: Here is a quick snippet that runs, in non-vectorized form, exactly what I need to vectorize:
from sklearn.datasets import make_gaussian_quantiles
import numpy as np

X, y = make_gaussian_quantiles(mean=None, cov=1.0, n_samples=100, n_features=2,
                               n_classes=2, shuffle=True, random_state=5)
m = X.shape[0]

def kernel(a, b, d=20, poly=True, sigma=0.5):
    if poly:
        return np.inner(a, b) ** d
    else:
        return np.exp(-np.linalg.norm((a - b) ** 2) / sigma ** 2)

# Need to vectorize these loops
K = np.array([kernel(X[i], X[j], poly=False)
              for j in range(m)
              for i in range(m)]).reshape((m, m))
Thanks!
Here is a vectorized version. The non-polynomial branch comes in two variants: a straightforward one, and a memory-saving one for the case where the number of features is large:
from sklearn.datasets import make_gaussian_quantiles
import numpy as np

X, y = make_gaussian_quantiles(mean=None, cov=1.0, n_samples=100, n_features=2,
                               n_classes=2, shuffle=True, random_state=5)
Y, _ = make_gaussian_quantiles(mean=None, cov=1.0, n_samples=200, n_features=2,
                               n_classes=2, shuffle=True, random_state=2)
m = X.shape[0]
n = Y.shape[0]

def kernel(a, b, d=20, poly=True, sigma=0.5):
    if poly:
        return np.inner(a, b) ** d
    else:
        return np.exp(-np.linalg.norm((a - b) ** 2) / sigma ** 2)

# Loop-based reference Gram matrix to validate against
POLY = False
LOW_MEM = 0   # feature-count threshold for the direct variant (0 disables it)
K = np.array([kernel(X[i], Y[j], poly=POLY)
              for i in range(m)
              for j in range(n)]).reshape((m, n))

def kernel_v(X, Y=None, d=20, poly=True, sigma=0.5):
    Z = X if Y is None else Y
    if poly:
        # polynomial kernel: all inner products at once, then raise to d
        return np.einsum('ik,jk', X, Z) ** d
    elif X.shape[1] < LOW_MEM:
        # direct variant: builds an (m, n, n_features) intermediate
        return np.exp(-np.sqrt(((X[:, None, :] - Z[None, :, :]) ** 4).sum(axis=-1)) / sigma ** 2)
    elif Y is None or Y is X:
        # memory-saving variant, symmetric case: expand sum((x - y)**4)
        X2 = X * X
        H = np.einsum('ij,ij->i', X2, X2) + np.einsum('ik,jk', X2, 3 * X2) - np.einsum('ik,jk', X2 * X, 4 * X)
        return np.exp(-np.sqrt(np.maximum(0, H + H.T)) / sigma ** 2)
    else:
        # memory-saving variant, general case
        X2, Y2 = X * X, Y * Y
        E = np.einsum('ik,jk', X2, 6 * Y2) - np.einsum('ik,jk', X2 * X, 4 * Y) - np.einsum('ik,jk', X, 4 * Y2 * Y)
        E += np.add.outer(np.einsum('ij,ij->i', X2, X2), np.einsum('ij,ij->i', Y2, Y2))
        return np.exp(-np.sqrt(np.maximum(0, E)) / sigma ** 2)

print(np.allclose(K, kernel_v(X, Y, poly=POLY)))
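The memory-saving branch rests on the binomial expansion sum_k (x_k - y_k)^4 = sum_k (x_k^4 + 6 x_k^2 y_k^2 - 4 x_k^3 y_k - 4 x_k y_k^3 + y_k^4), which replaces the large (m, n, n_features) broadcast intermediate with a few matrix products. A small standalone check of that identity (random data, my own variable names):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
Y = rng.normal(size=(4, 3))

# Direct broadcast of sum((x - y)**4): the quantity the low-memory branch expands
direct = ((X[:, None, :] - Y[None, :, :]) ** 4).sum(axis=-1)

# Expanded form: cross terms via matrix products, pure terms via an outer sum
X2, Y2 = X * X, Y * Y
E = (np.einsum('ik,jk', X2, 6 * Y2)
     - np.einsum('ik,jk', X2 * X, 4 * Y)
     - np.einsum('ik,jk', X, 4 * Y2 * Y))
E += np.add.outer(np.einsum('ij,ij->i', X2, X2), np.einsum('ij,ij->i', Y2, Y2))

print(np.allclose(direct, E))
```

The `np.maximum(0, ...)` guard in the full version exists because the expanded form can go slightly negative from floating-point cancellation before the square root.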