SKLearn Perceptron behaving differently for sparse and dense
The Perceptron gives different results when fed a matrix in dense format than when fed the same matrix in sparse format. I thought it might be a shuffling issue, so I ran cross-validation using cross_validate from sklearn.model_selection, but with no luck.
A similar issue was discussed here, but a justification was given in that case. Is there a reason for the behavior here?
FYI, the parameters I am using for the Perceptron are:
penalty='l2', alpha=0.0001, fit_intercept=True, max_iter=10000, tol=1e-8, shuffle=True, verbose=0, eta0=1.0, n_jobs=1, random_state=0, class_weight=None, warm_start=False, n_iter=None
I am converting the dense matrix to a sparse one with sparse.csr_matrix, as in the accepted answer here.
There is a reason for this.
Perceptron shares most of its code with SGDClassifier:
Perceptron and SGDClassifier share the same underlying implementation. In fact, Perceptron() is equivalent to SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant", penalty=None).
and SGDClassifier is better documented:
Note: The sparse implementation produces slightly different results than the dense implementation due to a shrunk learning rate for the intercept.
We get more detail later on:
In the case of sparse feature vectors, the intercept is updated with a smaller learning rate (multiplied by 0.01) to account for the fact that it is updated more frequently.
Note that this implementation detail comes from Leon Bottou:
The learning rate for the bias is multiplied by 0.01 because this frequently improves the condition number.
For completeness, this appears in the scikit-learn code:
SPARSE_INTERCEPT_DECAY = 0.01
# For sparse data intercept updates are scaled by this decay factor to avoid
# intercept oscillation.
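To make the effect concrete, here is a minimal sketch of a single perceptron update step. This is my own simplification, not scikit-learn's actual code: it only illustrates that on the sparse path the intercept update is scaled by the decay factor while the weight update is unchanged.

```python
import numpy as np

# Simplified sketch (not scikit-learn's actual implementation) of one
# perceptron update, showing the sparse-path intercept decay.
SPARSE_INTERCEPT_DECAY = 0.01  # value copied from the scikit-learn source

def perceptron_step(w, b, x, y, eta=1.0, sparse=False):
    """One update on a sample with label y in {-1, +1}."""
    if y * (np.dot(w, x) + b) <= 0:  # sample is misclassified
        w = w + eta * y * x          # weight update: identical for both paths
        decay = SPARSE_INTERCEPT_DECAY if sparse else 1.0
        b = b + eta * y * decay      # intercept update: scaled when sparse
    return w, b

w0 = np.zeros(3)
x = np.array([1.0, 2.0, 3.0])
w_dense, b_dense = perceptron_step(w0, 0.0, x, y=1, sparse=False)
w_sparse, b_sparse = perceptron_step(w0, 0.0, x, y=1, sparse=True)
# The weights match; only the intercepts diverge, by the 0.01 factor.
```

Since the weight updates are identical, every subsequent difference between the two fits originates from the intercept and then propagates into the decision function.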
Bonus example:
import numpy as np
import scipy.sparse as sp
from sklearn.linear_model import Perceptron

np.random.seed(42)
n_samples, n_features = 1000, 10
X_dense = np.random.randn(n_samples, n_features)
X_csr = sp.csr_matrix(X_dense)
y = np.random.randint(2, size=n_samples)

for X in [X_dense, X_csr]:
    model = Perceptron(penalty='l2', alpha=0.0001, fit_intercept=True,
                       max_iter=10000, tol=1e-8, shuffle=True, verbose=0,
                       eta0=1.0, n_jobs=1, random_state=0, class_weight=None,
                       warm_start=False)  # n_iter was removed in scikit-learn 0.21
    model.fit(X, y)
    print(model.coef_)
You can check that the coefficients are different. Changing fit_intercept to False makes the coefficients equal, though the fit will likely be worse.
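To confirm that last point, here is a quick sketch (assuming a recent scikit-learn; every parameter except fit_intercept and random_state is left at its default) checking that the coefficients match once the intercept, and hence the decay path, is disabled:

```python
import numpy as np
import scipy.sparse as sp
from sklearn.linear_model import Perceptron

rng = np.random.RandomState(42)
X_dense = rng.randn(200, 10)
X_csr = sp.csr_matrix(X_dense)
y = rng.randint(2, size=200)

coefs = []
for X in [X_dense, X_csr]:
    # With fit_intercept=False the sparse intercept decay never applies,
    # so dense and sparse inputs follow the same sequence of updates.
    model = Perceptron(fit_intercept=False, random_state=0)
    model.fit(X, y)
    coefs.append(model.coef_.copy())

print(np.allclose(coefs[0], coefs[1]))
```

Whether the poorer fit matters depends on your data; if your features are not centered, dropping the intercept can cost real accuracy.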