Why does using SVM in Python in these two different ways give very different accuracy scores?
Using Python and SVM, I applied these two pieces of code.
First, I applied this code to the dataset:
from sklearn.metrics import confusion_matrix
from sklearn.metrics import cohen_kappa_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.svm import LinearSVC
model = LinearSVC(class_weight='balanced',C=0.01, penalty='l2').fit(X_, y)
y_preds = model.predict(X_)
report = classification_report(y, y_preds)
print(report)
print(cohen_kappa_score(y, y_preds), '\n', accuracy_score(y, y_preds), '\n', confusion_matrix(y, y_preds))
This gives me an accuracy of 0.9485714285714286.
Second, I applied this code to exactly the same dataset:
import pandas as pd
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
models = [
LinearSVC(class_weight='balanced',C=0.01, penalty='l2', loss='squared_hinge'),
]
CV = 5
entries = []
for model in models:
    model_name = model.__class__.__name__
    accuracies = cross_val_score(model, X_, y, scoring='accuracy', cv=CV)
    for fold_idx, accuracy in enumerate(accuracies):
        entries.append((model_name, fold_idx, accuracy))
cv_df = pd.DataFrame(entries, columns=['model_name', 'fold_idx', 'accuracy'])
cv_df.groupby('model_name').accuracy.mean()
The accuracy is different: 0.797090.
Where is my mistake?
Which code, if any, is correct?
And how can the second code compute precision and recall after cross-validation?
In the first code you make a single prediction and accuracy calculation, and you do it on the very same data the model was fitted on, so the score is optimistically biased. In the second code you make 5 predictions and accuracy calculations, each on a held-out chunk of the dataset, and then take the mean accuracy score. In other words, the second code gives a more reliable accuracy score.
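You can see the gap directly. A minimal sketch on a synthetic dataset (hypothetical sizes, not your data) that compares the resubstitution score with the cross-validated score:

```python
# Illustrates why scoring on the training data is optimistic
# compared to cross-validation. Synthetic data, not the asker's dataset.
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

model = LinearSVC(class_weight='balanced', C=0.01, penalty='l2')
model.fit(X, y)

# Score on the same data used for fitting (what the first code does).
train_acc = accuracy_score(y, model.predict(X))

# Score on held-out folds (what the second code does).
cv_acc = cross_val_score(model, X, y, cv=5, scoring='accuracy').mean()

print(train_acc, cv_acc)  # train_acc is typically the higher of the two
```

The training-set score is almost always at least as high as the cross-validated one; only the latter estimates performance on unseen data.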
As for your other question: if you want to cross-validate multiple metrics, you can use cross_validate() instead of cross_val_score():
scores = cross_validate(model, X, y, scoring=('precision', 'recall'))
print(scores['test_precision'])  # the result keys are prefixed with 'test_'
print(scores['test_recall'])
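A self-contained sketch of that call, again on synthetic data (note that the default 'precision' and 'recall' scorers assume binary labels; for multiclass targets use e.g. 'precision_macro'):

```python
# cross_validate() with multiple metrics: returns one score per fold
# per metric, under keys prefixed with 'test_'.
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=300, random_state=0)
model = LinearSVC(class_weight='balanced', C=0.01)

scores = cross_validate(model, X, y, cv=5,
                        scoring=('accuracy', 'precision', 'recall'))
print(scores['test_precision'])       # one value per fold
print(scores['test_recall'])
print(scores['test_accuracy'].mean())
```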