How to predict when the number of features does not match the number of features available in the test set?
I'm using pandas get_dummies to convert categorical variables into dummy/indicator variables, which introduces new features into the dataset. We then fit/train a model on this dataset.
Since X_train and X_test come from the same get_dummies output, their dimensions stay consistent, so predicting on the test data X_test works fine.
Now suppose we have test data (with unknown outcomes) in a separate csv file. When we transform that test data with get_dummies, the resulting dataset may have a different number of features than the one the model was trained on (for example, a category value that appears only in the training csv produces a dummy column that does not exist in the new csv). When we later use our model on that dataset, it fails because the number of features in the test set does not match what the model expects.
Any idea how we can handle this?
Code:
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
# Load the dataset
in_file = 'train.csv'
full_data = pd.read_csv(in_file)
outcomes = full_data['Survived']
features_raw = full_data.drop('Survived', axis = 1)
features = pd.get_dummies(features_raw)
features = features.fillna(0.0)
X_train, X_test, y_train, y_test = train_test_split(features, outcomes,
test_size=0.2, random_state=42)
model = DecisionTreeClassifier(max_depth=50, min_samples_leaf=6, min_samples_split=2)
model.fit(X_train,y_train)
y_train_pred = model.predict(X_train)
#print (X_train.shape)
y_test_pred = model.predict(X_test)
from sklearn.metrics import accuracy_score
train_accuracy = accuracy_score(y_train, y_train_pred)
test_accuracy = accuracy_score(y_test, y_test_pred)
print('The training accuracy is', train_accuracy)
print('The test accuracy is', test_accuracy)
# Doing it again to test another set of data
test_data = 'test.csv'
test_data1 = pd.read_csv(test_data)
test_data2 = pd.get_dummies(test_data1)
test_data3 = test_data2.fillna(0.0)
print(test_data2.shape)
print (model.predict(test_data3))
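To make the mismatch concrete, here is a minimal sketch (using a hypothetical Embarked column with toy values, not the full Titanic data) of how get_dummies can produce different column sets for two csv files:

import pandas as pd

# Hypothetical toy data: the value 'Q' only appears in the training frame
train_raw = pd.DataFrame({'Embarked': ['S', 'C', 'Q']})
new_raw = pd.DataFrame({'Embarked': ['S', 'C']})

print(pd.get_dummies(train_raw).columns.tolist())
# ['Embarked_C', 'Embarked_Q', 'Embarked_S']
print(pd.get_dummies(new_raw).columns.tolist())
# ['Embarked_C', 'Embarked_S']  -> fewer columns, so model.predict() would fail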
It seems a similar question has been asked before, but the most efficient/easiest approach is to follow what Thibault Clement described here:
# Get the columns that are in the training set but missing from the test set
missing_cols = set(X_train.columns) - set(X_test.columns)
# Add each missing column to the test set with a default value of 0
for c in missing_cols:
    X_test[c] = 0
# Ensure the columns in the test set are in the same order as in the training set
X_test = X_test[X_train.columns]
It is also worth noting that your model can only use the features it was trained on, so if X_test has extra columns rather than missing ones compared to X_train, those columns must be dropped before predicting.
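Applied to the question's code, a minimal sketch (assuming X_train, model and test_data3 from above are still in scope) could use DataFrame.reindex, which handles both cases at once:

# Align the new test data to the training columns: missing columns are added
# and filled with 0, extra columns are dropped, and the order matches X_train
test_data3 = test_data3.reindex(columns=X_train.columns, fill_value=0)
print(model.predict(test_data3))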