
fit or fit_transform if I used StandardScaler on the entire dataset?

I have a dataframe called features, and I scale the data as follows:


import pandas as pd
from sklearn.preprocessing import StandardScaler

col_names = features.columns

scaler = StandardScaler()
scaler.fit(features)
standardized_features = scaler.transform(features)
standardized_features.shape
df = pd.DataFrame(data=standardized_features, columns=col_names)

Then I split the training and test sets as follows:

df_idx = df[df.Date == '1996-12-01'].index[0]
df_targets = df['Label'].values
df_features = df.drop(['Regime', 'Date', 'Label'], axis=1)

df_training_features = df.iloc[:df_idx, :].drop(['Regime', 'Date', 'Label'], axis=1)
df_validation_features = df.iloc[df_idx:, :].drop(['Regime', 'Date', 'Label'], axis=1)

df_training_targets = df['Label'].values[:df_idx]
df_validation_targets = df['Label'].values[df_idx:]

Finally, I tested different models:

from sklearn import metrics, model_selection, preprocessing
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import GradientBoostingClassifier, AdaBoostClassifier, RandomForestClassifier
import xgboost as xgb
import matplotlib.pyplot as plt

scoring = 'f1'
kfold = model_selection.TimeSeriesSplit(n_splits=5)
models = []

models.append(('LR', LogisticRegression(C=1e10, class_weight='balanced')))
models.append(('KNN', KNeighborsClassifier()))
models.append(('GB', GradientBoostingClassifier(random_state=42)))
models.append(('ABC', AdaBoostClassifier(random_state=42)))
models.append(('RF', RandomForestClassifier(class_weight='balanced')))
models.append(('XGB', xgb.XGBClassifier(objective='binary:logistic', booster='gbtree')))

results = []
names = []
lb = preprocessing.LabelBinarizer()

for name, model in models:
    # ravel() flattens the (n, 1) output of LabelBinarizer into the 1-D array cross_val_score expects
    cv_results = model_selection.cross_val_score(estimator=model, X=df_training_features,
                                                 y=lb.fit_transform(df_training_targets).ravel(),
                                                 cv=kfold, scoring=scoring)

    model.fit(df_training_features, df_training_targets)  # train the model

    # use predicted probabilities (not hard predict() labels) for both the ROC curve and the AUC
    probas = model.predict_proba(df_training_features)[:, 1]
    fpr, tpr, thresholds = metrics.roc_curve(df_training_targets, probas)
    auc = metrics.roc_auc_score(df_training_targets, probas)
    plt.plot(fpr, tpr, label='%s ROC (area = %0.2f)' % (name, auc))
    results.append(cv_results)
    names.append(name)
    print("%s: %f (%f)" % (name, cv_results.mean(), cv_results.std()))

My question is:

You must fit your StandardScaler on the training data only. Then, with that fitted scaler, you transform both the training data and the validation data.
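A minimal sketch of that pattern, using toy arrays in place of your dataframe:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[0.0], [1.0], [2.0]])    # training portion
X_val = np.array([[0.0], [1.0], [100.0]])    # validation portion

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit (and transform) on training data only
X_val_scaled = scaler.transform(X_val)          # transform only: reuse the training mean/std
```

Because the validation data is scaled with the training mean and standard deviation, an extreme validation value stays extreme instead of being silently squashed into the training range.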

This is done so that the same standardization is applied to all input data. Imagine that in your training data one attribute has the values [0, 1, 2]. If you apply a simple min-max normalization (similar in spirit to standardization), you get something like [0, 0.5, 1].

现在假设在您的验证中您还有 3 个样本,其中一个类别具有下一个值 [0, 1, 100]。如果你适合和变形,你会[0, 0.01, 1]。这是一场灾难,因为训练模型时认为您的 1 是 0.5 缩放的。这就是您使用训练数据信息转换验证数据的原因。