Why is scaling the iris dataset making the MAE much worse?

This code predicts sepal length from the iris dataset, and it gets an MAE of about 0.94:

from sklearn import metrics
from sklearn.neural_network import *
from sklearn.model_selection import *
from sklearn.preprocessing import *
from sklearn import datasets

iris = datasets.load_iris()
X = iris.data[:, 1:]
y = iris.data[:, 0]  # sepal length

X_train, X_test, y_train, y_test = train_test_split(X, y)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

model = MLPRegressor()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print(metrics.mean_absolute_error(y_test, y_pred))

However, when I remove the scaling lines

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

the MAE drops to 0.33. Am I scaling incorrectly, and why does scaling make the error so much higher?

Interesting question. So let's test (putting random states in place for reproducible results) a non-neural-network method (i.e., not sklearn.neural_network.MLPRegressor) with and without scaling:

from sklearn import metrics
from sklearn.neural_network import *
from sklearn.model_selection import *
from sklearn.preprocessing import *
from sklearn import datasets
import numpy as np
from sklearn.linear_model import LinearRegression

iris = datasets.load_iris()
X = iris.data[:, 1:]
y = iris.data[:, 0]  # sepal length


### put random state for reproducibility
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1989)


lr = LinearRegression()
lr.fit(X_train, y_train)
pred = lr.predict(X_test)

# Evaluating Model's Performance
print('Mean Absolute Error NO SCALE:', metrics.mean_absolute_error(y_test, pred))
print('Mean Squared Error NO SCALE:', metrics.mean_squared_error(y_test, pred))
print('Root Mean Squared Error NO SCALE:', np.sqrt(metrics.mean_squared_error(y_test, pred)))
print('~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~')

### put random state for reproducibility
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1989)


scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

lr = LinearRegression()
lr.fit(X_train, y_train)
pred = lr.predict(X_test)

# Evaluating Model's Performance
print('Mean Absolute Error YES SCALE:', metrics.mean_absolute_error(y_test, pred))
print('Mean Squared Error YES SCALE:', metrics.mean_squared_error(y_test, pred))
print('Root Mean Squared Error YES SCALE:', np.sqrt(metrics.mean_squared_error(y_test, pred)))

This gives:

Mean Absolute Error NO SCALE: 0.2789437424421388
Mean Squared Error NO SCALE: 0.1191038134603132
Root Mean Squared Error NO SCALE: 0.3451142035041635
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Mean Absolute Error YES SCALE: 0.27894374244213865
Mean Squared Error YES SCALE: 0.11910381346031311
Root Mean Squared Error YES SCALE: 0.3451142035041634

OK. It looks like you're doing everything right with the scaling (linear regression gives identical results to within floating-point noise, because ordinary least squares is equivariant under affine rescaling of the features: the coefficients simply rescale to compensate). But there are many nuances to working with neural networks, and on top of that, what works for one architecture may not work for another, so wherever possible, experimenting will show the best approach.


For example, because of the way the error propagates back to the weights during training, a large spread in the targets can produce large gradients, causing drastic weight updates that make training unstable or prevent it from converging at all.
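
To make that concrete, here is a quick sketch (my addition, not part of the experiments below): MLPRegressor records the training loss per iteration in its loss_curve_ attribute, so we can compare the very first loss value when fitting raw targets versus standardized targets:

from sklearn import datasets
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

iris = datasets.load_iris()
X = iris.data[:, 1:]
y = iris.data[:, 0]  # sepal length
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=100)

# max_iter is kept tiny on purpose, so expect a ConvergenceWarning here
raw = MLPRegressor(random_state=100, max_iter=50).fit(X_train, y_train)

y_std = StandardScaler().fit_transform(y_train.reshape(-1, 1)).ravel()
std = MLPRegressor(random_state=100, max_iter=50).fit(X_train, y_std)

# the initial loss (and hence the initial gradients) is much larger on raw targets
print('first loss, raw targets:', raw.loss_curve_[0])
print('first loss, standardized targets:', std.loss_curve_[0])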

Overall, neural networks TEND to perform best when the inputs are on a common scale, and TEND to train faster as well (the max_iter parameter comes into play here, see below). We'll check this next...

Most importantly! The type of transformation can also matter: standardization vs. normalization, and the variants within each. For example, in RNNs scaling to the range -1 to 1 TENDS to perform better than 0 to 1.
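
For reference, here is a minimal sketch of those alternatives on toy data (the numbers are just illustrative); StandardScaler, MinMaxScaler, and its feature_range parameter are all standard scikit-learn APIs:

import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

print(StandardScaler().fit_transform(X))                     # standardization: zero mean, unit variance
print(MinMaxScaler().fit_transform(X))                       # normalization to [0, 1]
print(MinMaxScaler(feature_range=(-1, 1)).fit_transform(X))  # normalization to [-1, 1]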


Running your code also produces the following warning:

_multilayer_perceptron.py:692: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (100) reached and the optimization hasn't converged yet.
  warnings.warn(

So your algorithm doesn't converge, and that's why your MAE is so high. It optimizes in steps, and 100 steps weren't enough, so the number of iterations has to be increased.


Next, let's run the MLPRegressor experiments:

### DO IMPORTS
from sklearn import metrics
from sklearn.neural_network import *
from sklearn.model_selection import *
from sklearn.preprocessing import *
from sklearn import datasets
import numpy as np

### GET DATASET
iris = datasets.load_iris()
X = iris.data[:, 1:]
y = iris.data[:, 0]  # sepal length

#########################################################################################
# SCALE INPUTS = NO
# SCALE TARGETS = NO
#########################################################################################

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 100)


# put a random state here as well: NNs initialize their weights randomly
# max iterations for each run were found manually, but you could also grid-search them, since max_iter is basically a hyperparameter (see the sketch at the end of this answer)

model = MLPRegressor(random_state=100, max_iter=450)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('----------------------------------------------------------------------')
print("SCALE INPUTS =  NO & SCALE TARGETS = NO")
print('----------------------------------------------------------------------')
print('Mean Absolute Error', metrics.mean_absolute_error(y_test,  y_pred))
print('Mean Squared Error', metrics.mean_squared_error(y_test,  y_pred))
print('Root Mean Squared Error', np.sqrt(metrics.mean_squared_error(y_test,  y_pred)))
----------------------------------------------------------------------
SCALE INPUTS =  NO & SCALE TARGETS = NO
----------------------------------------------------------------------
Mean Absolute Error 0.25815648734192126
Mean Squared Error 0.10196864342576142
Root Mean Squared Error 0.319325294058835

#########################################################################################
# SCALE INPUTS = YES
# SCALE TARGETS = NO
#########################################################################################

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 100)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

model = MLPRegressor(random_state=100, max_iter=900)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('----------------------------------------------------------------------')
print("SCALE INPUTS = YES & SCALE TARGETS = NO")
print('----------------------------------------------------------------------')
print('Mean Absolute Error', metrics.mean_absolute_error(y_test,  y_pred))
print('Mean Squared Error', metrics.mean_squared_error(y_test,  y_pred))
print('Root Mean Squared Error', np.sqrt(metrics.mean_squared_error(y_test,  y_pred)))
----------------------------------------------------------------------
SCALE INPUTS = YES & SCALE TARGETS = NO
----------------------------------------------------------------------
Mean Absolute Error 0.2699225498998305
Mean Squared Error 0.1221046275841224
Root Mean Squared Error 0.3494347257845482

#########################################################################################
# SCALE INPUTS = NO
# SCALE TARGETS = YES
#########################################################################################

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 100)

scaler_y = StandardScaler()
y_train = scaler_y.fit_transform(y_train.reshape(-1, 1))

### NO NEED TO SCALE y_test since the network doesn't see it
# y_test = scaler_y.transform(y_test.reshape(-1, 1))

model = MLPRegressor(random_state=100, max_iter=500)
model.fit(X_train, y_train.ravel())
y_pred = model.predict(X_test)

### rescale predictions back to y_test scale
y_pred_rescaled_back = scaler_y.inverse_transform(y_pred.reshape(-1, 1))

print('----------------------------------------------------------------------')
print("SCALE INPUTS = NO & SCALE TARGETS = YES")
print('----------------------------------------------------------------------')
print('Mean Absolute Error', metrics.mean_absolute_error(y_test,  y_pred_rescaled_back))
print('Mean Squared Error', metrics.mean_squared_error(y_test,  y_pred_rescaled_back))
print('Root Mean Squared Error', np.sqrt(metrics.mean_squared_error(y_test,  y_pred_rescaled_back)))
----------------------------------------------------------------------
SCALE INPUTS = NO & SCALE TARGETS = YES
----------------------------------------------------------------------
Mean Absolute Error 0.23602139631237182
Mean Squared Error 0.08762790909543768
Root Mean Squared Error 0.29602011603172795
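
As an aside (my addition, not part of the original experiments): scikit-learn's TransformedTargetRegressor automates this scale-the-targets / inverse-transform-the-predictions dance. A minimal sketch, reusing the imports and data from the script above:

from sklearn.compose import TransformedTargetRegressor

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=100)

model = TransformedTargetRegressor(
    regressor=MLPRegressor(random_state=100, max_iter=500),
    transformer=StandardScaler(),
)
model.fit(X_train, y_train)     # targets are standardized internally
y_pred = model.predict(X_test)  # predictions come back on the original scale
print('Mean Absolute Error', metrics.mean_absolute_error(y_test, y_pred))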

#########################################################################################
# SCALE INPUTS = YES
# SCALE TARGETS = YES
#########################################################################################

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 100)

scaler_x = StandardScaler()
scaler_y = StandardScaler()

X_train = scaler_x.fit_transform(X_train)
X_test = scaler_x.transform(X_test)

y_train = scaler_y.fit_transform(y_train.reshape(-1, 1))
### NO NEED TO SCALE y_test since the network doesn't see it
# y_test = scaler_y.transform(y_test.reshape(-1, 1))

model = MLPRegressor(random_state=100, max_iter=250)
model.fit(X_train, y_train.ravel())
y_pred = model.predict(X_test)

### rescale predictions back to y_test scale
y_pred_rescaled_back = scaler_y.inverse_transform(y_pred.reshape(-1, 1))

print('----------------------------------------------------------------------')
print("SCALE INPUTS = YES & SCALE TARGETS = YES")
print('----------------------------------------------------------------------')
print('Mean Absolute Error', metrics.mean_absolute_error(y_test,  y_pred_rescaled_back))
print('Mean Squared Error', metrics.mean_squared_error(y_test,  y_pred_rescaled_back))
print('Root Mean Squared Error', np.sqrt(metrics.mean_squared_error(y_test,  y_pred_rescaled_back)))
----------------------------------------------------------------------
SCALE INPUTS = YES & SCALE TARGETS = YES
----------------------------------------------------------------------
Mean Absolute Error 0.2423901612747137
Mean Squared Error 0.09758236232324796
Root Mean Squared Error 0.3123817573470768

To sum up:

It looks like for this particular architecture, dataset, and scaling method, you converge fastest with scaled inputs and scaled targets, but in the process you may lose some information useful for prediction (with this particular transformation), so your MAE ends up slightly higher than when you leave the inputs unscaled and only scale the targets:

MAE, unscaled inputs, unscaled targets: 0.258
MAE, scaled inputs,   unscaled targets: 0.270
MAE, unscaled inputs, scaled targets:   0.236
MAE, scaled inputs,   scaled targets:   0.242


However, even here I think that changing, for example, the learning-rate hyperparameter (inside MLPRegressor) could help it converge faster, e.g. when the values are unscaled, but that would also need experimenting with... as you can see... there really are a lot of nuances.
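
That kind of experimentation can be automated. Here is a hedged sketch (the grid values are illustrative, not tuned) that searches learning_rate_init and max_iter with GridSearchCV, reusing the imports and data from the experiment script above:

from sklearn.model_selection import GridSearchCV

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=100)

param_grid = {
    'learning_rate_init': [0.0005, 0.001, 0.005, 0.01],  # illustrative values
    'max_iter': [200, 500, 1000],
}
search = GridSearchCV(
    MLPRegressor(random_state=100),
    param_grid,
    scoring='neg_mean_absolute_error',
    cv=5,
)
search.fit(X_train, y_train)
print(search.best_params_)
print(-search.best_score_)  # cross-validated MAE of the best combination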


P.S. There are some good discussions on this topic.