Denormalization of output from neural network

Good morning. I used MinMax normalization to normalize my dataset, both features and labels. My question is: is it correct to normalize the labels as well? If so, how do I denormalize the output of the neural network (the predictions I obtain on the normalized test set)?

Unfortunately I cannot upload the dataset, but it consists of 18 features and 1 label. It is a regression task, and the features and label are physical quantities.

So the problem is that y_train_pred and y_test_pred are between 0 and 1. How do I predict the "real value"? If you spot any other mistakes, please let me know.

Thank you.

The code I used is below:

    import pandas as pd
    import tensorflow as tf
    from sklearn import preprocessing
    from sklearn.model_selection import train_test_split
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Dropout

    dataset = pd.read_csv('DataSet.csv', decimal=',', delimiter=';')

    label = dataset.iloc[:, -1]
    features = dataset.drop(columns=['Label'])

    # best_features: list of selected feature names (defined elsewhere)
    features = features[best_features]

    X_train1, X_test1, y_train1, y_test1 = train_test_split(
        features, label, test_size=0.25, random_state=1, shuffle=True)

    y_test2 = y_test1.to_frame()
    y_train2 = y_train1.to_frame()

    # Fit each scaler on the training data only, then reuse it to
    # transform the test data; re-fitting a separate scaler on the
    # test set leaks information and uses inconsistent min/max values.
    scaler1 = preprocessing.MinMaxScaler()
    X_train = scaler1.fit_transform(X_train1)
    X_test = scaler1.transform(X_test1)

    scaler3 = preprocessing.MinMaxScaler()
    y_train = scaler3.fit_transform(y_train2)
    y_test = scaler3.transform(y_test2)

    optimizer = tf.keras.optimizers.Adamax(learning_rate=0.001)
    model = Sequential()

    model.add(Dense(80, input_shape=(X_train.shape[1],), activation='relu',
                    kernel_initializer='random_normal'))
    model.add(Dropout(0.15))
    model.add(Dense(120, activation='relu', kernel_initializer='random_normal'))
    model.add(Dropout(0.15))
    model.add(Dense(80, activation='relu', kernel_initializer='random_normal'))

    model.add(Dense(1, activation='linear'))
    model.compile(loss='mse', optimizer=optimizer, metrics=['mse'])

    history = model.fit(X_train, y_train, epochs=300,
                        validation_split=0.1, shuffle=False, batch_size=120)
    history_dict = history.history

    loss_values = history_dict['loss']
    val_loss_values = history_dict['val_loss']

    y_train_pred = model.predict(X_train)
    y_test_pred = model.predict(X_test)

You should denormalize, so that your neural network gives real-world predictions rather than numbers between 0 and 1.

Min-max normalization is defined as:

z = (x - min)/(max - min)

where z is the normalized value, x is the label value, max is the maximum x value, and min is the minimum x value. So, if we have z, min and max, we can solve for x as follows:

x = z(max - min) + min
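As a quick sanity check (with made-up numbers), normalizing with the first formula and then applying the second recovers the original value:

```python
# Round-trip check of the two formulas above, using hypothetical values.
x_min, x_max = 10.0, 50.0   # assumed label range
x = 30.0                    # assumed label value

z = (x - x_min) / (x_max - x_min)       # normalize
x_back = z * (x_max - x_min) + x_min    # denormalize

print(z, x_back)   # 0.5 30.0
```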

So, before normalizing your data, define variables for the maximum and minimum values of the label (assuming it is continuous). Then, after you get your predicted values, you can use the following function:

    y_max_pre_normalize = max(label)
    y_min_pre_normalize = min(label)

    def denormalize(y):
        # y is in [0, 1]; map it back to the label's original range
        final_value = y * (y_max_pre_normalize - y_min_pre_normalize) + y_min_pre_normalize
        return final_value

Apply this function to your y_test/y_pred to get the corresponding real-world values.
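Equivalently, since the labels were scaled with scikit-learn's MinMaxScaler, the fitted scaler can undo the scaling for you via its `inverse_transform` method. A minimal sketch, with a toy label column standing in for the scaler fitted on your training labels (`scaler3` in the question's code):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Toy label column standing in for the real training labels.
y = np.array([[10.0], [30.0], [50.0]])

scaler = MinMaxScaler()
y_scaled = scaler.fit_transform(y)   # values scaled into [0, 1]

# Predictions in [0, 1] map back to the original units.
y_pred_scaled = np.array([[0.5]])
y_pred = scaler.inverse_transform(y_pred_scaled)
print(y_pred)   # [[30.]]
```

This avoids tracking the min/max by hand, since the scaler already stores them.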
