Normalize the Validation Set for a Neural Network in Keras
So, I understand that normalization is important for training a neural network.
I also understand that I have to normalize the validation and test sets using the parameters from the training set (see, for example, this discussion: https://stats.stackexchange.com/questions/77350/perform-feature-normalization-before-or-within-model-validation).
My question is: how do I do this in Keras?
What I'm currently doing is:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping

def Normalize(data):
    mean_data = np.mean(data)
    std_data = np.std(data)
    norm_data = (data-mean_data)/std_data
    return norm_data

input_data, targets = np.loadtxt(fname='data', delimiter=';')
norm_input = Normalize(input_data)

model = Sequential()
model.add(Dense(25, input_dim=20, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

early_stopping = EarlyStopping(monitor='val_acc', patience=50)
model.fit(norm_input, targets, validation_split=0.2, batch_size=15, callbacks=[early_stopping], verbose=1)
But here I first normalize the data with respect to the whole dataset and then split off the validation set, which, according to the discussion above, is wrong.
It would be no big deal to save the mean and standard deviation from the training set (training_mean and training_std), but how do I apply the normalization with training_mean and training_std to the validation set separately?
You can split the data into training and test datasets manually with sklearn.model_selection.train_test_split before fitting the model. Then normalize both the training and test data using the mean and standard deviation of the training data. Finally, call model.fit with the validation_data argument.
Code example
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy data: 20 samples with 10 features each, and binary targets
data = np.random.randint(0, 100, 200).reshape(20, 10)
target = np.random.randint(0, 2, 20)

# Split into training and test sets before any normalization
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.2)

def Normalize(data, mean_data=None, std_data=None):
    # If no statistics are passed in, compute them from the data itself
    if mean_data is None:
        mean_data = np.mean(data)
    if std_data is None:
        std_data = np.std(data)
    norm_data = (data - mean_data) / std_data
    return norm_data, mean_data, std_data

# Normalize the training set, then reuse its statistics for the test set
X_train, mean_data, std_data = Normalize(X_train)
X_test, _, _ = Normalize(X_test, mean_data, std_data)

# model and early_stopping are defined as in the question
model.fit(X_train, y_train, validation_data=(X_test, y_test), batch_size=15, callbacks=[early_stopping], verbose=1)
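As a side note (not part of the original answer, just a sketch assuming scikit-learn is available), sklearn.preprocessing.StandardScaler implements the same pattern and stores the training statistics for you: fit it on the training split only, then reuse the fitted scaler to transform the validation/test split. Note that StandardScaler computes the mean and standard deviation per feature (column), rather than one scalar over the whole array.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical dummy data, mirroring the example above
data = np.random.randint(0, 100, 200).reshape(20, 10).astype(float)
target = np.random.randint(0, 2, 20)

X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.2)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)  # statistics computed from the training split only
X_test = scaler.transform(X_test)        # the same training mean/std applied to the test split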
The following code does exactly what you want:
import numpy as np

def normalize(x_train, x_test):
    # Statistics are computed per feature, on the training set only
    mu = np.mean(x_train, axis=0)
    std = np.std(x_train, axis=0)
    x_train_normalized = (x_train - mu) / std
    x_test_normalized = (x_test - mu) / std
    return x_train_normalized, x_test_normalized
Then you can use it with Keras like this:
from keras.datasets import boston_housing
(x_train, y_train), (x_test, y_test) = boston_housing.load_data()
x_train, x_test = normalize(x_train, x_test)
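If you prefer to bake the normalization into the model itself, here is a minimal sketch (my addition, assuming TensorFlow 2.6 or later, where tf.keras.layers.Normalization is available): adapt the layer on the training data only, and any data passed to the model afterwards, including the validation data, is standardized with those training statistics.
import tensorflow as tf
from tensorflow.keras.datasets import boston_housing

(x_train, y_train), (x_test, y_test) = boston_housing.load_data()

# The layer learns per-feature mean/variance from the data passed to adapt()
norm_layer = tf.keras.layers.Normalization(axis=-1)
norm_layer.adapt(x_train)  # adapt on the training set only, never on validation/test data

model = tf.keras.Sequential([
    norm_layer,
    tf.keras.layers.Dense(25, activation='relu'),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10, verbose=1)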
Wilmar's answer is not correct.