"loss : nan" 正在为表格数据训练 "convolution 1D" 神经网络
"loss : nan" in training of "convolution 1D" neural Network for tabular data
I need to implement a CNN for multi-class classification on a tabular dataset.
My data have X_train.shape = (1534185, 81, 1) and y_train.shape = (1534185, 11).
Here is a sample from my dataset:
[screenshot of a sample from the dataset]
I tried to normalize the data, but the values are too large to be summed and stored in float64.
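One overflow-safe way to standardize values this large is scikit-learn's StandardScaler, which computes the per-feature mean and variance with a numerically stable routine rather than a naive sum. This is only a sketch, assuming the features can be viewed as a 2D (samples, 81) matrix before being reshaped for Conv1D:

import numpy as np
from sklearn.preprocessing import StandardScaler

# View the (samples, 81, 1) tensor as a 2D (samples, 81) feature matrix
X_2d = X_train.reshape(X_train.shape[0], -1)

# Standardize each feature column to zero mean and unit variance, then
# restore the (samples, length, channels) shape that Conv1D expects
scaler = StandardScaler()
X_train = scaler.fit_transform(X_2d).astype(np.float32).reshape(-1, 81, 1)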
The CNN model I implemented is as follows:
from keras.models import Sequential
from keras.layers import (Convolution1D, MaxPooling1D, Dropout,
                          BatchNormalization, Flatten, Dense)

batchSize = X_train.shape[0]  # number of samples (not used below)
length = X_train.shape[1]     # 81 features per sample
channel = X_train.shape[2]    # 1 input channel
n_outputs = y_train.shape[1]  # 11 classes

# Initialising the CNN
model = Sequential()

# 1. Multiple convolution and max pooling blocks
model.add(Convolution1D(filters=64, kernel_size=3, activation='relu',
                        input_shape=(length, channel)))
model.add(MaxPooling1D(strides=4))
model.add(Dropout(0.1))
model.add(BatchNormalization())

model.add(Convolution1D(filters=32, kernel_size=3, activation='relu'))
model.add(MaxPooling1D(strides=4))
model.add(Dropout(0.1))
model.add(BatchNormalization())

model.add(Convolution1D(filters=16, kernel_size=3, activation='relu'))
model.add(MaxPooling1D(strides=4))
model.add(Dropout(0.1))
model.add(BatchNormalization())

# 2. Flattening
model.add(Dropout(0.2))
model.add(Flatten())

# 3. Full connection
model.add(Dense(30, activation='relu'))
model.add(Dense(n_outputs, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
If I try to change the kernel size, I get the following error:
ValueError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling1d_103/MaxPool' (op: 'MaxPool') with input shapes: [?,1,1,16].
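That error is the sequence length running out. With kernel_size=3 and the default 'valid' padding, the lengths through the conv/pool stack go 81 → 79 → 20 → 18 → 5 → 3 → 1, so the network only just fits; with kernel_size=4 they go 81 → 78 → 20 → 17 → 4 → 1, and the last MaxPooling1D then has to pool a length-1 tensor with its default pool_size=2, which is exactly this "Negative dimension size" complaint. A possible workaround, sketched below (padding='same' is my suggestion, not part of the original post), is to let only the pooling layers shrink the length:

# With padding='same' each convolution preserves the length, so the stack
# shrinks only at the pooling steps (81 -> 20 -> 5 -> 1) and larger kernels
# no longer exhaust the sequence.
model.add(Convolution1D(filters=64, kernel_size=5, activation='relu',
                        padding='same', input_shape=(length, channel)))
model.add(MaxPooling1D(pool_size=2, strides=4))
# ... repeat the same pattern for the 32- and 16-filter blocks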
When I try to train my model with the code below, the loss is nan and my accuracy does not improve:
history = model.fit(
    X_train,
    y_train,
    batch_size=1000,
    epochs=2,
    validation_data=(X_test, y_test),
)
Output (the loss is nan):
Train on 1534185 samples, validate on 657509 samples
Epoch 1/2
956000/1534185 [=================>............] - ETA: 1:44 - loss: nan - accuracy: 0.0101
I need your help.
Try checking for inf values and replacing them with nan, then retry. If X_train is a pandas DataFrame:

import numpy as np

# Map +inf/-inf to NaN, then fill all NaN entries with 0
X_train.replace([np.inf, -np.inf], np.nan, inplace=True)
X_train = X_train.fillna(0)
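Note that X_train in the question is a 3D NumPy array, and .replace/.fillna only exist on pandas objects, so the snippet above applies before the data is reshaped. A NumPy equivalent of the same cleanup (np.nan_to_num is my substitution, not part of the original answer):

import numpy as np

# One-pass NumPy equivalent of replace(inf -> nan) + fillna(0):
# maps NaN, +inf and -inf entries all to 0.
X_train = np.nan_to_num(X_train, nan=0.0, posinf=0.0, neginf=0.0)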