Q: ValueError: Keras expected conv2d_14_input to have shape (3, 12, 1) but got array with shape (3, 12, 6500)?
I am building a CNN for non-image data in Keras 2.1.0 on Windows 10.
My input features are 3x12 matrices of non-negative numbers, and my output is a binary multi-label vector of length 6.
I ran into this error: expected conv2d_14_input to have shape (3, 12, 1) but got array with shape (3, 12, 6500).
Here is my code:
import tensorflow as tf
from scipy.io import loadmat
import numpy as np
from tensorflow.keras.layers import BatchNormalization
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten
reshape_channel_train = loadmat('reshape_channel_train')
reshape_channel_test = loadmat('reshape_channel_test.mat')
reshape_label_train = loadmat('reshape_label_train')
reshape_label_test = loadmat('reshape_label_test')
X_train = reshape_channel_train['store_train']
X_test = reshape_channel_test['store_test']
X_train = np.expand_dims(X_train,axis = 0)
X_test = np.expand_dims(X_test, axis = 0)
Y_train = reshape_label_train['label_train']
Y_test = reshape_label_test['label_test']
classifier = Sequential()
classifier.add(Conv2D(8, kernel_size=(3,3) , input_shape=(3, 12, 1), padding="same"))
classifier.add(BatchNormalization())
classifier.add(Activation('relu'))
classifier.add(Conv2D(8, kernel_size=(3,3), input_shape=(3, 12, 1), padding="same"))
classifier.add(BatchNormalization())
classifier.add(Activation('relu'))
classifier.add(Flatten())
classifier.add(Dense(8, activation='relu'))
classifier.add(Dense(6, activation='sigmoid'))
classifier.compile(optimizer='nadam', loss='binary_crossentropy', metrics=['accuracy'])
history = classifier.fit(X_train, Y_train, batch_size = 32, epochs=100,
validation_data=(X_test, Y_test), verbose=2)
After some searching, I tried the dimension-expansion trick, but it does not seem to work:
X_train = np.expand_dims(X_train,axis = 0)
X_test = np.expand_dims(X_test, axis = 0)
The X_train variable, which contains 6500 training instances, is loaded from a MATLAB .mat file of size 3x12x6500, where each training instance is a 3x12 matrix.
Before applying the expand_dims trick, the k-th training sample could be accessed as X_train[:, :, k]; X_train[:, :, k].shape would return (3, 12), and X_train.shape would return (3, 12, 6500).
After applying expand_dims, X_train[:, :, k].shape returns (1, 3, 6500).
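A minimal NumPy sketch (with random data standing in for the .mat contents) reproduces the shapes described above and shows why axis=0 does not help:

```python
import numpy as np

# stand-in for the 3x12x6500 array loaded from the .mat file
X = np.random.rand(3, 12, 6500)
print(X[:, :, 0].shape)           # (3, 12) -- one training sample

# expand_dims with axis=0 prepends a batch axis of size 1
X0 = np.expand_dims(X, axis=0)    # (1, 3, 12, 6500)
print(X0[:, :, 0].shape)          # (1, 3, 6500) -- not a 3x12 sample
```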
Please help me fix this. Thanks!
You are handling your data incorrectly. A Conv2D layer expects data in the format (n_sample, height, width, channels), which in your case (for your X_train) becomes (6500, 3, 12, 1). You simply need to reorder your data accordingly:
# create dummy data shaped like your MATLAB data
n_class = 6
n_sample = 6500
X_train = np.random.uniform(0, 1, (3, 12, n_sample))  # (3, 12, n_sample)
Y_train = tf.keras.utils.to_categorical(np.random.randint(0, n_class, n_sample))  # (n_sample, n_class)
# reorder the data for Conv2D: samples first, channel axis last
X_train = X_train.transpose(2, 0, 1)   # (n_sample, 3, 12)
X_train = np.expand_dims(X_train, -1)  # (n_sample, 3, 12, 1)
classifier = Sequential()
classifier.add(Conv2D(8, kernel_size=(3,3) , input_shape=(3, 12, 1), padding="same"))
classifier.add(BatchNormalization())
classifier.add(Activation('relu'))
classifier.add(Conv2D(8, kernel_size=(3,3), padding="same"))
classifier.add(BatchNormalization())
classifier.add(Activation('relu'))
classifier.add(Flatten())
classifier.add(Dense(8, activation='relu'))
classifier.add(Dense(n_class, activation='softmax'))
classifier.compile(optimizer='nadam', loss='categorical_crossentropy', metrics=['accuracy'])
history = classifier.fit(X_train, Y_train, batch_size = 32, epochs=2, verbose=2)
# get predictions
pred = np.argmax(classifier.predict(X_train), 1)
I also used a softmax activation with categorical_crossentropy, which is better suited to a multi-class problem, but you can change that. Remember to apply the same data manipulation to the test data as well.
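For instance, a hypothetical X_test array with the same 3x12xN layout would be reordered the same way:

```python
import numpy as np

n_test = 500                                       # hypothetical test-set size
X_test = np.random.uniform(0, 1, (3, 12, n_test))  # same layout as the .mat data

X_test = X_test.transpose(2, 0, 1)                 # (n_test, 3, 12)
X_test = np.expand_dims(X_test, -1)                # (n_test, 3, 12, 1)
print(X_test.shape)                                # (500, 3, 12, 1)
```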
You need to pass the data_format="channels_last" argument, since your channel dimension comes last.
Try this:
x_train = x_train.transpose(2, 0, 1).reshape((6500, 3, 12, 1))
x_test = x_test.transpose(2, 0, 1).reshape((-1, 3, 12, 1))
and pass data_format="channels_last" in each Conv2D layer: Conv2D(<other args>, data_format="channels_last")
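One caveat worth checking: on a (3, 12, 6500) array, a plain reshape reads the flat buffer in a different order than a transpose, so the transpose used in the earlier answer is necessary; reshape alone does not keep each 3x12 sample intact. A quick comparison makes this visible:

```python
import numpy as np

X = np.random.rand(3, 12, 6500)
sample0 = X[:, :, 0]                        # the first 3x12 training sample

via_reshape = X.reshape((6500, 3, 12, 1))   # reinterprets the flat buffer
via_transpose = np.expand_dims(X.transpose(2, 0, 1), -1)

print(np.allclose(via_transpose[0, :, :, 0], sample0))  # True
print(np.allclose(via_reshape[0, :, :, 0], sample0))    # False: samples scrambled
```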