ValueError using very simple network with Input and Conv2D layers
Training a model on random data with the following code, just to try out a few things, raises a "simple" error that I cannot make sense of:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def create_dataset(size=(224, 224)):
    array = np.random.randint(-128, 127, (10, size[0], size[1], 3))
    array2 = np.argmax(array, axis=-1)
    array = np.split(array, size_dataset, axis=0)
    array2 = np.split(array2, size_dataset, axis=0)
    y = np.random.randint(-1, 1, size_dataset)
    return array, array2

def create_model():
    encoder_input = keras.Input(shape=(224, 224, 3), name="start")
    encoder_output = keras.layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu')(encoder_input)
    model = tf.keras.Model(inputs=encoder_input, outputs=encoder_output)
    return model

if __name__ == '__main__':
    data, target = create_dataset()
    model = create_model()
    model.compile(optimizer='adam', loss=tf.keras.losses.BinaryCrossentropy(), metrics='accuracy')
    model.fit(x=data, y=target, batch_size=16, epochs=2)
The error is as follows:
ValueError: Dimensions must be equal, but are 224 and 222 for '{{node binary_crossentropy/mul}} = Mul[T=DT_FLOAT](Cast, binary_crossentropy/Log)' with input shapes: [?,224,224], [?,222,222,64].
But what is actually wrong here? It looks as if the Input layer and Conv2D do not work together correctly. Or maybe it is just getting late :-)
Thanks in advance
You need to pass padding='same' to Conv2D. By default it uses padding='valid', which means the output spatial size shrinks by the kernel size minus one (here 3 - 1 = 2, so 224 becomes 222). There is a detailed explanation here: https://towardsdatascience.com/a-comprehensive-introduction-to-different-types-of-convolutions-in-deep-learning-669281e58215
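As a minimal sketch of that suggestion (only the model-building part of the question, with an assumed side-by-side comparison of the two padding modes; the printed shapes are just illustrative), the change amounts to one extra argument on the Conv2D layer:

import tensorflow as tf
from tensorflow import keras

encoder_input = keras.Input(shape=(224, 224, 3), name="start")

# Default padding='valid': each spatial dimension shrinks to 224 - (3 - 1) = 222
valid_output = keras.layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu')(encoder_input)
print(valid_output.shape)  # (None, 222, 222, 64)

# padding='same' keeps the 224x224 spatial size, as suggested above
same_output = keras.layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu', padding='same')(encoder_input)
print(same_output.shape)   # (None, 224, 224, 64)

model = tf.keras.Model(inputs=encoder_input, outputs=same_output)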