Conv2D is incompatible with the layer in a GAN

I am developing a GAN on the MNIST dataset. I have built the generator and the discriminator, but when I combine them I get this error: Input 0 of layer "conv2d" is incompatible with the layer: expected axis -1 of input shape to have value 1, but received input with shape (None, 57, 57, 1024). Does anyone know why this happens? Do I need to add something else?

Preprocessing:

(x_train, _), (x_test, _) = mnist.load_data()

x_train = x_train.reshape(60000, 28, 28, 1)
x_test = x_test.reshape(10000, 28, 28, 1)
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
img_rows, img_cols = 28, 28
channels = 1
img_shape = (img_rows, img_cols, channels)

Generator:

def generator():
    model = Sequential()
    model.add(Conv2DTranspose(32, (3, 3), strides=(2, 2), activation='relu', use_bias=False,
              input_shape=img_shape))
    model.add(BatchNormalization(momentum=0.3))
    model.add(Conv2DTranspose(64, (3, 3), strides=(2, 2), activation='relu', padding='same',
              use_bias=False))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Conv2DTranspose(64, (3, 3), strides=(2, 2), activation='relu', padding='same',
              use_bias=False))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(BatchNormalization(momentum=0.3))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Dense(512, activation=LeakyReLU(alpha=0.2)))
    model.add(BatchNormalization(momentum=0.7))
    model.add(Dense(1024, activation='tanh'))

    model.summary()
    model.compile(loss=keras.losses.binary_crossentropy, optimizer=Adam(learning_rate=0.02))
    return model

generator = generator()

Discriminator:

def discriminator():
    model = Sequential()
    model.add(Conv2D(32, (5,5), strides=(2, 2), activation='relu', use_bias=False, 
    input_shape=img_shape))
    model.add(BatchNormalization(momentum=0.3))
    model.add(Conv2D(64,(5,5),strides=(2,2), activation='relu', padding='same', 
    use_bias=False))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Conv2D(64,(5,5),strides=(2,2), activation='relu', padding='same', 
    use_bias=False))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(BatchNormalization(momentum=0.3))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Dense(512, activation=LeakyReLU(alpha=0.2)))
    model.add(BatchNormalization(momentum=0.7))
    model.add(Dense(1024, activation='tanh'))

    model.summary()
    model.compile(loss=keras.losses.binary_crossentropy, optimizer=Adam(learning_rate=0.02)) 

    return model

discriminator = discriminator()

Combining the two models (where I get the error):

def GAN(generator, discriminator):
    model = Sequential()
    model.add(generator)
    discriminator.trainable = False
    model.add(discriminator)

    model.summary()
    model.compile()

    return model

gan = GAN(generator, discriminator)

Your generator needs to produce images, so its output shape must match the shape of the images, (28, 28, 1). The error occurs because the generator currently outputs shape (None, 57, 57, 1024), which the discriminator's first Conv2D, built for (28, 28, 1) inputs, cannot accept. The activation must also be compatible with the value range of the images: you scaled your images to [0, 1], not [-1, +1], so you should not use 'tanh'; pick an activation compatible with the images.
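As for where the 57 in the error message comes from: with padding='valid' (the Keras default) a Conv2DTranspose grows a 28-pixel side to (28 - 1) * 2 + 3 = 57. A quick sketch of the shape arithmetic, in plain Python:

```python
def conv2d_transpose_size(size, kernel, stride, padding="valid"):
    """Output side length of a Keras Conv2DTranspose layer."""
    if padding == "same":
        return size * stride
    # 'valid' padding (the Keras default)
    return (size - 1) * stride + kernel

# Your first generator layer: kernel (3, 3), strides (2, 2),
# applied to a 28x28 MNIST image:
print(conv2d_transpose_size(28, 3, 2))  # 57 -- the spatial size in the error
```

With padding='same', the same layer would produce 28 * 2 = 56 instead, which is why DCGAN-style generators usually start from a small spatial size (e.g. 7x7) and use 'same' padding to land exactly on 28x28.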

Last generator layer:

Dense(img_shape[-1], ...)
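Here is a minimal sketch of a generator that ends with that layer and produces the right output shape. The layout is an assumption (a common DCGAN-style design, with a made-up latent_dim = 100 noise input, which your code does not have): it upsamples 7 → 14 → 28 with padding='same' and finishes with a sigmoid, since your images are scaled to [0, 1]:

```python
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100  # assumed size of the random noise vector (not in the question)

def build_generator():
    model = keras.Sequential([
        keras.Input(shape=(latent_dim,)),
        layers.Dense(7 * 7 * 128),
        layers.LeakyReLU(0.2),
        layers.Reshape((7, 7, 128)),
        # 7x7 -> 14x14
        layers.Conv2DTranspose(64, (3, 3), strides=(2, 2), padding="same", use_bias=False),
        layers.BatchNormalization(momentum=0.3),
        layers.LeakyReLU(0.2),
        # 14x14 -> 28x28
        layers.Conv2DTranspose(32, (3, 3), strides=(2, 2), padding="same", use_bias=False),
        layers.BatchNormalization(momentum=0.3),
        layers.LeakyReLU(0.2),
        # one output channel in [0, 1] to match the preprocessed images
        layers.Dense(1, activation="sigmoid"),
    ])
    return model

generator = build_generator()
print(generator.output_shape)  # (None, 28, 28, 1)
```

Note there is no MaxPooling2D here: pooling undoes the upsampling that Conv2DTranspose performs, which is part of why your current generator's shapes drift away from (28, 28, 1).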

Your discriminator needs to decide whether an image is real or fake, so its output must be a single value between 0 and 1.

Last discriminator layer:

Dense(1, activation="sigmoid")
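A sketch of a matching discriminator head (again an assumed layout, not your exact architecture): note the Flatten before the final Dense(1). Without it, Dense acts on the last axis of a 4D tensor and the output is a per-pixel map rather than one scalar per image:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_discriminator():
    model = keras.Sequential([
        keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, (5, 5), strides=(2, 2), padding="same"),
        layers.LeakyReLU(0.2),
        layers.Conv2D(64, (5, 5), strides=(2, 2), padding="same"),
        layers.LeakyReLU(0.2),
        layers.Flatten(),  # collapse to a vector so Dense(1) yields one value
        layers.Dense(1, activation="sigmoid"),  # real/fake probability
    ])
    model.compile(loss="binary_crossentropy",
                  optimizer=keras.optimizers.Adam(learning_rate=2e-4))
    return model

discriminator = build_discriminator()
print(discriminator.output_shape)  # (None, 1)
```

Also note that in your GAN function, model.compile() is called with no arguments; the combined model needs a loss and optimizer too, e.g. model.compile(loss="binary_crossentropy", optimizer=keras.optimizers.Adam(learning_rate=2e-4)).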