Training Variational Auto Encoder in Keras raises "InvalidArgumentError: Incompatible shapes" error

I've been trying to get this VAE working all night, but I keep running into the same problem over and over. I'm not sure what the issue is. I've tried removing the callbacks, removing the validation data, changing the loss function, and changing the sampling method. The error (although it shows up below at the EarlyStopping line) always gets reported against whatever the last argument passed to fit happens to be. I can't figure out how to make it work.

Below is a reproducible example, followed by the error I keep getting. Note that changing the batch size does change the error, but the mismatched number shrinks along with the batch size.

import pandas as pd
from sklearn.datasets import make_blobs 
from sklearn.preprocessing import MinMaxScaler

import keras.backend as K
import tensorflow as tf

from keras.layers import Input, Dense, Lambda, Layer, Add, Multiply
from keras.models import Model, Sequential
from keras.callbacks import EarlyStopping, LearningRateScheduler
from keras.objectives import binary_crossentropy


x, labels = make_blobs(n_samples=150000, n_features=110,  centers=16, cluster_std=4.0)
scaler = MinMaxScaler()
x = scaler.fit_transform(x)
x = pd.DataFrame(x)

train = x.sample(n = 100000)
train_indexs = train.index.values
test = x[~x.index.isin(train_indexs)]
print(train.shape, test.shape)

min_dim = 2
batch_size = 1024

def sampling(args):
    mu, log_sigma = args
    eps = K.random_normal(shape=(batch_size, min_dim), mean = 0.0, stddev = 1.0)
    return mu + K.exp(0.5 * log_sigma) * eps

#Encoder
inputs = Input(shape=(x.shape[1],))
down1 = Dense(64, activation='relu')(inputs)
mu = Dense(min_dim, activation='linear')(down1)
log_sigma = Dense(min_dim, activation='linear')(down1)

#Sampling
sample_set = Lambda(sampling, output_shape=(min_dim,))([mu, log_sigma])

#decoder
up1 = Dense(64, activation='relu')(sample_set)
output = Dense(x.shape[1], activation='sigmoid')(up1)

vae = Model(inputs, output)
encoder = Model(inputs, mu)

def vae_loss(y_true, y_pred):
    recon  = binary_crossentropy(y_true, y_pred)
    kl = - 0.5 * K.mean(1 + log_sigma - K.square(mu) - K.exp(log_sigma), axis=-1)
    return recon + kl

vae.compile(optimizer='adam', loss=vae_loss)
vae.fit(train, train, shuffle = True, epochs = 1000, 
        batch_size = batch_size, validation_data = (test, test), 
        callbacks = [EarlyStopping(patience=50)])

The error:


  File "<ipython-input-2-7aa4be21434d>", line 62, in <module>
    callbacks = [EarlyStopping(patience=50)])

  File "C:\Users\se01040434\Anaconda3\lib\site-packages\keras\engine\training.py", line 1239, in fit
    validation_freq=validation_freq)

  File "C:\Users\se01040434\Anaconda3\lib\site-packages\keras\engine\training_arrays.py", line 196, in fit_loop
    outs = fit_function(ins_batch)

  File "C:\Users\se01040434\Anaconda3\lib\site-packages\tensorflow\python\keras\backend.py", line 3792, in __call__
    outputs = self._graph_fn(*converted_inputs)

  File "C:\Users\se01040434\Anaconda3\lib\site-packages\tensorflow\python\eager\function.py", line 1605, in __call__
    return self._call_impl(args, kwargs)

  File "C:\Users\se01040434\Anaconda3\lib\site-packages\tensorflow\python\eager\function.py", line 1645, in _call_impl
    return self._call_flat(args, self.captured_inputs, cancellation_manager)

  File "C:\Users\se01040434\Anaconda3\lib\site-packages\tensorflow\python\eager\function.py", line 1746, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))

  File "C:\Users\se01040434\Anaconda3\lib\site-packages\tensorflow\python\eager\function.py", line 598, in call
    ctx=ctx)

  File "C:\Users\se01040434\Anaconda3\lib\site-packages\tensorflow\python\eager\execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)

InvalidArgumentError:  Incompatible shapes: [672] vs. [1024]
     [[node gradients/loss/dense_5_loss/vae_loss/weighted_loss/mul_grad/Mul_1 (defined at C:\Users\se01040434\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:3009) ]] [Op:__inference_keras_scratch_graph_1515]

Function call stack:
keras_scratch_graph

You are creating a random tensor with batch_size samples, where batch_size is a fixed value preset in your code. However, the model is not guaranteed to receive exactly batch_size input samples (for example, the last batch of training/test data may contain fewer samples; here the final training batch has 100000 % 1024 = 672 samples, which is exactly the mismatch reported: [672] vs. [1024]). Instead, whenever your model implementation depends on the dynamic value of the batch size, you should fetch it at runtime with the keras.backend.shape function:

def sampling(args):
    # ...
    eps = K.random_normal(shape=(K.shape(mu)[0], min_dim))
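
For completeness, here is a sketch of the full sampling function with that change applied; it reuses mu, log_sigma, and min_dim exactly as defined in the question's code:

def sampling(args):
    mu, log_sigma = args
    # take the batch dimension from mu at runtime so the noise tensor matches
    # whatever batch Keras actually feeds in, including the smaller final batch
    eps = K.random_normal(shape=(K.shape(mu)[0], min_dim), mean=0.0, stddev=1.0)
    return mu + K.exp(0.5 * log_sigma) * eps

With only this change, the fit call from the question should run without the "Incompatible shapes" error, since eps now adapts to the actual number of samples in each batch.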