How to use tf.Dataset in Keras model.fit without specifying targets?

I want to build an AutoEncoder model with the Keras functional API. I also want to use tf.data.Dataset as the input pipeline for my model. However, there is a limitation: according to the docs, I can only pass a dataset to keras.model.fit as a tuple (inputs, targets):

Input data. It could be: A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

So the question is: can I pass a tf.data.Dataset without duplicating the input, i.e. without resorting to (inputs, inputs) or (inputs, None)? And if I can't, does duplicating the input double my model's GPU memory usage?

You can use map() to return your input twice:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Conv2DTranspose, Reshape
from functools import partial

(xtrain, _), (xtest, _) = tf.keras.datasets.mnist.load_data()

ds = tf.data.Dataset.from_tensor_slices(
    tf.expand_dims(tf.concat([xtrain, xtest], axis=0), axis=-1))

# normalize and duplicate each example so the dataset yields (inputs, targets) with targets == inputs
ds = ds.take(int(1e4)).batch(4).map(lambda x: (x/255, x/255))

custom_convolution = partial(Conv2D, kernel_size=(3, 3),
                             strides=(1, 1),
                             activation='relu',
                             padding='same')
custom_pooling = partial(MaxPool2D, pool_size=(2, 2))

conv_encoder = Sequential([
    custom_convolution(filters=16, input_shape=(28, 28, 1)),
    custom_pooling(),
    custom_convolution(filters=32),
    custom_pooling(),
    custom_convolution(filters=64),
    custom_pooling()
    ])

# conv_encoder(next(iter(ds))[0].numpy().astype(float)).shape
custom_transpose = partial(Conv2DTranspose,
                           padding='same',
                           kernel_size=(3, 3),
                           activation='relu',
                           strides=(2, 2))

conv_decoder = Sequential([
    custom_transpose(filters=32, input_shape=(3, 3, 64), padding='valid'),
    custom_transpose(filters=16),
    custom_transpose(filters=1, activation='sigmoid'),
    Reshape(target_shape=[28, 28, 1])
    ])

conv_autoencoder = Sequential([
    conv_encoder,
    conv_decoder
    ])

conv_autoencoder.compile(loss='binary_crossentropy', optimizer='adam')

history = conv_autoencoder.fit(ds)
2436/2500 [============================>.] - ETA: 0s - loss: 0.1282
2446/2500 [============================>.] - ETA: 0s - loss: 0.1280
2456/2500 [============================>.] - ETA: 0s - loss: 0.1279
2466/2500 [============================>.] - ETA: 0s - loss: 0.1278
2476/2500 [============================>.] - ETA: 0s - loss: 0.1277
2487/2500 [============================>.] - ETA: 0s - loss: 0.1275
2497/2500 [============================>.] - ETA: 0s - loss: 0.1274
2500/2500 [==============================] - 14s 6ms/step - loss: 0.1273

"Does duplicating the input double my model's GPU memory?" Generally no: the dataset pipeline runs on the CPU, not on the GPU.

For your AutoEncoder model, if you want to use a dataset that contains only examples without labels, you can use a custom training loop:

def loss(model, x):
    y_ = model(x, training=True)             # use x as input
    return loss_object(y_true=x, y_pred=y_)  # use x as label (y_true)

with tf.GradientTape() as tape:
    loss_value = loss(model, inputs)
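
Expanded into a runnable form, such a loop might look like the sketch below. The names loss_object, optimizer, and ds_inputs are assumptions for illustration; conv_autoencoder and ds are reused from the first answer:

# Sketch of a full custom training loop (assumed names, reusing conv_autoencoder and ds from above)
loss_object = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.Adam()

ds_inputs = ds.map(lambda x, y: x)  # keep only the inputs, no targets

for epoch in range(2):
    for x in ds_inputs:
        with tf.GradientTape() as tape:
            y_ = conv_autoencoder(x, training=True)        # use x as input
            loss_value = loss_object(y_true=x, y_pred=y_)  # use x as label
        grads = tape.gradient(loss_value, conv_autoencoder.trainable_variables)
        optimizer.apply_gradients(zip(grads, conv_autoencoder.trainable_variables))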

If you need to use the fit() method, you can subclass keras.Model and override the train_step() method (link). (I have not verified this code):

from tensorflow import keras

class CustomModel(keras.Model):
    def train_step(self, data):
        x = data
        y = data  # reuse the same data as the label

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}
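
Usage would then presumably look something like the following (also untested; the functional wrapping and the ds_inputs name are assumptions, reusing conv_encoder, conv_decoder, and ds from the first answer):

# Hypothetical usage: wrap the encoder/decoder in CustomModel and fit on inputs only
inputs = keras.Input(shape=(28, 28, 1))
outputs = conv_decoder(conv_encoder(inputs))
model = CustomModel(inputs, outputs)

model.compile(loss='binary_crossentropy', optimizer='adam')

ds_inputs = ds.map(lambda x, y: x)  # drop the duplicated targets
model.fit(ds_inputs, epochs=1)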

In TensorFlow 2.4, I have a dataset that returns a one-element tuple, i.e. (inputs,), and training works fine. The only caveat, of course, is that you cannot pass a loss or metrics to model.compile; you have to use the add_loss or add_metric API somewhere in the model.
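
A minimal sketch of that pattern, assuming the conv_encoder, conv_decoder, and ds defined in the first answer (the functional add_loss wiring here is an illustration, not something I have verified against 2.4):

# Sketch: dataset of one-element tuples (inputs,) plus add_loss, so compile() needs no loss
inputs = tf.keras.Input(shape=(28, 28, 1))
reconstruction = conv_decoder(conv_encoder(inputs))
autoencoder = tf.keras.Model(inputs, reconstruction)

# define the reconstruction loss inside the model instead of in compile()
# (add_metric could be used the same way to track extra quantities)
reconstruction_loss = tf.reduce_mean(
    tf.keras.losses.binary_crossentropy(inputs, reconstruction))
autoencoder.add_loss(reconstruction_loss)

autoencoder.compile(optimizer='adam')    # no loss argument needed

ds_inputs = ds.map(lambda x, y: (x,))    # one-element tuples: (inputs,)
autoencoder.fit(ds_inputs, epochs=1)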