Tensorflow Keras Estimator fails on regression task while underlying model works

I use a convolutional neural network for a regression task (i.e. the final layer of the network is a single neuron with linear activation), and it works fine (well enough). When I try to use the exact same model wrapped with tf.keras.estimator.model_to_estimator, the estimator appears to train, but the training loss stops decreasing very quickly. The final evaluation loss (after 4 epochs in each case) is about 0.4 (mean absolute error) for the bare Keras model and about 2.5 (mean absolute error) for the estimator.

To demonstrate the problem, I apply my model, both bare and wrapped as an estimator, to the MNIST dataset. (I am aware that MNIST is a classification task and that treating it as a regression problem doesn't make much sense; the example should still illustrate my point.)

I was surprised to find that when a classification neural network is wrapped into an estimator in the same way, the bare Keras model and its estimator-wrapped version perform equally well (the classification case is not included in the example code below; a reconstructed sketch follows). The discrepancy only occurs for the regression task. I suppose I am either missing something very basic, or this behaviour is due to a bug in TensorFlow.
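
As a minimal sketch (reconstructed here, not the exact code from those experiments), the classification variant keeps the same convolutional base as get_model in the listing below and changes only the output layer and the compile arguments:

import tensorflow as tf


def get_classification_model(IM_WIDTH=28, num_color_channels=1):
    """Same convolutional base as get_model below, but with a 10-way softmax head for MNIST."""
    inputs = tf.keras.Input(shape=(IM_WIDTH, IM_WIDTH, num_color_channels))
    x = tf.keras.layers.Conv2D(32, 3, activation='relu')(inputs)
    x = tf.keras.layers.MaxPooling2D(3)(x)
    x = tf.keras.layers.Conv2D(64, 3, activation='relu')(x)
    x = tf.keras.layers.MaxPooling2D(3)(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(64, activation='relu')(x)
    output = tf.keras.layers.Dense(10, activation='softmax')(x)  # 10 classes instead of 1 linear unit
    model = tf.keras.Model(inputs=[inputs], outputs=[output])
    # MNIST labels are integers, so sparse categorical cross-entropy applies directly
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model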

To ensure that there are as few differences as possible between the models' inputs, I package MNIST as a tf.data.Dataset and return it from an input function, which is passed to the estimator. For the bare Keras model, I use the same input function to obtain the tf.data.Dataset and feed it directly to the fit function (a small sanity check after the listing makes this parity explicit).

# python 3.6. Tested with tensorflow-gpu-1.14 and tensorflow-cpu-2.0
import tensorflow as tf
import numpy as np


def get_model(IM_WIDTH=28, num_color_channels=1):
    """Create a very simple convolutional neural network using a tf.keras Functional Model."""
    inputs = tf.keras.Input(shape=(IM_WIDTH, IM_WIDTH, num_color_channels))  # renamed to avoid shadowing the built-in input()
    x = tf.keras.layers.Conv2D(32, 3, activation='relu')(inputs)
    x = tf.keras.layers.MaxPooling2D(3)(x)
    x = tf.keras.layers.Conv2D(64, 3, activation='relu')(x)
    x = tf.keras.layers.MaxPooling2D(3)(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(64, activation='relu')(x)
    output = tf.keras.layers.Dense(1, activation='linear')(x)
    model = tf.keras.Model(inputs=[inputs], outputs=[output])
    model.compile(optimizer='adam', loss="mae",
                  metrics=['mae'])
    model.summary()
    return model


def input_fun(train=True):
    """Load MNIST and return the training or test set as a tf.data.Dataset; Valid input function for tf.estimator"""
    (train_images, train_labels), (eval_images, eval_labels) = tf.keras.datasets.mnist.load_data()
    train_images = train_images.reshape((60_000, 28, 28, 1)).astype(np.float32) / 255.
    eval_images = eval_images.reshape((10_000, 28, 28, 1)).astype(np.float32) / 255.
    # train_labels = train_labels.astype(np.float32)  # these two lines don't affect behaviour.
    # eval_labels = eval_labels.astype(np.float32)
    # For a neural network with one neuron in the final layer, it doesn't seem to matter if target data is float or int.

    if train:
        dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
        dataset = dataset.shuffle(buffer_size=100).repeat(None).batch(32).prefetch(1)
    else:
        dataset = tf.data.Dataset.from_tensor_slices((eval_images, eval_labels))
        dataset = dataset.batch(32).prefetch(1)  # note: prefetching does not affect behaviour

    return dataset


model = get_model()
train_input_fn = lambda: input_fun(train=True)
eval_input_fn = lambda: input_fun(train=False)

NUM_EPOCHS, STEPS_PER_EPOCH = 4, 1875  # 1875 = number_of_train_images(=60,000) / batch_size(=32)
USE_ESTIMATOR = False  # change this to compare model/estimator. Estimator performs much worse for no apparent reason
if USE_ESTIMATOR:
    estimator = tf.keras.estimator.model_to_estimator(
        keras_model=model, model_dir="model_directory",
        config=tf.estimator.RunConfig(save_checkpoints_steps=200, save_summary_steps=200))

    train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=STEPS_PER_EPOCH * NUM_EPOCHS)
    eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn, throttle_secs=0)

    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
    print("Training complete. Evaluating Estimator:")
    print(estimator.evaluate(eval_input_fn))
    # final train loss with estimator: ~2.5 (mean abs. error).
else:
    dataset = train_input_fn()
    model.fit(dataset, steps_per_epoch=STEPS_PER_EPOCH, epochs=NUM_EPOCHS)
    print("Training complete. Evaluating Keras model:")
    print(model.evaluate(eval_input_fn()))
    # final train loss with Keras model: ~0.4 (mean abs. error).
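
As a quick sanity check (not part of the run above; it requires eager execution, i.e. TF 2.x), one batch can be pulled from the shared input function to confirm that both code paths receive identically shaped and typed data:

images, labels = next(iter(input_fun(train=False)))
print(images.shape, images.dtype)  # (32, 28, 28, 1) float32
print(labels.shape, labels.dtype)  # (32,) uint8 -- integer labels, see the comment in input_fun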

I have filed a bug report: https://github.com/tensorflow/tensorflow/issues/35833#issue-549185982

To avoid the discussion being scattered across several sites, I am marking this topic as resolved.

The answer below is the same as the one provided on GitHub.

I agree with you that there is a significant difference between the results of the model and the estimator when using TF 1.15. I guess the TF 1.15 branch may not be updated anymore; it will only be updated if there are security-related issues.

I ran your code with tf-nightly. I don't see any significant difference between the outputs of the model and the estimator.
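
For reference, the build actually in use can be confirmed like this (a nightly build reports a dev version string):

import tensorflow as tf
print(tf.__version__)  # e.g. '2.2.0-dev20200318' for a tf-nightly build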

Here is the output of the model (USE_ESTIMATOR = False):

Training complete. Evaluating Keras model:
313/313 [==============================] - 2s 7ms/step - loss: 0.4018 - mae: 0.4021
[0.4018059968948364, 0.4020615816116333]

Here is the output of the estimator (USE_ESTIMATOR = True):

Training complete. Evaluating Estimator:
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2020-03-18T23:15:15Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from model_directory/model.ckpt-7500
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Inference Time : 2.14818s
INFO:tensorflow:Finished evaluation at 2020-03-18-23:15:17
INFO:tensorflow:Saving dict for global step 7500: global_step = 7500, loss = 0.39566746, mae = 0.39566746
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 7500: model_directory/model.ckpt-7500
{'loss': 0.39566746, 'mae': 0.39566746, 'global_step': 7500}