Tensorflow Hub Module's trainable variables are not updated during training

My question is very similar to the one raised here: https://github.com/tensorflow/hub/issues/269. That issue is still unanswered, so I will ask it here. Steps to reproduce:

tensorflow 1.14.0, tensorflow-hub 0.5.0, Python 3.7.4, Windows 10

Here is a sample notebook that reproduces the issue: https://colab.research.google.com/drive/1PKUyoQRP3othu6cu7v7N7yn8K2pjkuKP

  1. Load a tensorflow_hub Inception V3 module as trainable:

    import tensorflow as tf
    import tensorflow_hub as hub

    module_spec = hub.load_module_spec('https://tfhub.dev/google/imagenet/inception_v3/feature_vector/3')
    height, width = hub.get_expected_image_size(module_spec)
    with tf.Graph().as_default() as graph:
        resized_input_tensor = tf.compat.v1.placeholder(tf.float32, [None, height, width, 3])
        module = hub.Module(module_spec, trainable=True, tags={"train"})
        bottleneck_tensor = module(
            inputs=dict(images=resized_input_tensor, batch_norm_momentum=0.997),
            signature="image_feature_vector_with_bn_hparams")
  2. Save all the Trainable/Model/Global variables created at this point into three separate 'base model' lists (see the collection sketch below). Example contents:

    base_model trainable_variables vars: 188, ['module/InceptionV3/Conv2d_1a_3x3/weights:0', 'module/InceptionV3/Conv2d_1a_3x3/BatchNorm/beta:0', ...]
    base_model model_variables vars: 188, ['module/InceptionV3/Conv2d_1a_3x3/BatchNorm/moving_mean:0', 'module/InceptionV3/Conv2d_1a_3x3/BatchNorm/moving_variance:0', ...]
    base_model variables vars: 0, []  # empty list
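For reference, here is a minimal sketch of how such per-category lists can be captured; the helper name and the name-based delta logic are my own reconstruction, not code from the original notebook:

    def new_vars_snapshot(seen_names):
        # Return the (trainable, model, other-global) variables that appeared
        # in the default graph since the previous call.
        def fresh(vs):
            return [v for v in vs if v.name not in seen_names]
        trainable = fresh(tf.compat.v1.trainable_variables())
        model = fresh(tf.compat.v1.model_variables())
        first_two = {v.name for v in trainable + model}
        other = [v for v in fresh(tf.compat.v1.global_variables()) if v.name not in first_two]
        seen_names.update(v.name for v in trainable + model + other)
        return trainable, model, other

    # right after the hub module is built:
    seen_names = set()
    base_trainable, base_model_vars, base_other = new_vars_snapshot(seen_names)

Calling the same helper again after the classifier and the train op are added yields the 'custom' and 'optimizer' lists shown further down.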

  3. Add a custom classification layer on top of the model:


    # the feature vector produced by the hub module is the input to the new layer
    hidden_layer = bottleneck_tensor
    batch_size, previous_tensor_size = hidden_layer.get_shape().as_list()
    ground_truth_input = tf.compat.v1.placeholder(tf.int64, [batch_size], name='GroundTruthInput')
    # class_count and final_tensor_name are defined elsewhere in the notebook
    initial_value = tf.random.truncated_normal([previous_tensor_size, class_count], stddev=0.001)
    layer_weights = tf.Variable(initial_value, name='final_weights')
    layer_biases = tf.Variable(tf.zeros([class_count]), name='final_biases')
    logits = tf.matmul(hidden_layer, layer_weights) + layer_biases
    final_tensor = tf.nn.softmax(logits, name=final_tensor_name)

  4. Again, put all the newly added variable names into three new separate 'custom' lists:

    custom trainable_variables vars: 2, ['final_weights:0', 'final_biases:0']
    custom model_variables vars: 0, []
    custom variables vars: 0, []

  5. Add the train op. Since the base model contains batch normalization, we have to take care of the update ops; that is why I use tf.contrib.training.create_train_op (a rough manual equivalent is sketched after the snippet):

    cross_entropy_mean = tf.compat.v1.losses.sparse_softmax_cross_entropy(labels=ground_truth_input, logits=logits)
    optimizer = tf.compat.v1.train.AdamOptimizer()

    # the update ops are set to the contents of the tf.GraphKeys.UPDATE_OPS collection.
    # variables to train will default to all tf.compat.v1.trainable_variables().
    train_step = tf.contrib.training.create_train_op(cross_entropy_mean, optimizer)
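For context, the update-op wiring that create_train_op performs is roughly equivalent to the classic manual pattern shown here (a sketch of the idea, not the library's actual implementation):

    update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        # the batch-norm moving-average updates now run before every weight update
        train_step = optimizer.minimize(
            cross_entropy_mean,
            global_step=tf.compat.v1.train.get_or_create_global_step())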
  6. Again, put all the newly added variable names into three new separate 'optimizer' lists:

    optimizer trainable_variables vars: 0, []
    optimizer model_variables vars: 0, []
    optimizer variables vars: 383, ['global_step:0', 'beta1_power:0', 'beta2_power:0', 'module/InceptionV3/Conv2d_1a_3x3/weights/Adam:0', 'module/InceptionV3/Conv2d_1a_3x3/weights/Adam_1:0', 'module/InceptionV3/Conv2d_1a_3x3/BatchNorm/beta/Adam:0', 'module/InceptionV3/Conv2d_1a_3x3/BatchNorm/beta/Adam_1:0', 'module/InceptionV3/Conv2d_2a_3x3/weights/Adam:0', 'module/InceptionV3/Conv2d_2a_3x3/weights/Adam_1:0', 'module/InceptionV3/Conv2d_2a_3x3/BatchNorm/beta/Adam:0', ...]

Now run the usual training loop:


    with tf.compat.v1.Session(graph=graph) as sess:
        # Initialize all weights: for the module to their pretrained values,
        # and for the newly added retraining layer to random initial values.
        init = tf.compat.v1.global_variables_initializer()
        sess.run(init)

        # dump the checksum for all the variable lists collected during graph building

        for i in range(1000):
            # Get a batch of input resized images values, calculated fresh
            (train_data, train_ground_truth) = get_random_batch_data(sess, image_lists....)

            # dump the checksum for all the variable lists collected during graph building


            # Feed the input placeholder and ground truth into the graph, and run a training
            # step.
            sess.run([train_step], feed_dict = {
                resized_input_tensor: train_data,
                ground_truth_input: train_ground_truth})

            # dump the checksum again for all the variable lists collected during graph building
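The checksum dumps referenced by the comments above were produced with a helper along these lines (my own reconstruction; only the 'label, sum, md5' output format is taken from the logs below):

    import hashlib
    import numpy as np

    def dump_checksum(label, var_list, sess):
        # Note: fetching the variables directly, as done here, is exactly the
        # flawed read pattern uncovered at the end of this post; it is what
        # produced the misleading 'unchanged' results below.
        values = sess.run(list(var_list)) if var_list else []
        total = float(np.sum([np.sum(x) for x in values])) if values else 0
        digest = hashlib.md5(b''.join(np.ascontiguousarray(x).tobytes() for x in values)).hexdigest()
        print('%s, %s, %s' % (label, total, digest))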

So, after each training step the checksum changes for only two of the variable lists (custom trainable and optimizer globals):


    base_model trainable_variables, 2697202.0, cf4682249fc1f48e9a346149f84e503d unchanged
    base_model model_variables, 2936996.0, 6f995f5f0f032604a49a96ceec576cf7 unchanged
    base_model variables, 0, d41d8cd98f00b204e9800998ecf8427e unchanged
    custom trainable_variables, -0.7915199408307672, 889c333a56b9496d412eacdcbeb3bef1 **changed**
    custom model_variables, 0, d41d8cd98f00b204e9800998ecf8427e unchanged
    custom variables, 0, d41d8cd98f00b204e9800998ecf8427e unchanged
    optimizer trainable_variables, 0, d41d8cd98f00b204e9800998ecf8427e unchanged
    optimizer model_variables, 0, d41d8cd98f00b204e9800998ecf8427e unchanged
    optimizer variables, 5580902.81437762, d2cb2d4b253a1c12452f560eea35ac42 **changed**

So the question is: why are the base model's trainable variables not changing? They include variables such as module/InceptionV3/Conv2d_1a_3x3/weights and BatchNorm/beta, which definitely should be updated during training. What's more, BatchNorm/moving_mean and BatchNorm/moving_variance should change as well, because the UPDATE_OPS collection is included as a dependency of the training step by the tf.contrib.training.create_train_op call. I checked the UPDATE_OPS list, and it contains valid entries such as:

    update ops: <tf.Operation 'module_apply_image_feature_vector_with_bn_hparams/InceptionV3/InceptionV3/Conv2d_1a_3x3/BatchNorm/AssignMovingAvg/AssignSubVariableOp' type=AssignSubVariableOp>,
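(For reference, that check amounts to dumping the collection:)

    for op in tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS):
        print(op)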

OK, after digging deeper into the issue, I found the problem: just taking a variable from the global variables list and calling eval() on it is not enough. It does return a value, but it is not the current one (at least that is what happens for the imported module's variables, which have dtype=resource).

To get the current value, we first have to obtain the value tensor via variable.value() or variable.read_value(), and then call eval() on that returned 'value' tensor.
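In code, the difference looks like this (a minimal illustration; the choice of variable is arbitrary):

    v = tf.compat.v1.global_variables()[0]       # e.g. a module variable with dtype=resource
    stale = v.eval(session=sess)                 # may return a non-current value for resource variables
    current = v.read_value().eval(session=sess)  # evaluates the actual, up-to-date value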

That solved the problem.