tensorflow, compute gradients with respect to weights that come from two models (encoder, decoder)

I have an encoder model and a decoder model (RNN). I want to compute the gradients and update the weights, but I'm a bit confused by what I've seen online so far. Which block is best practice? Is there any difference between the two options? The gradients seem to converge faster with Block 1, and I don't know why.

# BLOCK 1, in two operations
encoder_gradients, decoder_gradients = tape.gradient(
    loss, [encoder_model.trainable_variables, decoder_model.trainable_variables])
myoptimizer.apply_gradients(zip(encoder_gradients, encoder_model.trainable_variables))
myoptimizer.apply_gradients(zip(decoder_gradients, decoder_model.trainable_variables))

# BLOCK 2, in one operation
gradients = tape.gradient(
    loss, encoder_model.trainable_variables + decoder_model.trainable_variables)
myoptimizer.apply_gradients(
    zip(gradients, encoder_model.trainable_variables + decoder_model.trainable_variables))

You can verify this manually.

First, let's simplify the model: let both the encoder and the decoder be a single dense layer. This is mainly for simplicity, so that you can print out the weights before the gradients are applied, the gradients themselves, and the weights after the update.

import tensorflow as tf
import numpy as np
from copy import deepcopy

# create a simple model with one encoder and one decoder layer. 
class custom_net(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.encoder = tf.keras.layers.Dense(3, activation='relu')
        self.decoder = tf.keras.layers.Dense(3, activation='relu')
        
    def call(self, inp):
        return self.decoder(self.encoder(inp))

net = custom_net()

# create dummy input/output
# the decoder outputs 3 units, so the target must have shape (1, 3) to match
inp = np.random.randn(1, 1)
gt = np.random.randn(1, 3)

# set persistent to true since we will be accessing the gradient 2 times
with tf.GradientTape(persistent=True) as tape:
    out = net(inp)
    loss = tf.keras.losses.mean_squared_error(gt, out)
    
# get the gradients as mentioned in the question
enc_grad, dec_grad = tape.gradient(loss,
                             [net.encoder.trainable_variables, 
                              net.decoder.trainable_variables])
gradients = tape.gradient(loss,
                          net.encoder.trainable_variables + net.decoder.trainable_variables)
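
As a quick sanity check (this snippet is not in the original post), the per-variable gradients returned by the two calls are already identical:

# enc_grad + dec_grad is a flat list in the same order as `gradients`
for g1, g2 in zip(enc_grad + dec_grad, gradients):
    print(np.linalg.norm(g1 - g2))  # prints ~0 for every variable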

First, let's use a stateless optimizer such as SGD, which updates the weights according to the formula below, and compare it against the two approaches mentioned in the question.

new_weights = weights - learning_rate * gradients

# Block 1

myoptimizer = tf.keras.optimizers.SGD(learning_rate=1)

# store weights before updating the weights based on the gradients 
old_enc_weights = deepcopy(net.encoder.get_weights())
old_dec_weights = deepcopy(net.decoder.get_weights())

myoptimizer.apply_gradients(zip(enc_grad, net.encoder.trainable_variables))
myoptimizer.apply_gradients(zip(dec_grad, net.decoder.trainable_variables))

# manually calculate the weights after gradient update
# since the learning rate is 1, new_weights = weights - grad
cal_enc_weights = []
for weights, grad in zip(old_enc_weights, enc_grad):
    cal_enc_weights.append(weights-grad)

cal_dec_weights = []
for weights, grad in zip(old_dec_weights, dec_grad):
    cal_dec_weights.append(weights-grad)    
    
for weights, man_calc_weight in zip(net.encoder.get_weights(), cal_enc_weights):
    print(np.linalg.norm(weights-man_calc_weight))

for weights, man_calc_weight in zip(net.decoder.get_weights(), cal_dec_weights):
    print(np.linalg.norm(weights-man_calc_weight))

# block 2 
old_weights = deepcopy(net.encoder.trainable_variables + net.decoder.trainable_variables)
myoptimizer.apply_gradients(zip(gradients, net.encoder.trainable_variables + \
                                net.decoder.trainable_variables))
cal_weights = []
for weight, grad in zip(old_weights, gradients):
    cal_weights.append(weight-grad) 
    
for weight, man_calc_weight in zip(net.encoder.trainable_variables + net.decoder.trainable_variables, cal_weights):
    print(np.linalg.norm(weight-man_calc_weight))   

You will see that both approaches update the weights in exactly the same way.

I suspect you are using a stateful optimizer such as Adam or RMSProp. For such optimizers, each call to apply_gradients also updates the optimizer's internal parameters (step counter, moment estimates) based on the gradient values. In the first case these internal parameters are updated twice per training step, in the second case only once.
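
A minimal sketch of that difference, reusing net, enc_grad, dec_grad and gradients from above; here we only look at the step counter that every apply_gradients call advances:

# Block-1 style: two apply_gradients calls per training step
adam_a = tf.keras.optimizers.Adam(learning_rate=1e-3)
adam_a.apply_gradients(zip(enc_grad, net.encoder.trainable_variables))
adam_a.apply_gradients(zip(dec_grad, net.decoder.trainable_variables))
print(adam_a.iterations.numpy())  # 2 after a single training step

# Block-2 style: one apply_gradients call per training step
adam_b = tf.keras.optimizers.Adam(learning_rate=1e-3)
adam_b.apply_gradients(zip(gradients,
                           net.encoder.trainable_variables + net.decoder.trainable_variables))
print(adam_b.iterations.numpy())  # 1 after a single training step

Since Adam's bias correction depends on that counter, splitting one step into two calls means the decoder update is computed with a slightly different correction than the encoder update, which can explain small differences in convergence between the two blocks.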

If I were you, I would stick with the second option, since you are only performing one optimization step here.
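
For reference, a minimal sketch of a complete training step built around the second option (the names train_step, x and y are placeholders and not from the original code):

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        pred = net.decoder(net.encoder(x))
        loss = tf.reduce_mean(tf.keras.losses.mean_squared_error(y, pred))
    # one gradient computation and one apply_gradients call for all weights
    variables = net.encoder.trainable_variables + net.decoder.trainable_variables
    grads = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
    return loss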