Tensorflow probability - Bijector training

I have been trying to follow the example in this tutorial, but I have not been able to train any variables.

I wrote a small example of my own, but I could not get it to work either:

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

# Train a shift bijector
shift = tf.Variable(initial_value=tf.convert_to_tensor([1.0], dtype=tf.float32), trainable=True, name='shift_var')
bijector = tfp.bijectors.Shift(shift=shift)

# Input
x = tf.convert_to_tensor(np.array([0]), dtype=tf.float32)
target = tf.convert_to_tensor(np.array([2]), dtype=tf.float32)

optimizer = tf.optimizers.Adam(learning_rate=0.5)
nsteps = 1

print(bijector(x).numpy(), bijector.shift)
for _ in range(nsteps):

    with tf.GradientTape() as tape:
        out = bijector(x)
        loss = tf.math.square(tf.math.abs(out - target))
        #print(out, loss)

    # Compute gradients after exiting the tape context
    gradients = tape.gradient(loss, bijector.trainable_variables)
    optimizer.apply_gradients(zip(gradients, bijector.trainable_variables))
    
print(bijector(x).numpy(), bijector.shift)

For nsteps = 1, the two print statements produce the following output:

[1.] <tf.Variable 'shift_var:0' shape=(1,) dtype=float32, numpy=array([1.], dtype=float32)>
[1.] <tf.Variable 'shift_var:0' shape=(1,) dtype=float32, numpy=array([1.4999993], dtype=float32)>

Even though the printed value of bijector.shift has been updated, the bijector still seems to use the original shift.
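
One way to see the reuse directly (a small diagnostic sketch added here for illustration; the caching behavior is confirmed in the answer at the bottom): while the first result is still referenced, a second forward call returns the very same tensor.

y1 = bijector(x)
y2 = bijector(x)
print(y1 is y2)  # True: the bijector returns the cached tensor instead of recomputing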

I cannot increase nsteps, because after the first iteration the gradients are None and I get this error:

ValueError: No gradients provided for any variable: ['shift_var:0'].

I am using:

tensorflow version 2.3.0
tensorflow-probability version 0.11.0

Suspecting a version issue, I also tried it in a Colab notebook.

I am still not sure I fully understand what is going on here, but at least I can get my example to run now.

For some reason, the behavior is different if I wrap everything in a class that inherits from tf.keras.Model:

class BijectorModel(tf.keras.Model):

    def __init__(self):
        super().__init__()
        # Assigning the variable as an attribute lets the Model track it,
        # so it shows up in model.trainable_variables
        self.shift = tf.Variable(initial_value=tf.convert_to_tensor([1.5], dtype=tf.float32), trainable=True, name='shift_var')
        self.bijector = tfp.bijectors.Shift(shift=self.shift)

    def call(self, input):
        return self.bijector(input)
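
Wrapping the variable in a tf.keras.Model also makes it discoverable through model.trainable_variables, which the training function below relies on. A quick check (my illustrative snippet, output abbreviated):

model = BijectorModel()
print(model.trainable_variables)
# [<tf.Variable 'shift_var:0' shape=(1,) dtype=float32, numpy=array([1.5], dtype=float32)>]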

I put the training iteration into a function, although that does not seem to be necessary:

def training_iteration(model, input, target):
    # Plain SGD holds no optimizer state, so recreating it on every call is harmless
    optimizer = tf.optimizers.SGD(learning_rate=0.1)

    with tf.GradientTape() as tape:
        loss = tf.math.square(tf.math.abs(model(input) - target))

    # Compute gradients after exiting the tape context
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

Running it like this:

x = tf.convert_to_tensor(np.array([0]), dtype=tf.float32)
target = tf.convert_to_tensor(np.array([2]), dtype=tf.float32)
model = BijectorModel()

nsteps = 10
for i in range(nsteps):
    training_iteration(model, x, target)
    print('Iteration {}: Output {}'.format(i, model(x).numpy()))

produces the expected/desired output:

Iteration 0: Output [1.6]
Iteration 1: Output [1.6800001]
Iteration 2: Output [1.7440001]
Iteration 3: Output [1.7952001]
Iteration 4: Output [1.8361601]
Iteration 5: Output [1.8689281]
Iteration 6: Output [1.8951424]
Iteration 7: Output [1.916114]
Iteration 8: Output [1.9328911]
Iteration 9: Output [1.9463129]

My conclusion is that trainable variables are handled differently when they are part of a model than when they are accessed through the bijector object.

You found a bug. The bijector forward function weakly caches the result -> input mapping to make downstream inverses and log-determinants fast. But somehow this also interferes with gradients. The workaround is to add del out, as shown in https://colab.research.google.com/gist/brianwa84/04249c2e9eb089c2d748d05ee2c32762/bijector-cache-bug.ipynb
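
Applied to the loop from the question, the workaround looks roughly like this (a minimal sketch, assuming the same TF 2.3 / TFP 0.11 setup): dropping the reference to out lets the weakly cached forward result be garbage-collected, so the next bijector(x) call is recomputed under the new tape.

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

shift = tf.Variable(tf.convert_to_tensor([1.0], dtype=tf.float32), trainable=True, name='shift_var')
bijector = tfp.bijectors.Shift(shift=shift)

x = tf.convert_to_tensor(np.array([0]), dtype=tf.float32)
target = tf.convert_to_tensor(np.array([2]), dtype=tf.float32)
optimizer = tf.optimizers.Adam(learning_rate=0.5)

for _ in range(10):
    with tf.GradientTape() as tape:
        out = bijector(x)
        loss = tf.math.square(tf.math.abs(out - target))
    gradients = tape.gradient(loss, bijector.trainable_variables)
    optimizer.apply_gradients(zip(gradients, bijector.trainable_variables))
    del out  # release the weakly cached forward result before the next iteration

print(bijector(x).numpy(), bijector.shift)

With the reference dropped each step, the output should converge toward the target instead of staying frozen at the first cached value.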