Optimization in GPflow 2: why set autograph=False?
In the current notebook tutorials (GPflow 2.0), all @tf.function decorators include the option autograph=False, for example (https://gpflow.readthedocs.io/en/2.0.0-rc1/notebooks/advanced/gps_for_big_data.html):
@tf.function(autograph=False)
def optimization_step(optimizer, model: gpflow.models.SVGP, batch):
    with tf.GradientTape(watch_accessed_variables=False) as tape:
        tape.watch(model.trainable_variables)
        objective = -model.elbo(*batch)
    grads = tape.gradient(objective, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return objective
Does anyone know why this is, or what the reasoning behind it might be?
As far as I understand, autograph=True merely allows Python control flow to be converted into graph constructs. Is there any downside to setting/leaving it True, even when that feature is not needed?
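To illustrate what that conversion does, here is a minimal sketch (my own example, not from the GPflow docs): with autograph=True, a Python `if` on a tensor value is rewritten into graph ops such as tf.cond, whereas with autograph=False the same code would fail to trace, since a symbolic tensor has no Python truth value.

```python
import tensorflow as tf

# With autograph=True (the default), AutoGraph converts this
# data-dependent Python `if` into a graph-level conditional.
@tf.function(autograph=True)
def clip_negative(x):
    if x < 0:  # Python control flow on a tensor value
        return tf.zeros_like(x)
    return x

print(clip_negative(tf.constant(-3.0)))  # value 0.0
print(clip_negative(tf.constant(2.0)))   # value 2.0
```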
My guess is that it adds only a small overhead at graph tracing time, which should be negligible. Is that wrong?
Thanks
The reason we set autograph to False in most of our tf.function-wrapped objectives is that GPflow uses a multiple-dispatch Dispatcher that internally relies on generators. However, TensorFlow cannot deal with generator objects in autograph mode (see Capabilities and Limitations of AutoGraph), which led to these warnings:
WARNING:tensorflow:Entity <bound method Dispatcher.dispatch_iter of <dispatched sample_conditional>> appears to be a generator function. It will not be converted by AutoGraph.
WARNING: Entity <bound method Dispatcher.dispatch_iter of <dispatched sample_conditional>> appears to be a generator function. It will not be converted by AutoGraph.
WARNING:tensorflow:Entity <bound method Dispatcher.dispatch_iter of <dispatched conditional>> appears to be a generator function. It will not be converted by AutoGraph.
WARNING: Entity <bound method Dispatcher.dispatch_iter of <dispatched conditional>> appears to be a generator function. It will not be converted by AutoGraph.
We have known about this issue for a while but hadn't gotten around to actually fixing it - thanks for bringing it back to our attention. I have just created a PR that resolves the problem, so setting autograph to False will no longer be necessary. I hope the PR will be merged soon.
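For illustration, a minimal, hypothetical reproduction of the pattern described above (this is my own stand-in, not GPflow's actual Dispatcher code): a generator function called from inside a tf.function. AutoGraph cannot convert generator functions, so it falls back to running them as plain Python at tracing time, which is what the warnings report.

```python
import tensorflow as tf

# Hypothetical stand-in for Dispatcher.dispatch_iter: a generator
# function yielding candidate implementations.
def candidates():
    yield tf.square
    yield tf.abs

@tf.function  # autograph=True by default
def apply_all(x):
    # The generator is iterated as plain Python while tracing;
    # AutoGraph skips converting it (emitting the warning above in
    # older TF versions) instead of turning the loop into graph ops.
    for fn in candidates():
        x = fn(x)
    return x

print(apply_all(tf.constant(3.0)))  # square then abs: value 9.0
```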