Difference between tf.train.Checkpoint and tf.train.Saver

I have found that there are several different ways to save/restore models and variables in TensorFlow (tf.train.Saver, tf.train.Checkpoint, tf.saved_model, and so on).

In the TensorFlow documentation, I found some differences between them:

  1. tf.saved_model is a thin wrapper around tf.train.Saver.
  2. tf.train.Checkpoint supports eager execution, but tf.train.Saver does not.
  3. tf.train.Checkpoint does not create a .meta file, yet it can still load the graph structure (this is the big question! How does it do that?).

How does tf.train.Checkpoint load the graph without a .meta file? Or, more generally, what is the difference between tf.train.Saver and tf.train.Checkpoint?

According to the TensorFlow docs:

Checkpoint.save and Checkpoint.restore write and read object-based checkpoints, in contrast to tf.train.Saver which writes and reads variable.name based checkpoints. Object-based checkpointing saves a graph of dependencies between Python objects (Layers, Optimizers, Variables, etc.) with named edges, and this graph is used to match variables when restoring a checkpoint. It can be more robust to changes in the Python program, and helps to support restore-on-create for variables when executing eagerly. Prefer tf.train.Checkpoint over tf.train.Saver for new code.
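The key point is that tf.train.Checkpoint never needs to load a graph at all: your Python code rebuilds the objects, and the checkpoint matches saved values to them by walking the object graph (attribute names as edges), not by a serialized .meta graph definition. A minimal TF 2.x sketch of this (the `Net` class, the variable value, and the checkpoint directory are all hypothetical):

```python
import tempfile
import tensorflow as tf

# A tiny trackable object: tf.train.Checkpoint follows the "model" edge,
# then the "w" attribute, to find this variable.
class Net(tf.Module):
    def __init__(self):
        self.w = tf.Variable(3.0)

net = Net()
ckpt = tf.train.Checkpoint(model=net)
prefix = tempfile.mkdtemp() + "/ckpt"
path = ckpt.save(prefix)  # writes .index/.data files only, no .meta file

# Restore into a *fresh* object: matching happens through the object
# graph (model -> w), so no stored graph structure is required.
net2 = Net()
net2.w.assign(0.0)
tf.train.Checkpoint(model=net2).restore(path)
print(net2.w.numpy())  # restored value: 3.0
```

This also explains "restore-on-create" from the quote: if `net2.w` had not existed yet when `restore` was called, the value would be queued and assigned as soon as the matching variable is created.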