Difference between tf.train.Checkpoint and tf.train.Saver
I've found that there are several different ways to save/restore models and variables in TensorFlow. In the TensorFlow documentation I found a few differences between them:

- tf.saved_model is a thin wrapper around tf.train.Saver.
- tf.train.Checkpoint supports eager execution, but tf.train.Saver does not.
- tf.train.Checkpoint does not create a .meta file, yet it can still load the graph structure (this is the big question! How does it do that?).

How does tf.train.Checkpoint load the graph without a .meta file? Or, more generally, what is the difference between tf.train.Saver and tf.train.Checkpoint?
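For reference, this is the tf.train.Saver workflow I'm comparing against — a minimal TF 1.x-style sketch (the variable name and paths are just placeholders), where the .meta file is what allows the graph to be re-imported:

```python
import tensorflow as tf  # TF 1.x API (tf.compat.v1 in TF 2.x)

# Save: tf.train.Saver writes name-based checkpoint files plus a .meta graph file
v = tf.get_variable("v", shape=[2], initializer=tf.zeros_initializer())
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Produces model.ckpt.index, model.ckpt.data-*, and model.ckpt.meta
    saver.save(sess, "/tmp/model.ckpt")

# Restore in a fresh graph: the .meta file (a MetaGraphDef) rebuilds the graph structure
tf.reset_default_graph()
with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph("/tmp/model.ckpt.meta")
    new_saver.restore(sess, "/tmp/model.ckpt")
    # Variables are matched by their variable.name strings, e.g. "v:0"
    print(sess.run(tf.get_default_graph().get_tensor_by_name("v:0")))
```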
According to the TensorFlow docs:
Checkpoint.save and Checkpoint.restore write and read object-based checkpoints, in contrast to tf.train.Saver which writes and reads variable.name based checkpoints. Object-based checkpointing saves a graph of dependencies between Python objects (Layers, Optimizers, Variables, etc.) with named edges, and this graph is used to match variables when restoring a checkpoint. It can be more robust to changes in the Python program, and helps to support restore-on-create for variables when executing eagerly. Prefer tf.train.Checkpoint over tf.train.Saver for new code.
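To make "object-based" concrete, here is a minimal eager/TF 2.x-style sketch (the model, optimizer, and paths are placeholders, not from the question). Note that tf.train.Checkpoint never saves the computation graph at all — your Python code re-creates the layers, and the checkpoint only stores variable values plus the object dependency graph used to match them, which is why no .meta file is needed:

```python
import tensorflow as tf  # TF 2.x, eager execution

net = tf.keras.Sequential([tf.keras.layers.Dense(1)])
opt = tf.keras.optimizers.Adam()

# The keyword names ("model", "optimizer") become named edges in the object graph
ckpt = tf.train.Checkpoint(model=net, optimizer=opt)
net(tf.zeros([1, 3]))                  # build the layer so its variables exist
path = ckpt.save("/tmp/tf_ckpt/demo")  # writes .index/.data files, but no .meta

# "Restore" = rebuild the same Python objects, then match variables via the object graph
net2 = tf.keras.Sequential([tf.keras.layers.Dense(1)])
ckpt2 = tf.train.Checkpoint(model=net2, optimizer=tf.keras.optimizers.Adam())
status = ckpt2.restore(path)
net2(tf.zeros([1, 3]))                 # restore-on-create: values land as variables are built
status.assert_existing_objects_matched()
```

So tf.train.Checkpoint does not load the graph from the checkpoint; it relies on your program to define the same objects again and only restores their variable values into them.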