How to convert a model in eager execution to a static graph and save it in a .pb file?

Suppose I have a model (a tf.keras.Model):

import tensorflow as tf
from tensorflow.keras import layers


class ContextExtractor(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.model = self.__get_model()

    def call(self, x, training=False, **kwargs):
        features = self.model(x, training=training)
        return features

    def __get_model(self):
        return self.__get_small_conv()

    def __get_small_conv(self):
        model = tf.keras.Sequential()
        model.add(layers.Conv2D(32, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))

        model.add(layers.Conv2D(32, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))

        model.add(layers.Conv2D(64, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))

        model.add(layers.Conv2D(128, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))

        model.add(layers.Conv2D(256, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))

        model.add(layers.GlobalAveragePooling2D())

        return model

I trained it and saved it like this:

checkpoint = tf.train.Checkpoint(
    model=self.model,
    global_step=tf.train.get_or_create_global_step())
checkpoint.save(weights_path / f'epoch_{epoch}')

This means I have two saved files: epoch_10-2.index and epoch_10-2.data-00000-of-00001

Now I want to deploy my model and get a .pb file. How can I do that? I suppose I need to rebuild my model in graph mode, load my weights into it, and then save it to a .pb file. How exactly do I do that?

You should get the session:

tf.keras.backend.get_session()
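
Note that get_session() only works when eager execution is not enabled, so you have to rebuild the model in a fresh graph-mode Python session first. A minimal sketch, assuming the ContextExtractor from the question; the placeholder shape is my assumption:

import tensorflow as tf  # run this in a session where eager execution is NOT enabled

model = ContextExtractor()

# Calling the model once on a graph-mode tensor builds its variables and outputs.
# The placeholder shape is an assumption; adjust it to your input images.
inputs = tf.placeholder(tf.float32, shape=(None, 224, 224, 3))
features = model(inputs, training=False)

sess = tf.keras.backend.get_session()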

Then freeze the model, as done in the example here: https://www.dlology.com/blog/how-to-convert-trained-keras-model-to-tensorflow-and-make-prediction/

def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    """
    Freezes the state of a session into a pruned computation graph.

    Creates a new computation graph where variable nodes are replaced by
    constants taking their current value in the session. The new graph will be
    pruned so subgraphs that are not necessary to compute the requested
    outputs are removed.
    @param session The TensorFlow session to be frozen.
    @param keep_var_names A list of variable names that should not be frozen,
                          or None to freeze all the variables in the graph.
    @param output_names Names of the relevant graph outputs.
    @param clear_devices Remove the device directives from the graph for better portability.
    @return The frozen graph definition.
    """
    from tensorflow.python.framework.graph_util import convert_variables_to_constants
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        # Graph -> GraphDef ProtoBuf
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = convert_variables_to_constants(session, input_graph_def,
                                                      output_names, freeze_var_names)
        return frozen_graph


frozen_graph = freeze_session(tf.keras.backend.get_session(),
                              output_names=[out.op.name for out in model.outputs])

Then save the model as a .pb file (also shown in the link above):

tf.train.write_graph(frozen_graph, "model", "tf_model.pb", as_text=False)
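
To sanity-check the exported file, you can load the frozen graph back and run it with plain TF 1.x APIs. A minimal sketch; the tensor names and the batch shape below are assumptions, so print the node names of the imported graph to find the real input/output names of your model:

import numpy as np
import tensorflow as tf

# Read the serialized GraphDef back from disk
with tf.gfile.GFile("model/tf_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    print([n.name for n in graph_def.node])  # find your real input/output names here

    # These two tensor names are assumptions for illustration only
    inp = graph.get_tensor_by_name("input_1:0")
    out = graph.get_tensor_by_name("global_average_pooling2d/Mean:0")

    with tf.Session(graph=graph) as sess:
        dummy_batch = np.zeros((1, 224, 224, 3), dtype=np.float32)  # shape is an assumption
        features = sess.run(out, feed_dict={inp: dummy_batch})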

If this is too much hassle, try saving the Keras model as .h5 (an HDF5 file) and then follow the instructions in the link above.
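
For the .h5 route, a minimal sketch. A subclassed tf.keras.Model cannot be saved to HDF5 directly, so this saves the inner Sequential model (self.model from the question); for the full save/load round trip the first Conv2D likely needs an explicit input_shape, which is my assumption here:

# Inside __get_small_conv(), give the first layer an input shape so the
# Sequential model can be fully serialized, e.g.:
#   layers.Conv2D(32, (3, 3), strides=(2, 2), padding='same',
#                 input_shape=(224, 224, 3))

# Then the inner Sequential model (self.model) can be written to HDF5
model.model.save("context_extractor.h5")

# ...and reloaded later, ready to be frozen as described in the linked post
reloaded = tf.keras.models.load_model("context_extractor.h5")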

From the TensorFlow documentation:

Write compatible code: The same code written for eager execution will also build a graph during graph execution. Do this by simply running the same code in a new Python session where eager execution is not enabled.
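
For example, the function below returns a concrete value when eager execution is enabled and a symbolic Tensor (to be evaluated in a tf.Session) when it is not, which is exactly the property that lets you rebuild the same model in graph mode:

import tensorflow as tf

def scale(x):
    # The very same line works in both eager and graph mode
    return tf.multiply(x, 2.0)

# With eager execution enabled this prints a concrete value right away;
# without it, `y` is a symbolic Tensor that needs sess.run(y) to evaluate.
y = scale(tf.constant([1.0, 2.0]))
print(y)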

Also from the same page:

To save and load models, tf.train.Checkpoint stores the internal state of objects, without requiring hidden variables. To record the state of a model, an optimizer, and a global step, pass them to a tf.train.Checkpoint:

import os
import tempfile

checkpoint_dir = tempfile.mkdtemp()
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
                           model=model,
                           optimizer_step=tf.train.get_or_create_global_step())

root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))

I recommend the last section of this page: https://www.tensorflow.org/guide/eager

Hope this helps.

Thanks to @BCJuan for the information; I found the solution.

For anyone looking for the answer to my question, see below.

Note: I assume you have already saved your model in checkpoint_dir and want to get it back in graph mode so you can save it as a .pb file.

# `images` has to exist as a graph-mode input tensor, e.g. a placeholder
# (the shape below is an assumption; adjust it to your data)
images = tf.placeholder(tf.float32, shape=(None, 224, 224, 3))

model = ContextExtractor()
predictions = model(images, training=False)

checkpoint = tf.train.Checkpoint(model=model, global_step=tf.train.get_or_create_global_step())
status = checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
status.assert_consumed()

with tf.Session() as sess:
    status.initialize_or_restore(sess)  # this is the main line for loading the weights

    # Actually, I don't know whether passing one batch through the graph is necessary or not
    img_batch = get_image(...)
    ans = sess.run(predictions, feed_dict={images: img_batch})

    frozen_graph = freeze_session(sess, output_names=[out.op.name for out in model.outputs])

# save your model
tf.train.write_graph(frozen_graph, "where/to/save", "tf_model.pb", as_text=False)
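
One caveat (my addition, not part of the original solution): for a subclassed model, model.outputs can be empty because the model was never built from an Input layer. If that happens, you can freeze on the tensor returned by the call itself, replacing the freeze_session line inside the with block:

# `predictions` is the tensor returned by model(images, training=False) above,
# so its op name identifies the graph output to keep when freezing
frozen_graph = freeze_session(sess, output_names=[predictions.op.name])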