How to connect the output tensor of a restored graph to the input of the default graph in tensorflow?
I'm new to tensorflow and have been stuck on this for several days.
I have the following pretrained model (4 files):
Classification.inception.model-27.data-00000-of-00001
Classification.inception.model-27.index
Classification.inception.model-27.meta
checkpoint
I can successfully restore this model as the default graph in a separate file, test.py:
with tf.Session() as sess:
    new_restore = tf.train.import_meta_graph('Classification.inception.model-27.meta')
    new_restore.restore(sess, tf.train.latest_checkpoint('/'))
    graph = tf.get_default_graph()
    input_data = graph.get_tensor_by_name('input_data:0')
    output = graph.get_tensor_by_name('logits/BiasAdd:0')
    ......
    logits = sess.run(output, feed_dict={input_data: mybatch})
    ......
The script above works fine because test.py is independent of train.py, so the graph restored this way becomes the default graph.
However, I don't know how to integrate this pretrained model into an existing graph, i.e. how to pass the tensor "output" into a new network (defined in python code, not in the restored graph), for example:
def main():
    ### load the meta file and restore the pretrained graph here #####
    new_restore = tf.train.import_meta_graph('Classification.inception.model-27.meta')
    new_restore.restore(sess, tf.train.latest_checkpoint('/'))
    graph = tf.get_default_graph()
    input_data = graph.get_tensor_by_name('input_data:0')
    output1 = graph.get_tensor_by_name('logits/BiasAdd:0')
    ......
    with tf.Graph().as_default():
        with tf.variable_scope(scope, 'InceptionResnetV1', [inputs], reuse=reuse):
            with slim.arg_scope([slim.batch_norm, slim.dropout], is_training=is_training):
                with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d]):
                    net = slim.conv2d(output1, 32, 3, stride=2, scope='Conv2d_1a_3x3')
However, I get an error when I pass the tensor output1 to slim.conv2d(). The message is:
ValueError: Tensor("InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/AssignMovingAvg:0", shape=(32,), dtype=float32_ref) is not an element of this graph.
How do people usually handle this problem (restoring a graph from a .meta file and connecting its output tensor to the input of the current default graph)?
I searched online and found something similar to my question (i.e. ), but I still feel it is quite different.
There are also similar approaches that restore ".ckpt" files, but I don't think they are what I'm looking for either.
Any comments and guidance would be greatly appreciated. Thanks.
Your problem is that with tf.Graph().as_default():
overrides your old graph:
Another typical usage involves the tf.Graph.as_default context manager, which overrides the current default graph for the lifetime of the context.
Simply remove this line to keep your old graph:
import tensorflow as tf
import numpy as np

const_input_dummy = np.random.randn(1, 28)

# create graph and save everything
x = tf.placeholder(dtype=tf.float32, shape=[1, 28], name='plhdr')
y = tf.layers.dense(x, 2, name='logits')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, {x: const_input_dummy}))
    saver = tf.train.Saver()
    saver.save(sess, './export/inception')

# reset everything so far (like creating another script)
tf.reset_default_graph()

# answer to question
with tf.Session() as sess:
    # import old graph structure
    restorer = tf.train.import_meta_graph('./export/inception.meta')
    # get reference to tensors from imported graph
    graph = tf.get_default_graph()
    x = graph.get_tensor_by_name("plhdr:0")
    y = graph.get_tensor_by_name('logits/BiasAdd:0')

    # add some new operations (and variables)
    with tf.variable_scope('new_scope'):
        y = tf.layers.dense(y, 1, name='other_layer')

    # init all variables ...
    sess.run(tf.global_variables_initializer())
    # ... then restore variables from file
    restorer.restore(sess, tf.train.latest_checkpoint('./export'))

    # this will execute without errors
    print(sess.run(y, {x: const_input_dummy}))
Usually there is no need to maintain several graphs, so I suggest working with a single graph.