How to use multiple summary writers with `tf.train.Supervisor` for TensorBoard
I'd like to do something similar to the `train_writer` and `test_writer` from the TensorBoard tutorial, but using `tf.train.Supervisor`. However, I'm not sure of the best way to approach this.

Pseudocode:
train_op = # ...
train_summaries = # ...
test_summaries = # ...

config = tf.ConfigProto(allow_soft_placement=True)
sv = tf.train.Supervisor(
    logdir=????,
    summary_op=????,
    summary_writer=????,
)

with sv.managed_session(config=config) as sess:
    while not sv.should_stop():
        sess.run(train_op)
So my question is: how do I write `train_summaries` and `test_summaries` to different directories, e.g. `./logdir/train` and `./logdir/test/`?
You are looking for `summary_computed`. Its docstring shows how to use a custom summary writer: you can't have the `Supervisor` manage it automatically, but doing it yourself is straightforward. From https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/supervisor.py:
# Create a Supervisor with no automatic summaries.
sv = Supervisor(logdir='/tmp/mydir', is_chief=is_chief, summary_op=None)
# As summary_op was None, managed_session() does not start the
# summary thread.
with sv.managed_session(FLAGS.master) as sess:
    for step in xrange(1000000):
        if sv.should_stop():
            break
        if is_chief and step % 100 == 0:
            # Create the summary every 100 chief steps.
            sv.summary_computed(sess, sess.run(my_summary_op))
        else:
            # Train normally
            sess.run(my_train_op)
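Applying this to the question's two-directory layout, one possible sketch (not from the docstring, and assuming TensorFlow 1.x where `tf.train.Supervisor` and `tf.summary.FileWriter` exist) is to point the Supervisor's own writer at `./logdir/train` and keep a second, manually managed `FileWriter` for `./logdir/test`. Here `train_op`, `train_summaries`, and `test_summaries` are the placeholders from the question's pseudocode:

```python
import tensorflow as tf

# train_op, train_summaries, test_summaries: assumed to be defined
# elsewhere in the graph, as in the question's pseudocode.

# The Supervisor's automatic writer logs to ./logdir/train;
# summary_op=None disables the automatic summary thread so we
# decide ourselves when and what to write.
sv = tf.train.Supervisor(logdir='./logdir/train', summary_op=None)

# A second, manually managed writer for test summaries.
test_writer = tf.summary.FileWriter('./logdir/test')

config = tf.ConfigProto(allow_soft_placement=True)
with sv.managed_session(config=config) as sess:
    step = 0
    while not sv.should_stop():
        sess.run(train_op)
        if step % 100 == 0:
            # Train summaries go through the Supervisor's writer
            # (-> ./logdir/train).
            sv.summary_computed(sess, sess.run(train_summaries))
            # Test summaries go to the separate writer (-> ./logdir/test).
            test_writer.add_summary(sess.run(test_summaries), step)
        step += 1
    test_writer.close()
```

TensorBoard then shows the two runs side by side when pointed at `./logdir`, just as in the tutorial's `train_writer`/`test_writer` setup.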