tensorboard can't find event files

I'm trying to use TensorBoard to visualize an image classifier built with a DNN. I'm fairly sure the directory path is correct, but no data shows up. When I try tensorboard --inspect --logdir='PATH/' it returns: No event files found within logdir 'PATH/'

So I suspect there must be something wrong with my code.

Graph

import tensorflow as tf

# image_size, num_labels and the train/valid/test arrays are assumed to be
# defined by earlier data-loading steps of the notebook.
batch_size = 500

graph = tf.Graph()
with graph.as_default():

  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  with tf.name_scope('train_input'):
    tf_train_dataset = tf.placeholder(tf.float32,
                                      shape=(batch_size, image_size * image_size),
                                      name = 'train_x_input')

    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels),
                                     name = 'train_y_input')
  with tf.name_scope('validation_input'):
    tf_valid_dataset = tf.constant(valid_dataset, name = 'valid_x_input')
    # the test set is distinct from the validation set, so name it accordingly
    tf_test_dataset = tf.constant(test_dataset, name = 'test_x_input')

  # Variables.
  with tf.name_scope('layer'):
    with tf.name_scope('weights'):
        weights = tf.Variable(
            tf.truncated_normal([image_size * image_size, num_labels]),
            name = 'W')
        variable_summaries(weights)
    with tf.name_scope('biases'):
        biases = tf.Variable(tf.zeros([num_labels]), name = 'B')
        variable_summaries(biases)
  # Training computation.
  with tf.name_scope('Wx_plus_b'):
    logits = tf.matmul(tf_train_dataset, weights) + biases
    tf.summary.histogram('logits', logits)
  with tf.name_scope('loss'):
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits),
        name = 'loss')
    tf.summary.histogram('loss', loss)
    tf.summary.scalar('loss_scalar', loss)

  # Optimizer.
  with tf.name_scope('optimizer'):
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(logits)
  valid_prediction = tf.nn.softmax(tf.matmul(tf_valid_dataset, weights) + biases)
  test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
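The graph code calls variable_summaries, which isn't shown in the question. It is presumably the helper from the TensorFlow summaries tutorial; a minimal sketch of it (TF 1.x API), which would need to be defined before the graph block above, looks like this:

def variable_summaries(var):
  """Attach mean/stddev/min/max/histogram summaries to a tensor (TF 1.x)."""
  with tf.name_scope('summaries'):
    mean = tf.reduce_mean(var)
    tf.summary.scalar('mean', mean)
    with tf.name_scope('stddev'):
      stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
    tf.summary.scalar('stddev', stddev)
    tf.summary.scalar('max', tf.reduce_max(var))
    tf.summary.scalar('min', tf.reduce_min(var))
    tf.summary.histogram('histogram', var)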

Run

import time

num_steps = 1001
t1 = time.time()
with tf.Session(graph=graph) as session:
  merged = tf.summary.merge_all()
  writer = tf.summary.FileWriter('C:/Users/Dr_Chenxy/Documents/pylogs', session.graph)
  tf.global_variables_initializer().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]   # choose training set for this iteration
    batch_labels = train_labels[offset:(offset + batch_size), :]  # choose labels for this iteration
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
    # Also run the merged summary op so the histogram/scalar summaries
    # defined in the graph are actually evaluated and written to disk.
    _, l, predictions, summary = session.run(
        [optimizer, loss, train_prediction, merged], feed_dict=feed_dict)
    writer.add_summary(summary, step)
    if (step % 100 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
  print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
t2 = time.time()
print('Running time', t2-t1, 'seconds')

Solved. For anyone like me who isn't good with the command line: the problem is that on the command line, you should NOT wrap your directory in quotes (''). Say your data is at 'X:\X\file.x'. First cd to X:\ on the command line, then run: tensorboard --logdir=X/ not tensorboard --logdir='X/'
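Before pointing TensorBoard at a directory, it can also help to confirm that event files were actually written there. A quick check, assuming the log directory from the question (FileWriter output is named with the standard events.out.tfevents prefix):

import glob
import os

logdir = 'C:/Users/Dr_Chenxy/Documents/pylogs'  # path used in the question
# FileWriter names its output events.out.tfevents.<timestamp>.<hostname>
for f in glob.glob(os.path.join(logdir, 'events.out.tfevents.*')):
    print(f, os.path.getsize(f), 'bytes')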

with tf.Session() as sess:
     writer = tf.summary.FileWriter("output", sess.graph)

Windows OS. The TensorBoard output folder is created relative to the directory from which file.py is run (the current working directory). So if you run your script from your Windows Documents folder, you can try this at the command prompt: tensorboard --logdir=C:\Users\YourName\Documents\output
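To see exactly where that relative "output" directory will land, you can print its absolute path from the script itself; a one-line sketch:

import os
# "output" resolves against the current working directory, not the script's folder
print(os.path.abspath('output'))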