How to write summary log using tensorflow for logistic regression on MNIST data?
I am new to tensorflow and tensorboard. This is my first time implementing logistic regression on MNIST data with tensorflow. I have the logistic regression working, and now I am trying to log summaries to a log file using tf.summary.FileWriter.

Here is the code that sets up the summary ops:
x = tf.placeholder(dtype=tf.float32, shape=(None, 784))
y = tf.placeholder(dtype=tf.float32, shape=(None, 10))
loss_op = tf.losses.mean_squared_error(y, pred)
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
tf.summary.scalar("loss", loss_op)
tf.summary.scalar("training_accuracy", accuracy_op)
summary_op = tf.summary.merge_all()
And this is how I train the model:

with tf.Session() as sess:
    sess.run(init)
    writer = tf.summary.FileWriter('./graphs', sess.graph)
    for iter in range(50):
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        _, loss, tr_acc, summary = sess.run([optimizer_op, loss_op, accuracy_op, summary_op],
                                            feed_dict={x: batch_x, y: batch_y})
        summary = sess.run(summary_op, feed_dict={x: batch_x, y: batch_y})
        writer.add_summary(summary, iter)
After adding the line that fetches the merged summary, I get the following error:
InvalidArgumentError (see above for traceback):
You must feed a value for placeholder tensor 'Placeholder_37'
with dtype float and shape [?,10]
The error points to the declaration of y:
y = tf.placeholder(dtype=tf.float32, shape=(None, 10))
Can you help me figure out what I am doing wrong?
Judging from the error message, it looks like you are running your code in some kind of jupyter environment. Try restarting the kernel/runtime and running everything again. Graph-mode code does not play well with jupyter: every re-run of a cell adds new nodes (including new placeholders) to the same default graph. If I run my code below, the first time it does not return any errors; when I run it a second time (without restarting the kernel/runtime) it crashes the same way yours does.
I was too lazy to check it on an actual model, so I just set pred = y. ;)
But the code below does not crash, so you should be able to adapt it to your needs. I have tested it in Google Colab.
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

x = tf.placeholder(dtype=tf.float32, shape=(None, 784), name='x-input')
y = tf.placeholder(dtype=tf.float32, shape=(None, 10), name='y-input')

pred = y

loss_op = tf.losses.mean_squared_error(y, pred)
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.name_scope('summaries'):
    tf.summary.scalar("loss", loss_op, collections=["train_summary"])
    tf.summary.scalar("training_accuracy", accuracy_op, collections=["train_summary"])

with tf.Session() as sess:
    summary_op = tf.summary.merge_all(key='train_summary')
    train_writer = tf.summary.FileWriter('./graphs', sess.graph)
    sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])
    for iter in range(50):
        batch_x, batch_y = mnist.train.next_batch(1)
        loss, acc, summary = sess.run([loss_op, accuracy_op, summary_op],
                                      feed_dict={x: batch_x, y: batch_y})
        train_writer.add_summary(summary, iter)
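As a sketch of how the snippet above might be adapted back to a real model: the version below calls tf.reset_default_graph() at the top of the cell, which is another way to avoid the duplicate-placeholder problem when re-running in jupyter, and replaces pred = y with an actual logistic-regression layer. It is written against the TF 1.x API via tf.compat.v1 and uses random stand-in data instead of downloading MNIST, so it is self-contained; the learning rate, batch size, and variable initializers are my own assumptions, not taken from the question.

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Start from a clean graph so re-running this cell does not create
# duplicate placeholders (the cause of the 'Placeholder_37' error).
tf.reset_default_graph()

x = tf.placeholder(tf.float32, shape=(None, 784), name='x-input')
y = tf.placeholder(tf.float32, shape=(None, 10), name='y-input')

# A minimal logistic-regression model in place of pred = y.
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
pred = tf.nn.softmax(tf.matmul(x, W) + b)

loss_op = tf.losses.mean_squared_error(y, pred)
optimizer_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss_op)
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

tf.summary.scalar("loss", loss_op)
tf.summary.scalar("training_accuracy", accuracy_op)
summary_op = tf.summary.merge_all()

# Random stand-in data so the sketch runs without downloading MNIST.
rng = np.random.RandomState(0)
batch_x = rng.rand(32, 784).astype(np.float32)
batch_y = np.eye(10, dtype=np.float32)[rng.randint(0, 10, 32)]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('./graphs', sess.graph)
    for step in range(10):
        _, loss, acc, summary = sess.run(
            [optimizer_op, loss_op, accuracy_op, summary_op],
            feed_dict={x: batch_x, y: batch_y})
        writer.add_summary(summary, step)
    writer.close()
```

Because the whole graph is rebuilt from scratch on every run, the placeholder names stay stable ('x-input', 'y-input') no matter how many times the cell is executed.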