Why does TensorFlow give me an error that I am feeding the incorrect shape and type into a placeholder?
I can't figure this out. I have been going back and forth on it, and I know I could just copy and paste a working tutorial, but I want to understand why this isn't working.
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
#simple constants
learning_rate = .01
batch_size = 100
training_epoch = 10
t = 0
l = t
#gather the data
x_train = mnist.train.images
y_train = mnist.train.labels
batch_count = int(len(x_train)/batch_size)
#Set the variables
Y_ = tf.placeholder(tf.float32, [None,10], name = 'Labels')
X = tf.placeholder(tf.float32,[None,784], name = 'Inputs')
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
#Build the graph (Y = WX + b)
Y = tf.nn.softmax(tf.matmul(X,W) + b, name = 'softmax')
cross_entropy = -tf.reduce_mean(Y_ * tf.log(Y)) * 1000.0
correct_prediction = tf.equal(tf.argmax(Y,1), tf.argmax(Y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(training_epoch):
        for i in range(batch_count):
            t += batch_size
            print(y_train[l:t].shape)
            print(x_train[l:t].shape)
            print(y_train[l:t].dtype)
            sess.run(train_step, feed_dict={X: x_train[l:t], Y: y_train[l:t]})
            l = t
        print('Epoch = ', epoch)
    print("Accuracy: ", accuracy.eval(feed_dict={X: x_test, Y_: y_test}))
    print('Done')
Error message:
InvalidArgumentError: You must feed a value for placeholder tensor 'Labels_2' with dtype float and shape [?,10]
[[Node: Labels_2 = Placeholder[dtype=DT_FLOAT, shape=[?,10], _device="/job:localhost/replica:0/task:0/device:GPU:0"]
I also understand that there is more I need to add to make this work properly, but for now I want to work through this problem myself. I am running this in a Jupyter notebook. I am sure that y_train has a shape of (100, 10) and a dtype of float64.
I have been stuck on this for days, so any help is appreciated.
You need to feed the placeholder tensor Y_ when you call sess.run. In the feed_dict, simply change Y: y_train[l:t] to Y_: y_train[l:t]. That feeds y_train[l:t] into the placeholder.
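To make the reasoning concrete, here is a minimal self-contained sketch (the names labels, loss, and batch are made up for illustration, not taken from your code). A feed only satisfies a placeholder when the placeholder tensor itself is a key in feed_dict; running an op that still depends on an unfed placeholder raises exactly the InvalidArgumentError you are seeing:

import tensorflow as tf
import numpy as np

# a tiny graph: one placeholder (playing the role of Y_) and an op that depends on it
labels = tf.placeholder(tf.float32, [None, 10], name='labels')
loss = tf.reduce_mean(labels)

batch = np.zeros((100, 10), dtype=np.float32)

with tf.Session() as sess:
    # works: the placeholder tensor itself is the feed_dict key
    print(sess.run(loss, feed_dict={labels: batch}))

    # raises InvalidArgumentError: "You must feed a value for placeholder tensor 'labels'"
    # because loss depends on labels and no key in feed_dict is that placeholder
    # print(sess.run(loss, feed_dict={}))

In your code the key Y refers to the softmax output, not the placeholder, so Y_ is never fed and the same error is raised.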
Change Y to Y_ in this line:
sess.run(train_step,feed_dict={X: x_train[l:t], Y_: y_train[l:t]})