How to extract the cell state and hidden state from an RNN model in tensorflow?
I am new to TensorFlow and have trouble understanding the RNN module. I am trying to extract the hidden/cell states from an LSTM.
For my code, I am using the implementation from https://github.com/aymericdamien/TensorFlow-Examples.
# TF 0.x-era imports, as used in the TensorFlow-Examples code this is based on;
# n_input, n_steps, n_hidden, n_classes and learning_rate are defined earlier in that example
import tensorflow as tf
from tensorflow.python.ops import rnn, rnn_cell

# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])

# Define weights
weights = {'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))}
biases = {'out': tf.Variable(tf.random_normal([n_classes]))}

def RNN(x, weights, biases):
    # Prepare data shape to match `rnn` function requirements
    # Current data input shape: (batch_size, n_steps, n_input)
    # Required shape: 'n_steps' tensors list of shape (batch_size, n_input)

    # Permuting batch_size and n_steps
    x = tf.transpose(x, [1, 0, 2])
    # Reshaping to (n_steps*batch_size, n_input)
    x = tf.reshape(x, [-1, n_input])
    # Split to get a list of 'n_steps' tensors of shape (batch_size, n_input)
    x = tf.split(0, n_steps, x)

    # Define a lstm cell with tensorflow
    #with tf.variable_scope('RNN'):
    lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)

    # Get lstm cell output
    outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)

    # Linear activation, using rnn inner loop last output
    return tf.matmul(outputs[-1], weights['out']) + biases['out'], states

pred, states = RNN(x, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializing the variables
init = tf.initialize_all_variables()
Now I want to extract the cell/hidden state for every time step of a prediction. The states are stored in an LSTMStateTuple of the form (c, h), which I can inspect by evaluating print states. However, calling print states.c.eval() (which, according to the documentation, should give me the values of the tensor states.c) yields an error saying that my variables are not initialized, even though I call it right after I have run a prediction. The code is here:
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    step = 1
    # List the variables created under the 'RNN' scope
    for v in tf.get_collection(tf.GraphKeys.VARIABLES, scope='RNN'):
        print v.name
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape data to get 28 seq of 28 elements
        batch_x = batch_x.reshape((batch_size, n_steps, n_input))
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        print states.c.eval()
        # Calculate batch accuracy
        acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
        step += 1
    print "Optimization Finished!"
The error message is:
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float
[[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
The states are also not visible in tf.all_variables(), only the trained matrix/bias tensors (as described here: Tensorflow: show or save forget gate values in LSTM). I don't want to build the whole LSTM from scratch, though, since I already have the states in the states variable; I just need to fetch them.
You can simply collect the values of states the same way you collect the accuracy. I guess pred_val, states_val, acc = sess.run([pred, states, accuracy], feed_dict={x: batch_x, y: batch_y}) should work fine (the fetches go into a single list, and fetching into fresh names avoids overwriting the graph tensors pred and states).
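A minimal sketch of that fix applied inside the question's training loop (assuming the same graph, placeholders and batch variables as above):

# Fetch the LSTMStateTuple together with the accuracy for the same batch
acc, states_val = sess.run([accuracy, states],
                           feed_dict={x: batch_x, y: batch_y})
# states_val is now an LSTMStateTuple of plain numpy arrays
print states_val.c  # cell state of the last time step, shape (batch_size, n_hidden)
print states_val.h  # hidden state of the last time step, shape (batch_size, n_hidden)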
A remark on your assumption: "states" does indeed only hold the values of the "hidden state" and the "memory cell" for the last time step.
"outputs" contains the "hidden state" for every time step, which is what you want (with dynamic_rnn its size is [batch_size, seq_len, hidden_size]; with rnn.rnn it comes back as a list of n_steps tensors of shape [batch_size, hidden_size]). So I assume you want the "outputs" variable, not "states". See the documentation.
I disagree with user3480922's answer. For the code:
outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
To extract the hidden state for every time_step of a prediction, you have to use outputs, because outputs holds the hidden state values for each time_step. However, I am not sure whether there is any way to store the cell state value for every time_step: the states tuple provides the cell state values, but only for the last time_step (a sketch of a workaround follows after the dump below).
For example, in the following sample with 5 time_steps, outputs holds the hidden state values for time_step = 0,...,4 (so outputs[4,:,:] is the hidden state at time_step=4), whereas the states tuple's h only has the hidden state values of time_step=4; the states tuple's c does have the cell state values at time_step=4, though. Note that outputs[4,:,:] and states.h are identical, marked with ** in the dump below.
outputs = [[[ 0.0589103 -0.06925126 -0.01531546 0.06108122]
[ 0.00861215 0.06067181 0.03790079 -0.04296958]
[ 0.00597713 0.03916606 0.02355802 -0.0277683 ]]
[[ 0.06252582 -0.07336216 -0.01607122 0.05024602]
[ 0.05464711 0.03219429 0.06635305 0.00753127]
[ 0.05385715 0.01259535 0.0524035 0.01696803]]
[[ 0.0853352 -0.06414541 0.02524283 0.05798233]
[ 0.10790729 -0.05008117 0.03003334 0.07391824]
[ 0.10205664 -0.04479517 0.03844892 0.0693808 ]]
[[ 0.10556188 0.0516542 0.09162509 -0.02726674]
[ 0.11425048 -0.00211394 0.06025286 0.03575509]
[ 0.11338984 0.02839304 0.08105748 0.01564003]]
**[[ 0.10072514 0.14767936 0.12387902 -0.07391471]
[ 0.10510238 0.06321315 0.08100517 -0.00940042]
[ 0.10553667 0.0984127 0.10094948 -0.02546882]]**]
states = LSTMStateTuple(c=array([[ 0.23870754, 0.24315512, 0.20842518, -0.12798975],
[ 0.23749796, 0.10797793, 0.14181322, -0.01695861],
[ 0.2413336 , 0.16692916, 0.17559692, -0.0453596 ]], dtype=float32), h=array(**[[ 0.10072514, 0.14767936, 0.12387902, -0.07391471],
[ 0.10510238, 0.06321315, 0.08100517, -0.00940042],
[ 0.10553667, 0.0984127 , 0.10094948, -0.02546882]]**, dtype=float32))
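Both points can be illustrated with hedged sketches under the same TF 0.x-era API (the names RNN_with_states and ManualRNN are hypothetical, not from the original post). First, exposing every per-time-step hidden state: with rnn.rnn, outputs is a Python list of n_steps tensors of shape (batch_size, n_hidden), which can be packed into one tensor (tf.pack is the TF 0.x name of what later became tf.stack):

# Hypothetical variant of the question's RNN() that also exposes all hidden states
def RNN_with_states(x, weights, biases):
    # Same preprocessing as in the question
    x = tf.transpose(x, [1, 0, 2])
    x = tf.reshape(x, [-1, n_input])
    x = tf.split(0, n_steps, x)
    lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
    outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
    # Pack the list of per-step outputs into one (n_steps, batch_size, n_hidden) tensor
    all_h = tf.pack(outputs)
    logits = tf.matmul(outputs[-1], weights['out']) + biases['out']
    return logits, states, all_h

pred, states, all_h = RNN_with_states(x, weights, biases)
# later, inside the session:
# h_val = sess.run(all_h, feed_dict={x: batch_x, y: batch_y})
# h_val[t] is the batch of hidden states at time step t

And if the cell state is needed at every time step as well, one workaround is to unroll the cell manually instead of calling rnn.rnn, recording the LSTMStateTuple after each step:

# Hypothetical manual unroll that records c and h at every time step.
# Assumes x has already been converted into a list of n_steps tensors of
# shape (batch_size, n_input), exactly as in the question's RNN().
lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
state = lstm_cell.zero_state(batch_size, tf.float32)
all_c, all_h = [], []
with tf.variable_scope('ManualRNN'):  # scope name is arbitrary
    for t, x_t in enumerate(x):
        if t > 0:
            # Share the cell's weights across all time steps
            tf.get_variable_scope().reuse_variables()
        output, state = lstm_cell(x_t, state)
        all_c.append(state.c)  # cell state at step t
        all_h.append(state.h)  # hidden state at step t (identical to output)
# Pack into (n_steps, batch_size, n_hidden) tensors
all_c = tf.pack(all_c)
all_h = tf.pack(all_h)

Fetching all_c or all_h in a sess.run call then yields the state for every time step at once.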