ValueError with MultiRNNCell: Dimensions must be equal, but are 20 and 13
I am using Python 3.6.5 and TensorFlow 1.8.0.
The number of neurons is 10 at this point, and there are 3 inputs in this example.
I have built a recurrent neural network and now want to improve it. I need some help!
Here is a short excerpt of the code that reproduces my error; you can also replace the BasicRNN with an LSTM or a GRU to get the other messages.
import numpy as np
import tensorflow as tf

batch_size = 10
nr_inputs = 3
nr_outputs = 4
nr_steps = 4
nr_layers = 2

def mini_batch ( Xdata, ydata, batch_size ) :
    global global_counter
    result = None
    Xbatch = np.zeros( shape=[batch_size, nr_steps, nr_inputs], dtype = np.float32 )
    ybatch = np.zeros( shape=[batch_size, nr_outputs], dtype = np.float32 )
    return Xbatch, ybatch

X = tf.placeholder( tf.float32, [ None, nr_steps, nr_inputs ] )
y = tf.placeholder( tf.float32, [ None, nr_outputs ] )

neurons = tf.contrib.rnn.BasicRNNCell(num_units = 10)
neurons = tf.contrib.rnn.MultiRNNCell( [neurons] * nr_layers, state_is_tuple = True )

X_train = np.zeros( shape=[1000, nr_steps, nr_inputs], dtype = np.float32 )
y_train = np.zeros( shape=[1000, nr_outputs], dtype = np.float32 )
X_test = np.zeros( shape=[1000, nr_steps, nr_inputs], dtype = np.float32 )
y_test = np.zeros( shape=[1000, nr_outputs], dtype = np.float32 )

rnn_outputs, rnn_states = tf.nn.dynamic_rnn( neurons, X, dtype=tf.float32 )

logits = tf.contrib.layers.fully_connected( inputs = rnn_states, num_outputs = nr_outputs, activation_fn = None )
xentropy = tf.nn.sigmoid_cross_entropy_with_logits( labels = y, logits = logits )
loss = tf.reduce_mean( xentropy )
optimizer = tf.train.AdamOptimizer( learning_rate = 0.01 )
training_op = optimizer.minimize( loss )
init = tf.global_variables_initializer()

with tf.Session() as sess :
    init.run()
    global_counter = 0
    for epoch in range(100) :
        for iteration in range( 4) :
            X_batch, y_batch = mini_batch ( X_train, y_train, batch_size )
            sess.run( training_op, feed_dict={ X : X_batch, y : y_batch } )
        loss_train = loss.eval( feed_dict={ X : X_batch, y : y_batch } )
        loss_test = loss.eval( feed_dict={ X : X_test, y : y_test } )
sess.close()
I was trying neurons = tf.contrib.rnn.MultiRNNCell( [neurons] * nr_layers, state_is_tuple = True )
and got the error
ValueError: Dimensions must be equal, but are 20 and 13 for 'rnn/.../MatMul_1' (op: 'MatMul') with input shapes: [?,20], [13,10] for a tf.contrib.rnn.BasicRNNCell(num_units = nr_neurons),
with input shapes: [?,20], [13,20] for a tf.contrib.rnn.GRUCell(num_units = nr_neurons),
and with input shapes: [?,20], [13,40] for a tf.contrib.rnn.BasicLSTMCell(num_units = nr_neurons, state_is_tuple = True).
Is there an error in MatMul_1? Has anyone had a similar problem?
Thank you very much!
Instead of using the same BasicRNNCell instance several times, create one instance per RNN layer, for example like this:
neurons = [tf.contrib.rnn.BasicRNNCell(num_units=10) for _ in range(nr_layers)]
neurons = tf.contrib.rnn.MultiRNNCell( neurons, state_is_tuple = True )
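A note on why the original version fails: when the same cell object is reused for both layers, its weight matrix is created once for the first layer's input size (3 inputs + 10 state units = 13 columns), but the second layer then feeds it layer 1's 10-dim output concatenated with its own 10-dim state (20 columns), hence the 20 vs. 13 mismatch (and the [13, 20] / [13, 40] kernels you see for GRU and LSTM, whose kernels are 2x / 4x wider). As a quick sanity check, a minimal sketch along these lines should show one kernel per layer after the fix; the shapes in the comments are what I would expect from TF 1.8, so treat them as illustrative rather than output from your run:

import tensorflow as tf

nr_layers = 2
cells = [ tf.contrib.rnn.BasicRNNCell( num_units = 10 ) for _ in range(nr_layers) ]
multi = tf.contrib.rnn.MultiRNNCell( cells, state_is_tuple = True )
X = tf.placeholder( tf.float32, [ None, 4, 3 ] )   # [batch, nr_steps, nr_inputs]
outputs, states = tf.nn.dynamic_rnn( multi, X, dtype = tf.float32 )
# expected: cell_0 kernel [13, 10]  (3 inputs + 10 state units)
#           cell_1 kernel [20, 10]  (10 inputs from layer 1 + 10 state units)
for v in tf.trainable_variables() :
    print( v.name, v.shape )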
There is another error in your code as well: with state_is_tuple = True, rnn_states is a tuple holding the final state of each layer, so its shape is ((None, 10), (None, 10)). I assume you want to use the state of the top layer, so replace that line with:
logits = tf.contrib.layers.fully_connected( inputs = rnn_states[1], num_outputs = nr_outputs, activation_fn = None )
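To make the shapes concrete, here is a quick check you could run right after building the graph; the shapes in the comments are what I would expect with nr_layers = 2 and num_units = 10, not output from your run:

# rnn_states holds one final state per layer when state_is_tuple = True
print( len(rnn_states) )             # 2, one entry per layer
print( rnn_states[1].get_shape() )   # (?, 10), final state of the top layer
print( logits.get_shape() )          # (?, 4), matches the y placeholder

rnn_states[-1] works just as well and keeps selecting the top layer if you later change nr_layers.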
No problem!