Error: Tensorflow BRNN logits and labels must be same size

I am getting this error:

InvalidArgumentError (see above for traceback): logits and labels must
be same size: logits_size=[10,9] labels_size=[7040,9]  [[Node:
SoftmaxCrossEntropyWithLogits =
SoftmaxCrossEntropyWithLogits[T=DT_FLOAT,
_device="/job:localhost/replica:0/task:0/gpu:0"](Reshape, Reshape_1)]]

But I can't find which tensor causes this error... I think it comes from a size mismatch...

My input size is batch_size * n_steps * n_input,

so it will be 10*704*100, and I want the output to be

batch_size * n_steps * n_classes => i.e. 10*704*9, produced by a bidirectional RNN.

How should I change this code to fix the error?

batch_size is the number of data items, like this:

data 1: ABCABCAAADDD... ... data 10: ABCCCCABCDBBAA...

n_steps is the length of each data item (each item is padded with 'O' to a fixed length): 704

n_input is how each letter within a data item is represented, like this: A - [1, 2, 1, -1, ..., -1]

The learned output should look like this: output of data 1: XYZYXYZYYXY ... ... output of data 10: ZXYYRZYZZ ...

Each output letter is influenced by the surrounding input letters and their order.
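
Just to make the sizes concrete, here is a minimal sketch of a dummy batch with these shapes (random values standing in for the real letter encodings; this is only an illustration, not my actual data pipeline):

import numpy as np

# illustrative stand-ins with the shapes described above
batch_x = np.random.randn(10, 704, 100).astype(np.float32)  # batch_size x n_steps x n_input
batch_y = np.zeros((10, 704, 9), dtype=np.float32)          # batch_size x n_steps x n_classes (one-hot)
batch_y[..., 0] = 1.0                                        # e.g. every step labelled with the first class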

import tensorflow as tf
from tensorflow.contrib import rnn  # TF 1.x

learning_rate = 0.001
training_iters = 100000
batch_size = 10
display_step = 10
# Network Parameters
n_input = 100   # encoding size of each letter
n_steps = 704   # timesteps (padded length of each sequence)
n_hidden = 50   # hidden layer num of features
n_classes = 9   # number of output letters

x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_steps, n_classes])

weights = {
    'out': tf.Variable(tf.random_normal([2*n_hidden, n_classes]))
}
biases = {
    'out': tf.Variable(tf.random_normal([n_classes]))
}
def BiRNN(x, weights, biases):
    x = tf.unstack(tf.transpose(x, perm=[1, 0, 2]))

    # Forward direction cell
    lstm_fw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    # Backward direction cell
    lstm_bw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    # Get lstm cell output
    try:
        outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
                                                     dtype=tf.float32)
    except Exception:  # Old TensorFlow versions only return outputs, not states
        outputs = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
                                               dtype=tf.float32)
    # Linear activation, using rnn inner loop last output
    return tf.matmul(outputs[-1], weights['out']) + biases['out']
pred = BiRNN(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Evaluate model
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    step = 1
    while step * batch_size < training_iters:
        batch_x, batch_y = next_batch(batch_size, r_big_d, y_r_big_d)
        #batch_x = batch_x.reshape((batch_size, n_steps, n_input))
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        if step % display_step == 0:
            # Calculate batch accuracy
            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
            # Calculate batch loss
            loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
            print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
                  "{:.6f}".format(loss) + ", Training Accuracy= " + \
                  "{:.5f}".format(acc))
        step += 1
    print("Optimization Finished!")
    test_x, test_y = next_batch(batch_size, v_big_d, y_v_big_d)
    print("Testing Accuracy:", \
        sess.run(accuracy, feed_dict={x: test_x, y: test_y}))

The first return value of static_bidirectional_rnn is a list of tensors - one per RNN step. By using only the last one in your tf.matmul you are throwing away all the rest. Instead, stack them into a single tensor of the appropriate shape, reshape for the matmul, then reshape back.

outputs = tf.stack(outputs, axis=1)                              # (batch_size, n_steps, 2*n_hidden)
outputs = tf.reshape(outputs, (batch_size*n_steps, 2*n_hidden))  # flatten time steps for the matmul
outputs = tf.matmul(outputs, weights['out']) + biases['out']     # (batch_size*n_steps, n_classes)
outputs = tf.reshape(outputs, (batch_size, n_steps, n_classes))  # back to per-step logits
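
One extra point (my own addition, not part of the original answer): with pred now shaped (batch_size, n_steps, n_classes) to match y, the evaluation lines in the question should also take the argmax over the class axis rather than axis 1, roughly:

correct_pred = tf.equal(tf.argmax(pred, 2), tf.argmax(y, 2))  # per-step class comparison
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))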

Alternatively, you can use tf.einsum:

outputs = tf.stack(outputs, axis=1)
outputs = tf.einsum('ijk,kl->ijl', outputs, weights['out']) + biases['out']
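
The einsum version also avoids hard-coding batch_size and n_steps into a reshape, so it keeps working when the batch dimension is left as None.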