TensorFlow dynamic_rnn regressor: ValueError dimension mismatch
I want to build a toy LSTM model for regression. This nice tutorial is already too complicated for a beginner.

Given a sequence of length time_steps, predict the next value. Consider time_steps=3 and the sequences:
array([[[ 1.],
        [ 2.],
        [ 3.]],

       [[ 2.],
        [ 3.],
        [ 4.]],
       ...
The target values should be:
array([ 4., 5., ...
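For concreteness, here is a minimal numpy sketch of how such training data could be built (the helper name make_sequences is mine, not from the question):

import numpy as np

def make_sequences(values, time_steps):
    # Slide a window of length time_steps over the series; the value
    # immediately after each window is its regression target.
    X = np.array([values[i:i + time_steps]
                  for i in range(len(values) - time_steps)])
    y = values[time_steps:]
    # Add a trailing feature axis: (num_examples, time_steps, 1)
    return X[..., np.newaxis], y

X_train, y_train = make_sequences(np.arange(1., 11.), time_steps=3)
# X_train[0] == [[1.], [2.], [3.]], y_train[0] == 4.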
I define the model as follows:
import tensorflow as tf

# Network Parameters
time_steps = 3
num_neurons = 64  # (arbitrary)
n_features = 1

# tf Graph input
x = tf.placeholder("float", [None, time_steps, n_features])
y = tf.placeholder("float", [None, 1])

# Define weights
weights = {
    'out': tf.Variable(tf.random_normal([num_neurons, 1]))
}
biases = {
    'out': tf.Variable(tf.random_normal([1]))
}

# LSTM model
def lstm_model(X, weights, biases, learning_rate=0.01, optimizer='Adagrad'):
    # Prepare data shape to match `rnn` function requirements
    # Current data input shape: (batch_size, time_steps, n_features)
    # Required shape: list of 'time_steps' tensors of shape (batch_size, n_features)

    # input dimension: Tensor("Placeholder_:0", shape=(?, 3, 1), dtype=float32)
    # Permuting batch_size and time_steps
    X = tf.transpose(X, [1, 0, 2])
    # transposed dimension: Tensor("transpose_41:0", shape=(3, ?, 1), dtype=float32)
    # Reshaping to (time_steps*batch_size, n_features)
    X = tf.reshape(X, [-1, n_features])
    # reshaped dimension: Tensor("Reshape_:0", shape=(?, 1), dtype=float32)
    # Split to get a list of 'time_steps' tensors of shape (batch_size, n_features)
    X = tf.split(0, time_steps, X)
    # split dimension: [<tf.Tensor 'split_:0' shape=(?, 1) dtype=float32>,
    #                   <tf.Tensor 'split_:1' shape=(?, 1) dtype=float32>,
    #                   <tf.Tensor 'split_:2' shape=(?, 1) dtype=float32>]

    # LSTM cell
    cell = tf.nn.rnn_cell.LSTMCell(num_neurons)  # or GRUCell(num_neurons)
    output, state = tf.nn.dynamic_rnn(cell=cell, inputs=X, dtype=tf.float32)
    output = tf.transpose(output, [1, 0, 2])
    last = tf.gather(output, int(output.get_shape()[0]) - 1)
    return tf.matmul(last, weights['out']) + biases['out']
Instantiating the LSTM model with pred = lstm_model(x, weights, biases), I get the following:
---> output, state = tf.nn.dynamic_rnn(cell=cell, inputs=X, dtype=tf.float32)
ValueError: Dimension must be 2 but is 3 for 'transpose_42' (op: 'Transpose') with input shapes: [?,1], [3]
1) Do you know where the problem is?
2) Does multiplying the LSTM output by the weights produce a regression?
As mentioned in the comments, the tf.nn.dynamic_rnn(cell, inputs, ...) function expects a three-dimensional tensor* as its inputs argument, where by default the dimensions are interpreted as batch_size x num_timesteps x num_features. (If you pass time_major=True, they are interpreted as num_timesteps x batch_size x num_features.) Therefore the preprocessing you did on the original placeholder is unnecessary, and you can pass the original x value directly to tf.nn.dynamic_rnn().
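Putting that together, here is a minimal sketch of the fixed model reusing the question's own names (my reconstruction, not verified against the asker's exact TensorFlow version). The final matmul with weights['out'] plus biases['out'] is an ordinary linear regression head over the last hidden state, which answers question 2:

def lstm_model(X, weights, biases):
    # X is the (batch_size, time_steps, n_features) placeholder, passed in
    # unchanged: dynamic_rnn consumes the 3-D tensor directly.
    cell = tf.nn.rnn_cell.LSTMCell(num_neurons)
    output, state = tf.nn.dynamic_rnn(cell=cell, inputs=X, dtype=tf.float32)
    # output has shape (batch_size, time_steps, num_neurons); keep only the
    # activations of the last time step for the regression head.
    output = tf.transpose(output, [1, 0, 2])  # (time_steps, batch_size, num_neurons)
    last = tf.gather(output, int(output.get_shape()[0]) - 1)
    # A single matmul + bias maps the last hidden state to one scalar per
    # example, i.e. a linear regression layer.
    return tf.matmul(last, weights['out']) + biases['out']

pred = lstm_model(x, weights, biases)
loss = tf.reduce_mean(tf.square(pred - y))  # e.g. mean squared error for regression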
* Technically, it can also accept complicated nested structures in addition to a single tensor, but the leaf elements must be three-dimensional tensors.**

** Investigating this turned up a bug in the implementation of tf.nn.dynamic_rnn(). In principle, it should be sufficient for the inputs to have at least two dimensions, but the time_major=False path assumes they have exactly three dimensions when it transposes them into time-major form, and it is the error message from that bug that inadvertently showed up in your program. We're working on getting this fixed.
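For completeness, a hedged sketch of the time_major=True variant mentioned above, assuming the same placeholder x and parameters as in the question:

# Transpose once to time-major form and tell dynamic_rnn about it.
x_tm = tf.transpose(x, [1, 0, 2])  # (time_steps, batch_size, n_features)
cell = tf.nn.rnn_cell.LSTMCell(num_neurons)
output, state = tf.nn.dynamic_rnn(cell=cell, inputs=x_tm,
                                  dtype=tf.float32, time_major=True)
# output is time-major as well, so the last step is row time_steps - 1.
last = tf.gather(output, time_steps - 1)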