Reshaping tensor after max pooling ValueError: Shapes are not compatible
I am building a CNN for my own data, based on this example.
Basically, my data has 3640 features; I have a convolutional layer followed by a pooling layer that pools every other feature, so I end up with dimensions (?, 1, 1819, 1), because there are 3638 features after the convolutional layer and 3638 / 2 == 1819.
When I try to reshape my data after pooling into the form [n_samples, n_features]:
print("pool_shape", pool_shape) #pool (?, 1, 1819, 10)
print("y_shape", y_shape) #y (?,)
pool.set_shape([pool_shape[0], pool_shape[2]*pool_shape[3]])
y.set_shape([y_shape[0], 1])
I get an error:
ValueError: Shapes (?, 1, 1819, 10) and (?, 18190) are not compatible
My code:
import tensorflow as tf
from tensorflow.contrib import learn

N_FEATURES = 140*26
N_FILTERS = 1
WINDOW_SIZE = 3

def my_conv_model(x, y):
    x = tf.cast(x, tf.float32)
    y = tf.cast(y, tf.float32)
    print("x ", x.get_shape())
    print("y ", y.get_shape())

    # to form a 4d tensor of shape batch_size x 1 x N_FEATURES x 1
    x = tf.reshape(x, [-1, 1, N_FEATURES, 1])

    # this will give you sliding window of 1 x WINDOW_SIZE convolution.
    features = tf.contrib.layers.convolution2d(inputs=x,
                                               num_outputs=N_FILTERS,
                                               kernel_size=[1, WINDOW_SIZE],
                                               padding='VALID')
    print("features ", features.get_shape())  # features (?, 1, 3638, 10)

    # Max pooling across output of Convolution+Relu.
    pool = tf.nn.max_pool(features, ksize=[1, 1, 2, 1],
                          strides=[1, 1, 2, 1], padding='SAME')

    pool_shape = pool.get_shape()
    y_shape = y.get_shape()
    print("pool_shape", pool_shape)  # pool (?, 1, 1819, 10)
    print("y_shape", y_shape)        # y (?,)

    ### here comes the error ###
    pool.set_shape([pool_shape[0], pool_shape[2]*pool_shape[3]])
    y.set_shape([y_shape[0], 1])

    pool_shape = pool.get_shape()
    y_shape = y.get_shape()
    print("pool_shape", pool_shape)  # pool (?, 1, 1819, 10)
    print("y_shape", y_shape)        # y (?,)

    prediction, loss = learn.models.logistic_regression(pool, y)
    return prediction, loss
How can I reshape the data to get some meaningful representation and then pass it to the logistic regression layer?
This looks like a confusion between the Tensor.set_shape() method and the tf.reshape() operator. In this case, you should use tf.reshape(), because you are changing the shape of the pool and y tensors:
The tf.reshape(tensor, shape) operator takes a tensor of any shape and returns a tensor with the given shape, as long as the two have the same number of elements. This operator should be used to change the shape of the input tensor.
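As a minimal sketch of that point (my own illustration, assuming a TensorFlow 1.x-style graph with a placeholder input), tf.reshape() only checks that the element count matches, so a (?, 1, 1819, 10) tensor can be flattened to (?, 18190):

import tensorflow as tf

# Sketch: tf.reshape() only requires that the total number of elements match;
# the leading -1 lets the batch dimension remain unknown.
t = tf.placeholder(tf.float32, shape=[None, 1, 1819, 10])
flat = tf.reshape(t, [-1, 1819 * 10])
print(flat.get_shape())  # (?, 18190)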
The tensor.set_shape(shape) method takes a tensor that might have a partially known or unknown shape, and asserts to TensorFlow that it actually has the given shape. This method should be used to provide more information about the shape of a particular tensor. It can be used, for example, when you take the output of an operator that has a data-dependent output shape (such as tf.image.decode_jpeg()) and assert that it has a static shape (e.g., based on knowledge about the size of the images in your dataset).
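For contrast, here is a small sketch of the decode_jpeg case mentioned above (the 128x128 image size is an assumption chosen for illustration): set_shape() does not move or reshape any data, it only merges extra static shape information into the graph.

import tensorflow as tf

# Sketch: decode_jpeg has a data-dependent output shape, so its static
# shape is only partially known until we assert it with set_shape().
image_bytes = tf.placeholder(tf.string)
image = tf.image.decode_jpeg(image_bytes, channels=3)
print(image.get_shape())        # (?, ?, 3)
image.set_shape([128, 128, 3])  # assert: every image in the dataset is 128x128 RGB
print(image.get_shape())        # (128, 128, 3)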
In your program, you should replace the calls to set_shape() with something like the following:
pool_shape = tf.shape(pool)
pool = tf.reshape(pool, [pool_shape[0], pool_shape[2] * pool_shape[3]])
y_shape = tf.shape(y)
y = tf.reshape(y, [y_shape[0], 1])
# Or, more straightforwardly:
y = tf.expand_dims(y, 1)
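One possible follow-up (my own suggestion, not part of the answer above): because the pooled width (1819) and N_FILTERS are known when the graph is built, you could also flatten with a -1 batch dimension, which keeps the feature dimension statically known for the logistic regression layer:

# Sketch: flatten using the statically known width, leaving the batch size as -1.
pool = tf.reshape(pool, [-1, 1819 * N_FILTERS])  # static shape (?, 1819 * N_FILTERS)
y = tf.expand_dims(y, 1)                         # static shape (?, 1)
prediction, loss = learn.models.logistic_regression(pool, y)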