
Tensorflow Convolution Neural Network with different sized images

I am trying to create a deep CNN that can classify each pixel in an image. I am replicating the architecture from the image below, taken from this paper. The paper mentions that deconvolutions are used to make input of any size possible, as can be seen in the image below.

Github Repository

Currently, I have hard-coded my model to accept images of size 32x32x7, but I would like to accept input of any size. What changes do I need to make to my code to accept variable-sized input?

 x = tf.placeholder(tf.float32, shape=[None, 32*32*7])
 y_ = tf.placeholder(tf.float32, shape=[None, 32*32*7, 3])
 ...
 DeConnv1 = tf.nn.conv3d_transpose(layer1, filter = w, output_shape = [1,32,32,7,1], strides = [1,2,2,2,1], padding = 'SAME')
 ...
 final = tf.reshape(final, [1, 32*32*7])
 W_final = weight_variable([32*32*7,32*32*7,3])
 b_final = bias_variable([32*32*7,3])
 final_conv = tf.tensordot(final, W_final, axes=[[1], [1]]) + b_final

In theory it's possible. You need to set the image size of the input and label placeholders to None and let the graph infer the image size dynamically from the input data.

However, you have to be careful when defining the graph. You need to use tf.shape instead of tf.get_shape(). The former infers the shape dynamically, only when you session.run; the latter returns the shape available at graph-definition time. But when the input size is set to None, the latter does not give you the real shape (it may just return None).
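A minimal sketch of the difference (the answer targets TensorFlow 1.x; the small `compat.v1` shim is my own addition so the snippet also runs under TF2):

```python
import numpy as np
import tensorflow as tf

# Shim: under TF2, fall back to the v1 compatibility API
# (the original answer assumes TF1, where tf.placeholder exists directly).
if not hasattr(tf, "placeholder"):
    tf = tf.compat.v1
    tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=[None, None, None, 1])

# Static shape, known at graph-definition time: unknown dims stay unknown,
# printed as '?' (TF1) or 'None' (TF2 compat)
print(x.get_shape())

# Dynamic shape: a Tensor whose value is only known at session.run time
dyn_shape = tf.shape(x)

with tf.Session() as sess:
    print(sess.run(dyn_shape, feed_dict={x: np.zeros([2, 8, 8, 1])}))  # [2 8 8 1]
```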

To make things more complicated, if you use high-level functions such as tf.layers.conv2d, these sometimes don't like tf.shape, because they seem to assume that the shape information is available during graph construction.

I wish I had a better working example to illustrate the points above. I'll leave this answer as a placeholder and will come back to add more if I get a chance.

Dynamic placeholders

Tensorflow allows multiple dynamic (a.k.a. None) dimensions in placeholders. The engine won't be able to ensure correctness while the graph is built, so the client is responsible for feeding the correct input, but it provides a lot of flexibility.

So I'm going from this...

x = tf.placeholder(tf.float32, shape=[None, N*M*P])
y_ = tf.placeholder(tf.float32, shape=[None, N*M*P, 3])
...
x_image = tf.reshape(x, [-1, N, M, P, 1])

... to this:

# Nearly all dimensions are dynamic
x_image = tf.placeholder(tf.float32, shape=[None, None, None, None, 1])
label = tf.placeholder(tf.float32, shape=[None, None, 3])

Since you're going to reshape the input to 5D anyway, why not use 5D in x_image right from the start. At this point, the second dimension of label is arbitrary, but we promise tensorflow that it will match x_image.

Dynamic shapes in deconvolution

Next, the nice thing about tf.nn.conv3d_transpose is that its output shape can be dynamic. So instead of this:

# Hard-coded output shape
DeConnv1 = tf.nn.conv3d_transpose(layer1, w, output_shape=[1,32,32,7,1], ...)

... you can do this:

# Dynamic output shape
DeConnv1 = tf.nn.conv3d_transpose(layer1, w, output_shape=tf.shape(x_image), ...)

This way the transpose convolution can be applied to any image, and the result takes the shape of whatever x_image is actually passed in at runtime.

Note that the static shape of x_image is (?, ?, ?, ?, 1).

All-convolutional network

The final and most important piece of the puzzle is making the entire network convolutional, and that includes the final dense layer too. A dense layer must define its dimensions statically, which forces the whole neural network to fix the input image dimensions.

Luckily for us, Springenberg et al describe a way to replace an FC layer with a CONV layer in the "Striving for Simplicity: The All Convolutional Net" paper. I'm going to use a convolution with 3 1x1x1 filters (see also this question):

final_conv = conv3d_s1(final, weight_variable([1, 1, 1, 1, 3]))
y = tf.reshape(final_conv, [-1, 3])

If we make sure that final has the same dimensions as DeConnv1 (and the others), it'll make y exactly the shape we want: [-1, N * M * P, 3].
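To see why a 1x1x1 convolution can stand in for the dense layer, here is a small numpy check (my own illustration, not from the paper): contracting only the channel axis at every voxel gives exactly the same numbers as applying one shared dense layer to each voxel independently.

```python
import numpy as np

# A 1x1x1 convolution with C_out filters over an [N, M, P, C_in] volume
# is the same as one dense layer applied to every voxel independently.
N, M, P, C_in, C_out = 4, 4, 2, 16, 3
rng = np.random.RandomState(0)
volume = rng.randn(N, M, P, C_in)
W = rng.randn(C_in, C_out)   # the "1x1x1 kernel": no spatial extent at all

# "1x1x1 conv": contract the channel axis at each spatial position
conv_out = np.tensordot(volume, W, axes=([3], [0]))   # [N, M, P, C_out]

# Equivalent dense layer applied per voxel
dense_out = volume.reshape(-1, C_in) @ W              # [N*M*P, C_out]

assert np.allclose(conv_out.reshape(-1, C_out), dense_out)
```

Because the kernel has no spatial extent, nothing in this operation depends on N, M or P, which is exactly why the final layer no longer pins the input size.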

Putting it all together

Your network is pretty big, but all deconvolutions basically follow the same pattern, so I've simplified my proof-of-concept code down to just one deconvolution. The goal is just to show what kind of network is able to handle images of arbitrary size. A final remark: image dimensions can vary between batches, but within one batch they have to be the same.
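If your dataset mixes sizes, one simple way to honor the "same size within a batch" constraint is to bucket images by shape before stacking them (a hypothetical helper of my own, not part of the network code below):

```python
from collections import defaultdict
import numpy as np

def bucket_by_size(images):
    """Group variable-sized images into batches of identical spatial size."""
    buckets = defaultdict(list)
    for img in images:
        buckets[img.shape].append(img)
    # Each bucket stacks cleanly because all its images share one shape
    return [np.stack(group) for group in buckets.values()]

images = [np.zeros((32, 32, 7, 1)),
          np.zeros((16, 16, 3, 1)),
          np.zeros((32, 32, 7, 1))]
for batch in bucket_by_size(images):
    print(batch.shape)   # (2, 32, 32, 7, 1) and (1, 16, 16, 3, 1)
```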

The full code:

import numpy as np
import tensorflow as tf

sess = tf.InteractiveSession()

def conv3d_dilation(tempX, tempFilter):
  return tf.layers.conv3d(tempX, filters=tempFilter, kernel_size=[3, 3, 1], strides=1, padding='SAME', dilation_rate=2)

def conv3d(tempX, tempW):
  return tf.nn.conv3d(tempX, tempW, strides=[1, 2, 2, 2, 1], padding='SAME')

def conv3d_s1(tempX, tempW):
  return tf.nn.conv3d(tempX, tempW, strides=[1, 1, 1, 1, 1], padding='SAME')

def weight_variable(shape):
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)

def bias_variable(shape):
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)

def max_pool_3x3(x):
  return tf.nn.max_pool3d(x, ksize=[1, 3, 3, 3, 1], strides=[1, 2, 2, 2, 1], padding='SAME')

x_image = tf.placeholder(tf.float32, shape=[None, None, None, None, 1])
label = tf.placeholder(tf.float32, shape=[None, None, 3])

W_conv1 = weight_variable([3, 3, 1, 1, 32])
h_conv1 = conv3d(x_image, W_conv1)
# second convolution
W_conv2 = weight_variable([3, 3, 4, 32, 64])
h_conv2 = conv3d_s1(h_conv1, W_conv2)
# third convolution path 1
W_conv3_A = weight_variable([1, 1, 1, 64, 64])
h_conv3_A = conv3d_s1(h_conv2, W_conv3_A)
# third convolution path 2
W_conv3_B = weight_variable([1, 1, 1, 64, 64])
h_conv3_B = conv3d_s1(h_conv2, W_conv3_B)
# fourth convolution path 1
W_conv4_A = weight_variable([3, 3, 1, 64, 96])
h_conv4_A = conv3d_s1(h_conv3_A, W_conv4_A)
# fourth convolution path 2
W_conv4_B = weight_variable([1, 7, 1, 64, 64])
h_conv4_B = conv3d_s1(h_conv3_B, W_conv4_B)
# fifth convolution path 2
W_conv5_B = weight_variable([1, 7, 1, 64, 64])
h_conv5_B = conv3d_s1(h_conv4_B, W_conv5_B)
# sixth convolution path 2
W_conv6_B = weight_variable([3, 3, 1, 64, 96])
h_conv6_B = conv3d_s1(h_conv5_B, W_conv6_B)
# concatenation
layer1 = tf.concat([h_conv4_A, h_conv6_B], 4)
w = tf.Variable(tf.constant(1., shape=[2, 2, 4, 1, 192]))
DeConnv1 = tf.nn.conv3d_transpose(layer1, filter=w, output_shape=tf.shape(x_image), strides=[1, 2, 2, 2, 1], padding='SAME')

final = DeConnv1
final_conv = conv3d_s1(final, weight_variable([1, 1, 1, 1, 3]))
y = tf.reshape(final_conv, [-1, 3])
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=label, logits=y))

print('x_image:', x_image)
print('DeConnv1:', DeConnv1)
print('final_conv:', final_conv)

def try_image(N, M, P, B=1):
  batch_x = np.random.normal(size=[B, N, M, P, 1])
  batch_y = np.ones([B, N * M * P, 3]) / 3.0

  deconv_val, final_conv_val, loss = sess.run([DeConnv1, final_conv, cross_entropy],
                                              feed_dict={x_image: batch_x, label: batch_y})
  print(deconv_val.shape)
  print(final_conv_val.shape)
  print(loss)
  print()

tf.global_variables_initializer().run()
try_image(32, 32, 7)
try_image(16, 16, 3)
try_image(16, 16, 3, 2)