A reusable TensorFlow convolutional network

I want to reuse the code from the TensorFlow "MNIST for Pros" CNN example. My images are 388px x 191px, with only 2 output classes. The original code can be found here. I tried to reuse this code by changing only the input and output layers, as shown below:

Input layer

# 388 * 191 = 74108 pixels per flattened input image
x = tf.placeholder("float", shape=[None, 74108])

# two output classes
y_ = tf.placeholder("float", shape=[None, 2])

x_image = tf.reshape(x, [-1, 388, 191, 1])

Output layer

W_fc2 = weight_variable([1024, 2])

b_fc2 = bias_variable([2])
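
For reference, the helper functions these snippets rely on (weight_variable, bias_variable, conv2d, max_pool_2x2) come from the tutorial; they look roughly like this:

def weight_variable(shape):
    # small truncated-normal initialization, as in the tutorial
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    # stride 1, SAME padding: output keeps the input height/width
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # 2x2 pooling with stride 2: roughly halves height and width
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')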

Running the modified code produces an opaque stack trace:

W tensorflow/core/common_runtime/executor.cc:1027] 0x2136510 Compute status: Invalid argument: Input has 14005248 values, which isn't divisible by 3136
     [[Node: Reshape_4 = Reshape[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](MaxPool_5, Reshape_4/shape)]]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1267, in run
    _run_using_default_session(self, feed_dict, self.graph, session)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2763, in _run_using_default_session
    session.run(operation, feed_dict)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 345, in run
    results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 419, in _do_run
    e.code)
tensorflow.python.framework.errors.InvalidArgumentError: Input has 14005248 values, which isn't divisible by 3136
     [[Node: Reshape_4 = Reshape[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](MaxPool_5, Reshape_4/shape)]]
Caused by op u'Reshape_4', defined at:
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 554, in reshape
    name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 633, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1710, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 988, in __init__
    self._traceback = _extract_stack()
tensorflow.python.framework.errors.InvalidArgumentError: Input has 14005248 values, which isn't divisible by 3136
 [[Node: Reshape_4 = Reshape[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](MaxPool_5, Reshape_4/shape)]]

As an aside, the way you are executing the code makes it impossible to see which line actually caused the problem. Save it to a file and run it with python <file>; as it is, every frame in the traceback just shows up as:

  File "<stdin>", line 1, in <module>

The real answer, though, is that you did not change the sizes of the convolutional and pooling layers. When you previously ran 28x28 images through the network, they ended up shrunk down to a 7x7x(convolutional_depth) feature map. Now you are running much larger images through it, so after the convolution and 2x2 max-pooling layers you have a much larger tensor, but you are still reshaping it for a 7x7x64 feature map (7 * 7 * 64 = 3136, exactly the number the error message complains about):

h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])

# the reshape that fails: 7 * 7 * 64 = 3136 no longer matches h_pool2's size
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])

The output of h_pool2 is much larger for larger images. You need to shrink it down more, probably with additional convolutional and max-pooling layers. Alternatively, you can increase the size of W_fc1 to match the input that actually reaches it. The image runs through two 2x2 max pools, each of which halves the size in the x and y dimensions (rounding up, since the tutorial's pooling uses SAME padding): 28x28x1 --> 14x14x32 --> 7x7x64. So your images go from 388 x 191 --> 194 x 96 --> 97 x 48.
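
Here is a minimal sketch of the second option, assuming the tutorial's SAME-padded 2x2 pooling (odd dimensions round up) and its 64 filters in the second convolution. As a sanity check under those assumptions, 97 * 48 * 64 = 297,984, and the 14,005,248 values in the error message are exactly 47 * 297,984, while they are not a multiple of 3136:

import math

# each SAME-padded 2x2 max pool halves height/width, rounding up
h = int(math.ceil(math.ceil(388 / 2.0) / 2.0))   # 388 -> 194 -> 97
w = int(math.ceil(math.ceil(191 / 2.0) / 2.0))   # 191 -> 96 -> 48
flat = h * w * 64                                # 97 * 48 * 64 = 297984

# size the first fully connected layer from the computed shape
W_fc1 = weight_variable([flat, 1024])
b_fc1 = bias_variable([1024])

h_pool2_flat = tf.reshape(h_pool2, [-1, flat])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)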

As a warning, a fully connected layer fed a 97 x 48 x 64 feature map (about 298,000 inputs once the 64 channels are counted) will be very slow, which is another reason to prefer shrinking the feature map further with extra layers.
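
If you go the extra-layers route instead, a rough sketch of one additional convolution + pooling stage might look like this (the 5x5 kernel and 64 output channels here are illustrative choices, not from the tutorial):

# a hypothetical third conv + pool stage to shrink the feature map further
W_conv3 = weight_variable([5, 5, 64, 64])
b_conv3 = bias_variable([64])
h_conv3 = tf.nn.relu(conv2d(h_pool2, W_conv3) + b_conv3)
h_pool3 = max_pool_2x2(h_conv3)                  # 97 x 48 -> 49 x 24

flat = 49 * 24 * 64                              # 75264 inputs - much smaller
W_fc1 = weight_variable([flat, 1024])
b_fc1 = bias_variable([1024])
h_pool3_flat = tf.reshape(h_pool3, [-1, flat])
h_fc1 = tf.nn.relu(tf.matmul(h_pool3_flat, W_fc1) + b_fc1)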