TensorFlow error: TensorShape() must have the same rank

def compileActivation(self, net, layerNum):
    # Input to the first layer is the network input; deeper layers take the
    # previous layer's activations.
    variable = net.x if layerNum == 0 else net.varArrayA[layerNum - 1]

    # z = W . (x * dropout_mask[:, None]) + b[:, None]
    dropped = variable * (tf.expand_dims(net.dropOutVectors[layerNum], 1) if self.dropout else 1.0)
    z = tf.matmul(net.varWeights[layerNum]['w'], dropped) + tf.expand_dims(net.varWeights[layerNum]['b'], 1)

    a = self.activation(z, self.pool_size)
    net.varArrayA.append(a)

I am running an activation function that computes z and passes it to a sigmoid activation. When I try to execute the function above, I get the following error:

ValueError: Shapes TensorShape([Dimension(-2)]) and TensorShape([Dimension(None), Dimension(None)]) must have the same rank

The Theano equivalent used to compute z works fine:

z = T.dot(net.varWeights[layerNum]['w'], variable * (net.dropOutVectors[layerNum].dimshuffle(0, 'x') if self.dropout else 1.0)) + net.varWeights[layerNum]['b'].dimshuffle(0, 'x')
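
For comparison, here is a stripped-down sketch of the same computation (the sizes and names are made up for illustration, using the same TF 1.x-style API as above). It runs cleanly when every tensor has a fully specified rank, which suggests the ValueError comes from one of the inputs (for example the dropout vector or the bias) ending up with an unknown or different rank:

import numpy as np
import tensorflow as tf

# Made-up sizes for illustration only
n_out, n_in, batch = 4, 3, 5

w = tf.placeholder(tf.float32, shape=(n_out, n_in))   # weight matrix, rank 2
x = tf.placeholder(tf.float32, shape=(n_in, batch))   # layer input, rank 2
drop = tf.placeholder(tf.float32, shape=(n_in,))      # dropout vector, rank 1
b = tf.placeholder(tf.float32, shape=(n_out,))        # bias vector, rank 1

# tf.expand_dims(v, 1) plays the role of Theano's dimshuffle(0, 'x'):
# it turns a rank-1 (n,) vector into a rank-2 (n, 1) column so it broadcasts.
z = tf.matmul(w, x * tf.expand_dims(drop, 1)) + tf.expand_dims(b, 1)

with tf.Session() as sess:
    out = sess.run(z, feed_dict={
        w: np.ones((n_out, n_in), np.float32),
        x: np.ones((n_in, batch), np.float32),
        drop: np.ones(n_in, np.float32),
        b: np.zeros(n_out, np.float32),
    })
    print(out.shape)  # (4, 5)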

Mihir,

When I ran into this problem, it was because the placeholders in my feed dictionary were not the right size. You should also know how to run the graph in a session: tf.Session.run(fetches, feed_dict=None)

Here is the code where I create the placeholders:
# Note this place holder is for the input data feed-dict definition
input_placeholder = tf.placeholder(tf.float32, shape=(batch_size, FLAGS.InputLayer))
# Not sure yet what this will be used for. 
desired_output_placeholder = tf.placeholder(tf.float32, shape=(batch_size, FLAGS.OutputLayer))
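
As a quick sanity check before calling sess.run, you can compare each batch against the static shape of its placeholder (check_feed_shapes is a hypothetical helper of my own, not part of any TensorFlow API):

def check_feed_shapes(feed_dict):
    # Hypothetical helper: assert that every fed array matches its placeholder's
    # declared static shape (None dimensions are allowed to vary).
    for placeholder, value in feed_dict.items():
        expected = placeholder.get_shape().as_list()
        actual = list(value.shape)
        assert len(expected) == len(actual), (
            'rank mismatch for %s: %s vs %s' % (placeholder.name, expected, actual))
        for want, got in zip(expected, actual):
            assert want is None or want == got, (
                'size mismatch for %s: %s vs %s' % (placeholder.name, expected, actual))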

Here is my function for filling the feed dictionary:

def fill_feed_dict(data_sets_train, input_pl, output_pl):
  ti_feed, dto_feed = data_sets_train.next_batch(FLAGS.batch_size)

  feed_dict = {
    input_pl: ti_feed,
    output_pl: dto_feed
  }
  return feed_dict

Later I do this:

# Fill a feed dictionary with the actual set of images and labels
# for this particular training step.
feed_dict = fill_feed_dict(data_sets.train, input_placeholder, desired_output_placeholder)

Then, to run the session and fetch the output, I have this line:

_, l = sess.run([train_op, loss], feed_dict=feed_dict)
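
Put together, one training step looks roughly like this (a sketch under the assumptions above; train_op, loss, data_sets, and the flag FLAGS.max_steps come from your own setup):

for step in range(FLAGS.max_steps):  # FLAGS.max_steps is an assumed flag name
    feed_dict = fill_feed_dict(data_sets.train, input_placeholder, desired_output_placeholder)
    _, l = sess.run([train_op, loss], feed_dict=feed_dict)
    if step % 100 == 0:
        print('step %d, loss = %f' % (step, l))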