Check the result of each layer in TensorFlow (ladder network)

I am using the TensorFlow implementation of the ladder network from https://github.com/rinuboney/ladder with a different type of input.

I have only a few input samples (about 1,000) but a very large number of features (about 20,200). If possible, I would like to check which features survive after each layer. Alternatively, is there a way to check the output just before the softmax layer?

This is how you get the activations. Normally you can call sess.run() on an op that you fetch by name.
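For reference, fetching by name looks like this (a minimal sketch; the tensor name 'relu_3:0' and the feed values are hypothetical and depend on how your graph was built):

    graph = tf.get_default_graph()
    #Look the tensor up by the name it was given when the graph was built (hypothetical name)
    act = graph.get_tensor_by_name('relu_3:0')
    #Evaluate it on a batch of data; 'inputs' is assumed to be the input placeholder
    value = sess.run(act, feed_dict={inputs: batch})

But since the layers here are created in a loop, I could not figure out how to fetch them by name, so I wrote a function that returns the activations: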

    def get_activation(inputs, layer):  #E.g. pass L-1 to get the layer just before the softmax
      h = inputs + tf.random_normal(tf.shape(inputs)) * noise_std #Clean input if the noise std is set to zero
      d = {} #Store normalized preactivation z_l, h_l, mean, std

      #Initialize the dictionary that stores the data. Note that labeled and unlabeled data are stored separately
      #The separation is because we still want to know for which examples we have the labels
      d['labeled'] = {'z': {}, 'm': {}, 'v': {}, 'h': {}}
      d['unlabeled'] = {'z': {}, 'm': {}, 'v': {}, 'h': {}}

      #Initialize the lowest layer with h. We do not have a transformation there.
      d['labeled']['z'][0], d['unlabeled']['z'][0] = split_lu(h)

      #Loop through all the layers. Doing forward propagation and updating the values we need to keep track of.
      for l in range(1, L+1): #Max. index: L

          print("Layer %s: ,%s -> %s" % (l,architecture[l-1],architecture[l]))
          #Split the data that was joined before
          d['labeled']['h'][l-1], d['unlabeled']['h'][l-1] = split_lu(h)
          #Calculate the preactivation
          z_pre = tf.matmul(h, weights['W'][l-1])
          #Split into labeled and unlabeled examples
          z_pre_l, z_pre_u = split_lu(z_pre)
          #Calculate the mean and variance of the unlabeled examples; these are needed
          #in the decoder phase when normalizing the reconstructed z
          m, v = tf.nn.moments(z_pre_u, axes=[0])
          m_l, v_l = tf.nn.moments(z_pre_l, axes=[0])  #Labeled statistics (not used here)

          #Normalize with the stored running averages (the clean evaluation path of the ladder script)
          mean = ewma.average(running_mean[l-1])
          var = ewma.average(running_var[l-1])
          z = batch_normalization(z_pre, mean, var)

          if l == L:
            #Last layer: scale and shift z, then apply the softmax
            h = tf.nn.softmax(weights['gamma'][l-1] * (z + weights['beta'][l-1]))
            return h
          elif l == layer:
            #This is the requested layer: return its activation
            h = tf.nn.relu(z + weights['beta'][l-1])
            return h
          else:
            h = tf.nn.relu(z + weights['beta'][l-1])


    #Activation of the layer just before the softmax, named so it can be fetched later
    last_layer_activation = get_activation(inputs, L-1)
    last_layer_activation = tf.identity(last_layer_activation, name='last_layer_activation')
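The tensor can then be evaluated through the name given above (a sketch; 'images' stands in for whatever batch your script feeds, and 'inputs' is assumed to be the input placeholder of the ladder script):

    graph = tf.get_default_graph()
    last_layer = graph.get_tensor_by_name('last_layer_activation:0')
    #Run the graph up to that tensor for one batch of inputs
    activations = sess.run(last_layer, feed_dict={inputs: images})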