Visualize CNN layer or pooling layer in tflearn

Is there any way to visualize the output of a CNN layer or pooling layer in tflearn during training, or even during testing? I have looked at visualization code for tensorflow, but because sessions and feed_dict are involved I keep getting errors like "unhashable numpy.ndarray", even though my images are all the same size, so I decided to ask whether there is a way to visualize the output of any layer. Below is my tflearn layer code:

    X_train, X_test, y_train, y_test=cross_validation.train_test_split(data,labels,test_size=0.1)

    tf.reset_default_graph()
    convnet=input_data(shape=[None,50,50,3],name='input')
    convnet=conv_2d(convnet,32,5,activation='relu')
    convnet=max_pool_2d(convnet,5)
    convnet=conv_2d(convnet,64,5,activation='relu')
    convnet=max_pool_2d(convnet,5)

    convnet=conv_2d(convnet,32,5,activation='relu')
    convnet=max_pool_2d(convnet,5)

    convnet=fully_connected(convnet,128,activation='relu')
    convnet=dropout(convnet,0.4)
    convnet=fully_connected(convnet,6,activation='softmax')
    convnet=regression(convnet,optimizer='adam',learning_rate=0.005,loss='categorical_crossentropy',name='MyClassifier')
    model=tflearn.DNN(convnet,tensorboard_dir='log',tensorboard_verbose=0)
    model.fit(X_train,y_train, n_epoch=20,validation_set=(X_test,y_test), snapshot_step=20,show_metric=True,run_id='MyClassifier')
    print("Saving the model")
    model.save('model.tflearn')

How can I visualize the output of any layer while training or testing still works normally? By output I mean the transformed images that show detected edges or other low-level features. Thanks.

As mentioned here, you can view the output produced by an intermediate layer simply by defining a new model whose output is the layer you want to observe. First, declare your original model (but keep references to the intermediate layers you want to observe):

convnet = input_data(shape=[None, 50, 50, 3], name='input')
convnet = conv_2d(convnet, 32, 5, activation='relu')
max_0 = max_pool_2d(convnet, 5)
convnet = conv_2d(max_0, 64, 5, activation='relu')
max_1 = max_pool_2d(convnet, 5)
...
convnet = regression(...)
model = tflearn.DNN(...)
model.fit(...)

Now simply create a model for each of those layers and predict on the input data:

observed = [max_0, max_1, max_2]
observers = [tflearn.DNN(v, session=model.session) for v in observed]
outputs = [m.predict(X_test) for m in observers]
print([d.shape for d in outputs])
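Each entry in `outputs` is a 4-D array of shape (batch, height, width, channels). To look at a single feature map, slice out one image and one kernel and rescale it to [0, 1] before displaying it. A minimal numpy sketch, using a random array as a stand-in for a real activation tensor:

```python
import numpy as np

# stand-in for one predicted activation tensor of shape (batch, h, w, channels)
activation = np.random.rand(2, 10, 10, 32).astype(np.float32)

# first test image, first kernel
fmap = activation[0, :, :, 0]

# rescale to [0, 1] so imshow renders the full contrast range
fmap = (fmap - fmap.min()) / (fmap.max() - fmap.min() + 1e-8)
print(fmap.shape)  # (10, 10)
# plt.imshow(fmap, cmap='gray'); plt.show() would display it
```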

It prints the following shapes for the evaluated tensors of your model:

[(2, 10, 10, 32), (2, 2, 2, 64), (2, 1, 1, 32)]
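The leading 2 is just the number of samples in `X_test` here; the spatial sizes follow from `conv_2d`'s default `'same'` padding (which preserves height and width) and `max_pool_2d` with kernel size 5 (whose stride defaults to the kernel size). A quick sanity check of that arithmetic, assuming those tflearn defaults:

```python
import math

def pooled(size, k):
    # 'same'-padded pooling with stride == kernel size: ceil(size / k)
    return math.ceil(size / k)

size = 50  # input images are 50x50
for _ in range(3):        # three conv + pool stages
    # conv_2d with padding='same' keeps the spatial size; only pooling shrinks it
    size = pooled(size, 5)
    print(size)
# prints 10, then 2, then 1 -- matching (10, 10), (2, 2), (1, 1) above
```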

With this, you will be able to view the outputs during testing. As for training, perhaps you could use a callback?

import matplotlib.pyplot as plt

class PlottingCallback(tflearn.callbacks.Callback):
    def __init__(self, model, x,
                 layers_to_observe=(),
                 kernels=10,
                 inputs=1):
        self.model = model
        self.x = x
        self.kernels = kernels
        self.inputs = inputs
        # share the trained model's session so the observers see its weights
        self.observers = [tflearn.DNN(l, session=model.session)
                          for l in layers_to_observe]

    def on_epoch_end(self, training_state):
        outputs = [o.predict(self.x) for o in self.observers]

        for i in range(self.inputs):
            plt.figure(frameon=False)
            plt.subplots_adjust(wspace=0.1, hspace=0.1)
            ix = 1
            for o in outputs:
                for kernel in range(self.kernels):
                    plt.subplot(len(outputs), self.kernels, ix)
                    plt.imshow(o[i, :, :, kernel])
                    plt.axis('off')
                    ix += 1
            plt.savefig('outputs-for-image:%i-at-epoch:%i.png'
                        % (i, training_state.epoch))
            plt.close()  # free the figure so memory does not grow each epoch

model.fit(X_train, y_train,
          ...
          callbacks=[PlottingCallback(model, X_test, (max_0, max_1, max_2))])

This will save images similar to this one to your disk at each epoch: