Visualize output of each layer in theano Convolutional MLP
I am reading the Convolutional Neural Networks tutorial. I want to visualize the output of each layer once the model is trained. For example, in the function "evaluate_lenet5" I want to pass an instance (which is an image) to the network and see the output of each layer, as well as the class that the trained neural network assigns to the input. I thought it might be as easy as taking the dot product of the image with each layer's weight vector, but that did not work at all.
I have an object for each layer:
# Reshape matrix of rasterized images of shape (batch_size, 28 * 28)
# to a 4D tensor, compatible with our LeNetConvPoolLayer
# (28, 28) is the size of MNIST images.
layer0_input = x.reshape((batch_size, 1, 28, 28))

# Construct the first convolutional pooling layer:
# filtering reduces the image size to (28-5+1, 28-5+1) = (24, 24)
# maxpooling reduces this further to (24/2, 24/2) = (12, 12)
# 4D output tensor is thus of shape (batch_size, nkerns[0], 12, 12)
layer0 = LeNetConvPoolLayer(
    rng,
    input=layer0_input,
    image_shape=(batch_size, 1, 28, 28),
    filter_shape=(nkerns[0], 1, 5, 5),
    poolsize=(2, 2)
)

# Construct the second convolutional pooling layer
# filtering reduces the image size to (12-5+1, 12-5+1) = (8, 8)
# maxpooling reduces this further to (8/2, 8/2) = (4, 4)
# 4D output tensor is thus of shape (batch_size, nkerns[1], 4, 4)
layer1 = LeNetConvPoolLayer(
    rng,
    input=layer0.output,
    image_shape=(batch_size, nkerns[0], 12, 12),
    filter_shape=(nkerns[1], nkerns[0], 5, 5),
    poolsize=(2, 2)
)

# the HiddenLayer being fully-connected, it operates on 2D matrices of
# shape (batch_size, num_pixels) (i.e. matrix of rasterized images).
# This will generate a matrix of shape (batch_size, nkerns[1] * 4 * 4),
# or (500, 50 * 4 * 4) = (500, 800) with the default values.
layer2_input = layer1.output.flatten(2)

# construct a fully-connected sigmoidal layer
layer2 = HiddenLayer(
    rng,
    input=layer2_input,
    n_in=nkerns[1] * 4 * 4,
    n_out=500,
    activation=T.tanh
)

# classify the values of the fully-connected sigmoidal layer
layer3 = LogisticRegression(input=layer2.output, n_in=500, n_out=10)
So can you suggest a way to visualize how an example image is processed, step by step, after the neural network is trained?
It's not that hard.
If you use the same LeNetConvPoolLayer class definition from the theano deep learning tutorial, then you just need to compile a function with x as the input and [LayerObject].output as the output (where LayerObject can be any layer object, e.g. layer0, layer1, etc., whichever layer you want to visualize):

vis_layer1 = function([x], [layer1.output])

Pass in one (or many) samples (exactly the way you fed the input tensor during training) and you will get the output of the particular layer your function was compiled for.
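As a concrete illustration, here is a minimal sketch of how this might look inside evaluate_lenet5 after training. It assumes the tutorial's symbolic input x, the shared variable test_set_x, and batch_size are in scope; the names vis_layer1, sample and feature_maps are my own:

from theano import function

# Compile the visualization function: symbolic input x in, layer1's output out.
vis_layer1 = function([x], [layer1.output])

# One minibatch of test images, shaped exactly as during training:
# a (batch_size, 28 * 28) matrix of rasterized MNIST images.
sample = test_set_x.get_value(borrow=True)[:batch_size]

# The compiled function returns a list holding one array of shape
# (batch_size, nkerns[1], 4, 4): one 4x4 feature map per layer1 kernel
# for every image in the minibatch.
feature_maps = vis_layer1(sample)[0]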
Note: this way you will get the outputs in exactly the same shape the model used during computation. However, you can reshape them as you wish by reshaping the output variable, e.g. layer1.output.flatten(n).
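To actually look at the feature maps, you could plot each map of the first sample as a small grayscale image. A minimal sketch using matplotlib (matplotlib is my own addition, not part of the tutorial; feature_maps is the array obtained in the sketch above):

import math
import matplotlib.pyplot as plt

n_maps = feature_maps.shape[1]               # nkerns[1] maps per image
n_cols = 10
n_rows = int(math.ceil(n_maps / float(n_cols)))

# One subplot per kernel, showing the 4x4 activation of the first image.
for k in range(n_maps):
    plt.subplot(n_rows, n_cols, k + 1)
    plt.imshow(feature_maps[0, k], cmap='gray', interpolation='nearest')
    plt.axis('off')
plt.show()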