How can I overcome my output shape decreasing as I keep max-pooling at every layer?
I am building a 1D Convolutional Neural Network (CNN). From many sources I have understood that the performance of a CNN increases if more layers are added. However, at every pooling layer my output shape is 50% smaller than my input (because I use a pool size of 2). This means I cannot add any more layers once my output has shape 1.

Is there a way to overcome this 'decreasing shape problem', or is it simply a matter of increasing my input shape?
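The shrinking described above can be sketched with the standard output-length formula for a 1D pooling layer (a minimal illustration in plain Python, not framework code; the function name `pooled_length` is my own):

```python
import math

def pooled_length(n, pool_size=2, strides=2, padding="valid"):
    """Output length of a 1D pooling layer (Keras-style formulas)."""
    if padding == "same":
        return math.ceil(n / strides)
    # 'valid': only full windows contribute to the output
    return math.floor((n - pool_size) / strides) + 1

# Starting from an input of length 64, each pool_size=2 layer halves the length:
n, lengths = 64, []
while n > 1:
    n = pooled_length(n)
    lengths.append(n)
print(lengths)  # [32, 16, 8, 4, 2, 1] -- after six pooling layers, no more fit
```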
I am building a 1D Convolutional Neural Network (CNN). From many sources I have understood that performance of the CNN increases if more layers are added.
That is not always true. It generally depends on the data you have and the task you are trying to solve.
Quoting https://www.quora.com/Why-do-we-use-pooling-layer-in-convolutional-neural-networks
Pooling allows features to shift relative to each other resulting in robust matching of features even in the presence of small distortions. There are also many other benefits of doing pooling, like:
Reduces the spatial dimension of the feature map.
And hence also reducing the number of parameters high up the processing hierarchy. This simplifies the overall model complexity.
So depending on the strides, pool size, and padding, you may be reducing the output shape deliberately.

Coming back to your question: if you do not want your shape to shrink, consider using strides=1 and padding='same'.
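Using the same Keras-style output-length formula, you can check that strides=1 with padding='same' leaves the length unchanged, so pooling layers can be stacked without running out of shape (a sketch, not framework code; `pooled_length` is an illustrative helper):

```python
import math

def pooled_length(n, pool_size=2, strides=2, padding="valid"):
    """Output length of a 1D pooling layer (Keras-style formulas)."""
    if padding == "same":
        return math.ceil(n / strides)
    return math.floor((n - pool_size) / strides) + 1

# Default downsampling (pool_size=2, strides=2) halves the length:
print(pooled_length(100))                             # 50
# strides=1 with padding='same' preserves it exactly:
print(pooled_length(100, strides=1, padding="same"))  # 100
```

Note that pooling with strides=1 and padding='same' still smooths local features, but it no longer downsamples, so the spatial-reduction benefit quoted above is lost.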