How many hidden layers does a CNN have?
I am using a CNN for a classification problem. The model architecture code is as follows:
model.add(Conv1D(256, 5, padding='same', input_shape=(40, 1)))
model.add(Activation('relu'))
model.add(Conv1D(128, 5, padding='same'))
model.add(Activation('relu'))
model.add(Dropout(0.1))
model.add(MaxPooling1D(pool_size=8))
model.add(Conv1D(128, 5, padding='same'))
model.add(Activation('relu'))
model.add(Conv1D(128, 5, padding='same'))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(8))
model.add(Activation('softmax'))
opt = keras.optimizers.rmsprop(lr=0.00001, decay=1e-6)
How many hidden layers does this model have? And which are the input and output layers?
The first layer is the input layer and the last layer is the output layer. Everything in between is a hidden layer.
model.add(Conv1D(256, 5, padding='same', input_shape=(40, 1)))  # input layer
model.add(Activation('relu'))              # hidden layer
model.add(Conv1D(128, 5, padding='same'))  # hidden layer
model.add(Activation('relu'))              # hidden layer
model.add(Dropout(0.1))                    # hidden layer
model.add(MaxPooling1D(pool_size=8))       # hidden layer
model.add(Conv1D(128, 5, padding='same'))  # hidden layer
model.add(Activation('relu'))              # hidden layer
model.add(Conv1D(128, 5, padding='same'))  # hidden layer
model.add(Activation('relu'))              # hidden layer
model.add(Flatten())                       # hidden layer
model.add(Dense(8))                        # hidden layer
model.add(Activation('softmax'))           # output layer
opt = keras.optimizers.rmsprop(lr=0.00001, decay=1e-6)
The input layer is the first layer (the one where input_shape is specified). Every call to model.add creates a new layer. You can print your model's layer structure with model.summary(), as shown below.
Model: "sequential_8"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv1d_20 (Conv1D)           (None, 40, 256)           1536
_________________________________________________________________
activation_23 (Activation)   (None, 40, 256)           0
_________________________________________________________________
conv1d_21 (Conv1D)           (None, 40, 128)           163968
_________________________________________________________________
activation_24 (Activation)   (None, 40, 128)           0
_________________________________________________________________
dropout_6 (Dropout)          (None, 40, 128)           0
_________________________________________________________________
max_pooling1d_4 (MaxPooling1 (None, 5, 128)            0
_________________________________________________________________
conv1d_22 (Conv1D)           (None, 5, 128)            82048
_________________________________________________________________
activation_25 (Activation)   (None, 5, 128)            0
_________________________________________________________________
conv1d_23 (Conv1D)           (None, 5, 128)            82048
_________________________________________________________________
activation_26 (Activation)   (None, 5, 128)            0
_________________________________________________________________
flatten_3 (Flatten)          (None, 640)               0
_________________________________________________________________
dense_3 (Dense)              (None, 8)                 5128
_________________________________________________________________
activation_27 (Activation)   (None, 8)                 0
=================================================================
Total params: 334,728
Trainable params: 334,728
Non-trainable params: 0
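By that convention, the Sequential model above has 13 layers in total: the input layer, 11 hidden layers, and the output layer. As a minimal sketch (assuming model is the Sequential model built above), you can count them programmatically, since every model.add() call appended one entry to model.layers:
n_layers = len(model.layers)  # 13 layers in total
n_hidden = n_layers - 2       # 11, excluding the input and output layers
print(n_layers, n_hidden)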
This can be a little confusing, because your actual output layer is the one with 8 nodes and the softmax activation function. I prefer to create the model as follows:
import tensorflow as tf
from tensorflow.keras.layers import Conv1D, Dropout, MaxPooling1D, Flatten, Dense

inputs = tf.keras.Input(shape=(40, 1))
x = Conv1D(256, 5, padding='same', activation='relu')(inputs)
x = Dropout(0.1)(x)
x = MaxPooling1D(pool_size=8)(x)
x = Conv1D(128, 5, padding='same', activation='relu')(x)
x = Conv1D(128, 5, padding='same', activation='relu')(x)
x = Conv1D(128, 5, padding='same', activation='relu')(x)
x = Flatten()(x)
outputs = Dense(8, activation='softmax')(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
It is essentially the same model with the same parameter count (though note that the Dropout and MaxPooling1D here come directly after the first Conv1D), and I think it is clearer which layer is the actual output. See the result of model.summary() below:
Model: "model_6"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_9 (InputLayer)         [(None, 40, 1)]           0
_________________________________________________________________
conv1d_44 (Conv1D)           (None, 40, 256)           1536
_________________________________________________________________
dropout_15 (Dropout)         (None, 40, 256)           0
_________________________________________________________________
max_pooling1d_12 (MaxPooling (None, 5, 256)            0
_________________________________________________________________
conv1d_45 (Conv1D)           (None, 5, 128)            163968
_________________________________________________________________
conv1d_46 (Conv1D)           (None, 5, 128)            82048
_________________________________________________________________
conv1d_47 (Conv1D)           (None, 5, 128)            82048
_________________________________________________________________
flatten_11 (Flatten)         (None, 640)               0
_________________________________________________________________
dense_12 (Dense)             (None, 8)                 5128
=================================================================
Total params: 334,728
Trainable params: 334,728
Non-trainable params: 0
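One side note on the optimizer line from the question: keras.optimizers.rmsprop(lr=..., decay=...) is the legacy standalone-Keras spelling. In current tf.keras the class is RMSprop and the argument is learning_rate, so a minimal sketch of compiling either model (assuming one-hot encoded labels for the 8 classes) would be:
opt = tf.keras.optimizers.RMSprop(learning_rate=0.00001)  # modern spelling of the legacy rmsprop(lr=...)
model.compile(optimizer=opt,
              loss='categorical_crossentropy',  # assumes one-hot labels for the 8 classes
              metrics=['accuracy'])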