Building a CNN model but getting 'Tensor-typed variable initializers must either be wrapped in an init_scope or callable'

I want to use he_normal as the kernel initializer when building a CNN model, but I run into this error and can't find a solution. I've searched as much as I can but still can't resolve it. Any suggestions would be greatly appreciated!

initializer = initializers.he_uniform()
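For reference, a likely cause of this error (an assumption from the error message, since the full traceback isn't shown): Keras needs the initializer to be *callable* so it can invoke it when the layer's variables are created. If a concrete Tensor ends up in `kernel_initializer` instead, e.g. by calling the initializer on a shape ahead of time, you get exactly this complaint. A minimal sketch of the two forms:

```python
import tensorflow as tf
from tensorflow.keras import initializers, layers

# Wrong (do NOT do this): calling the initializer on a shape immediately
# produces a concrete tf.Tensor, which Keras cannot re-invoke later:
# bad_init = initializers.he_uniform()(shape=(3, 3, 3, 32))  # -> tf.Tensor

# Right: pass the initializer object itself (it is callable) ...
good_init = initializers.he_uniform()
conv = layers.Conv2D(32, (3, 3), kernel_initializer=good_init)

# ... or simply its registered string name:
conv_by_name = layers.Conv2D(32, (3, 3), kernel_initializer='he_uniform')
```

Either form leaves Keras with something it can call at variable-creation time.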

model = Sequential()
# 1st layer: convolution
# input size: pixel*channel = (224*224)*3
# output size: pixel*filter_num = (224*224)*32
# parameters: kernel_size*channel*filter_num + biases = (3*3)*3*32 + 32
model.add(Conv2D(32, (3, 3), padding='same', input_shape=X_train.shape[1:]))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))

# 2nd layer: convolution + pooling
model.add(Conv2D(32, (3, 3), kernel_initializer=initializer))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

# 3rd layer: convolution
model.add(Conv2D(64, (3, 3), padding='same', kernel_initializer=initializer))
model.add(BatchNormalization())
model.add(Activation('relu'))

# 4th layer: convolution + pooling
model.add(Conv2D(64, (3, 3), kernel_initializer=initializer))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# 5th layer: convolution + pooling
model.add(Conv2D(128, (3, 3), kernel_initializer=initializer))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# 6th~8th layers (MLP): flatten (FC) + hidden(512) + output(15)
model.add(Flatten())
model.add(Dense(512))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))
model.add(Dense(numclass))
model.add(Activation('softmax'))
model.summary()
kernel_initializer='he_uniform'

This should work: pass the initializer by its registered string name instead.
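As a sanity check, here is a trimmed-down version of the model above using the string form (shapes assumed from the comments: 224x224x3 input, 15 output classes; not the asker's full architecture), which builds without the initializer error:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (Conv2D, BatchNormalization, Activation,
                                     MaxPooling2D, Flatten, Dense)

numclass = 15  # assumed from the output(15) comment in the question

model = Sequential([
    # String name is resolved to a callable initializer internally
    Conv2D(32, (3, 3), padding='same', kernel_initializer='he_uniform',
           input_shape=(224, 224, 3)),
    BatchNormalization(),
    Activation('relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(numclass, activation='softmax'),
])
```

Because `input_shape` is given, the model is built eagerly, so a failure would surface right here rather than at `fit()` time.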