Grad-CAM visualization: Invalid Argument Error: You must feed a value for placeholder tensor 'X' with dtype float and shape [x]
I am trying to visualize the important regions for a classification task with a CNN.
I am using VGG16 plus my own top layers (a global average pooling layer and a dense layer):
from keras import models
from keras import backend as K
from keras.applications.vgg16 import VGG16
from keras.layers import Dense, Lambda

model_vgg16_conv = VGG16(weights='imagenet', include_top=False, input_shape=(100, 100, 3))
model = models.Sequential()
model.add(model_vgg16_conv)
model.add(Lambda(global_average_pooling, output_shape=global_average_pooling_shape))
model.add(Dense(4, activation='softmax', kernel_initializer='uniform'))
After compiling and fitting the model, I try to run Grad-CAM on a new image:
import cv2
import numpy as np
from skimage.transform import resize

# Read as a single channel (the file is a grey-scale image)
image = cv2.imread("data/example_images/test.jpg", cv2.IMREAD_GRAYSCALE)
# Resize to 100x100
image = resize(image, (100, 100), anti_aliasing=True, mode='constant')
# Repeat the grey channel three times to get shape (1, 100, 100, 3)
image = np.repeat(image.reshape(1, 100, 100, 1), 3, axis=3)
# Weights of the dense layer and the last conv layer of the nested VGG16
class_weights = model.get_layer("dense_1").get_weights()[0]
final_conv_layer = model.get_layer("vgg16").get_layer("block5_conv3")
# Input of the nested VGG16 model and output of the dense layer
input1 = model.get_layer("vgg16").layers[0].input
output1 = model.get_layer("dense_1").output
get_output = K.function([input1], [final_conv_layer.output, output1])
Afterwards I execute
[conv_outputs, predictions] = get_output([image])
which results in the following error:
InvalidArgumentError: You must feed a value for placeholder tensor 'vgg16_input' with dtype float and shape [?,100,100,3]
[[{{node vgg16_input}}]]
[[dense_1/Softmax/_233]]
Additional information
def global_average_pooling(x):
    return K.mean(x, axis=(2, 3))

def global_average_pooling_shape(input_shape):
    return input_shape[0:2]
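For reference, a quick shape check of what these helpers compute on the channels_last VGG16 feature map; note that axis=(2, 3) averages over width and channels rather than over the two spatial dimensions:

import numpy as np
from keras import backend as K

# On a (1, 3, 3, 512) channels_last tensor, axis=(2, 3) yields shape (1, 3),
# matching lambda_1 in the summary below; a spatial global average pool
# would use axis=(1, 2) and yield (1, 512).
x = K.constant(np.random.rand(1, 3, 3, 512))
print(K.int_shape(K.mean(x, axis=(2, 3))))  # (1, 3)
print(K.int_shape(K.mean(x, axis=(1, 2))))  # (1, 512)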
Model summary:
Layer (type)                 Output Shape              Param #
=================================================================
vgg16 (Model)                (None, 3, 3, 512)         14714688
_________________________________________________________________
lambda_1 (Lambda)            (None, 3)                 0
_________________________________________________________________
dense_1 (Dense)              (None, 4)                 16
=================================================================
Total params: 14,714,704
Trainable params: 16
Non-trainable params: 14,714,688
VGG16 model summary:
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 100, 100, 3)       0
...
I am new to Grad-CAM and I am not sure whether I am just overlooking something or whether I have misunderstood the whole concept.
With Sequential, layers are added via the add() method. Because a whole Model object was added directly here, the resulting model has two inputs: one created by Sequential and one belonging to model_vgg16_conv.
>>> layer = model.layers[0]
>>> layer.get_input_at(0)
<tf.Tensor 'input_1:0' shape=(?, ?, ?, 3) dtype=float32>
>>> layer.get_input_at(1)
<tf.Tensor 'vgg16_input:0' shape=(?, ?, ?, 3) dtype=float32>
Because K.function is given only one of them, the error about the missing input 'vgg16_input' is raised. This works:
get_output = K.function([input1] + [model.input], [final_conv_layer.output, output1])
[conv_outputs, predictions] = get_output([image, image])
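The same array has to be fed twice because the two fetched tensors hang off different placeholders: final_conv_layer.output belongs to the standalone VGG16 graph (fed by 'input_1'), while dense_1's output is reached through the Sequential graph (fed by 'vgg16_input').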
But the functional API can be used in this situation instead:
from keras.models import Model

model_vgg16_conv = VGG16(weights='imagenet', include_top=False, input_shape=(100, 100, 3))
gavg = Lambda(global_average_pooling, output_shape=global_average_pooling_shape)(model_vgg16_conv.output)
output = Dense(4, activation='softmax', kernel_initializer='uniform')(gavg)
model_f = Model(model_vgg16_conv.input, output)

final_conv_layer = model_f.get_layer("block5_conv3")
get_output = K.function([model_f.input], [final_conv_layer.output, model_f.output])
[conv_outputs, predictions] = get_output([image])
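For completeness, a minimal class-activation-map sketch of how the heatmap could then be built from conv_outputs and class_weights, assuming the Lambda pools over the spatial axes (axis=(1, 2)) so that class_weights has one row per feature-map channel, i.e. shape (512, 4):

import numpy as np
import cv2

# Sketch only: assumes spatial global average pooling, so the dense-layer
# weights map the 512 conv channels to the 4 classes.
class_weights = model_f.layers[-1].get_weights()[0]   # shape (512, 4)
class_idx = np.argmax(predictions[0])                 # predicted class

# Weighted sum of the final conv feature maps: (3, 3, 512) . (512,) -> (3, 3)
cam = np.dot(conv_outputs[0], class_weights[:, class_idx])

# ReLU, normalize to [0, 1], and upsample to the input resolution
cam = np.maximum(cam, 0)
cam = cam / (cam.max() + 1e-8)
heatmap = cv2.resize(cam.astype(np.float32), (100, 100))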