Image segmentation: image and label IDs don't match when evaluating the results in the prediction step

I have a dataset of images and image masks that feed a neural network. After the training process, I want to evaluate the results visually. So I wrote a function to display the reference image, its associated mask image, and the predicted image in a 3 x 3 grid, using the Keras ImageDataGenerator class, NumPy, and Matplotlib. However, when the images are displayed, the reference image and the mask image are not related: they do not share the same ID.

For example, the code might display something like this:

[ ref_image_21, mask_image_43, predicted_image ]
[ ref_image_3, mask_image_38, predicted_image ]
[ ref_image_200, mask_image_12, predicted_image ]

The code is as follows:

from tensorflow.keras.preprocessing.image import ImageDataGenerator
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

target_size = (512, 512)

image_datagen = ImageDataGenerator(rescale=1./255)
mask_datagen = ImageDataGenerator()
test_image_generator = image_datagen.flow_from_directory('path/to/val_imgs', target_size=target_size, class_mode=None, batch_size = 6)
test_mask_generator = mask_datagen.flow_from_directory('path/to/val_labels/', target_size=target_size, class_mode=None, batch_size = 6)

# Combine the image and mask iterators into a single generator yielding (image_batch, label_batch)
def combine_generator(gen1, gen2, batch_list=6, training=True):

    while True:
        image_batch, label_batch=next(gen1)[0], np.expand_dims(next(gen2)[0][:,:,0],axis=-1)
        image_batch, label_batch=np.expand_dims(image_batch,axis=0),np.expand_dims(label_batch,axis=0)

        for i in range(batch_list-1):
            image_i,label_i = next(gen1)[0], np.expand_dims(next(gen2)[0][:,:,0],axis=-1)
            image_i, label_i=np.expand_dims(image_i,axis=0),np.expand_dims(label_i,axis=0)
            image_batch=np.concatenate([image_batch,image_i],axis=0)
            label_batch=np.concatenate([label_batch,label_i],axis=0)
            
        yield((image_batch,label_batch))

test_generator = combine_generator(test_image_generator, test_mask_generator,training=True)

# Display num rows of (input image, true mask, predicted mask)
def show_predictions_in_test(model_name, generator=None, num=3):
    if generator is None:
        generator = test_generator
    for i in range(num):
        image, mask=next(generator)
        sample_image, sample_mask= image[1], mask[1]
        image = np.expand_dims(sample_image, axis=0)
        pr_mask = model_name.predict(image)
        pr_mask=np.expand_dims(pr_mask[0].argmax(axis=-1),axis=-1)
        display([sample_image, sample_mask,pr_mask])
    
def display(display_list,title=['Input Image', 'True Mask', 'Predicted Mask']):
    plt.figure(figsize=(15, 15))
    for i in range(len(display_list)):
        plt.subplot(1, len(display_list), i+1)
        plt.title(title[i])
        plt.imshow(tf.keras.preprocessing.image.array_to_img(display_list[i]),cmap='magma')
        plt.axis('off')
    plt.show()

show_predictions_in_test(model)

What am I doing wrong?

I finally found the solution: I had to add and initialize the seed parameter in both test_image_generator and test_mask_generator. So if we replace the following lines:

test_image_generator = image_datagen.flow_from_directory('path/to/val_imgs', target_size=target_size, class_mode=None, batch_size = 6)
test_mask_generator = mask_datagen.flow_from_directory('path/to/val_labels/', target_size=target_size, class_mode=None, batch_size = 6)

with:

seed = np.random.randint(0,1e5)
test_image_generator = image_datagen.flow_from_directory('path/to/val_imgs/', seed=seed, target_size=target_size, class_mode=None, batch_size = 6)
test_mask_generator = mask_datagen.flow_from_directory('path/to/val_labels/', seed=seed, target_size=target_size, class_mode=None, batch_size = 6)

then the code works and displays images like this:

[ ref_image_21, mask_image_21, predicted_image ]
[ ref_image_3, mask_image_3, predicted_image ]
[ ref_image_200, mask_image_200, predicted_image ]
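
The reason the seed is needed: flow_from_directory shuffles its file list by default, and without a shared seed the image iterator and the mask iterator shuffle independently, so their batches come from different files. Giving both iterators the same seed makes them draw the same random order. As a minimal alternative sketch (assuming the image and label filenames sort identically in both directories), shuffling can simply be disabled for evaluation, which also makes the results reproducible:

# Alternative sketch: disable shuffling so both iterators read files in the
# same sorted order (no seed needed); assumes image and label filenames
# correspond when sorted alphabetically in their respective directories.
test_image_generator = image_datagen.flow_from_directory('path/to/val_imgs/', target_size=target_size, class_mode=None, batch_size=6, shuffle=False)
test_mask_generator = mask_datagen.flow_from_directory('path/to/val_labels/', target_size=target_size, class_mode=None, batch_size=6, shuffle=False)

Either way, the key point is that the two directory iterators must traverse their directories in the same order so that each image batch lines up with its mask batch.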