Can't figure out how to solve this image segmentation problem

My training images consist of the blue channel extracted from the ELA (Error Level Analysis) of some spliced images, and the labels are just their corresponding ground-truth masks.
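
(For context, ELA here means re-saving the image as JPEG at a fixed quality and taking the per-pixel difference against the original. Below is a minimal sketch of that step; the quality setting of 90 and the contrast stretch are representative values only, not necessarily the exact ones behind the files in 'ELAs/'.)

# Rough sketch of the ELA step using PIL; quality=90 and the stretch factor
# are assumptions, not necessarily the values used for the dataset
from PIL import Image, ImageChops

def make_ela(path, quality=90):
    original = Image.open(path).convert('RGB')
    original.save('resaved.jpg', 'JPEG', quality=quality)
    resaved = Image.open('resaved.jpg')
    ela = ImageChops.difference(original, resaved)
    # Stretch the (usually tiny) differences so they are usable as features
    max_diff = max(hi for lo, hi in ela.getextrema()) or 1
    return ela.point(lambda v: min(255, v * 255 // max_diff))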

I have built a simple encoder-decoder CNN (given below) to do the segmentation, and I have also tested it on a cell-membrane segmentation task. It performed well there and produced output close to the ground-truth images, so I think the network I built is strong enough.

However, it does not work for the spliced images of the CASIA1 + CASIA1GroundTruth dataset. Please help me fix it; I have spent too many days trying different architectures and image preprocessing without success.

Input image

Ground truth

Output / generated image

For one thing, it reports very high accuracy (98%) and low loss, yet the output image is badly wrong. If you look closely, it does somewhat resemble the desired mask, but alongside it there are white regions scattered everywhere. The network does not seem able to pick up the pixel-intensity difference between the target region and the background. Please help me fix it :(
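
(A side note on that 98%: the spliced region covers only a small fraction of each 256x256 image, so plain pixel accuracy stays high even when the predicted mask barely overlaps the ground truth. A rough overlap check such as the Dice score below, run on the predictions with NumPy, shows the mismatch much more honestly; the helper name and the 0.5 threshold are my own choices, not part of the training code.)

import numpy as np

def dice_score(y_true, y_pred, threshold=0.5, eps=1e-7):
    # Dice coefficient between a binary ground-truth mask and a thresholded prediction
    p = (y_pred >= threshold).astype(np.float32)
    t = (y_true >= 0.5).astype(np.float32)
    intersection = np.sum(t * p)
    return (2.0 * intersection + eps) / (np.sum(t) + np.sum(p) + eps)

# e.g. after training:
# preds = model.predict(X_val)
# print(np.mean([dice_score(t, p) for t, p in zip(Y_val, preds)]))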

Preparation

from PIL import Image
import numpy as np
from sklearn.model_selection import train_test_split

def process(img):
    # Keep only the blue channel of the ELA image
    return img.getchannel('B')

X, Y = [], []

# splicedIMG / splicedGT hold the filenames of the ELA images and their masks
for i in splicedIMG:
    img = process(Image.open('ELAs/' + str(i)))
    arr = np.array(img)
    X.append(arr / np.max(arr))          # normalise to the 0-1 range

for i in splicedGT:
    lbl = np.array(Image.open('SGTResized/' + str(i)))
    Y.append(lbl / np.max(lbl))          # masks also normalised to 0-1

X = np.array(X).reshape(-1, 256, 256, 1)
Y = np.array(Y).reshape(-1, 256, 256, 1)

X_train, X_val, Y_train, Y_val = train_test_split(X, Y, test_size=0.2)

Segmenter model

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, BatchNormalization, MaxPooling2D, UpSampling2D

model = Sequential()

# Encoder: Conv-Conv-BatchNorm blocks, each followed by 2x2 max pooling
model.add(Conv2D(filters = 16, kernel_size = (3,3), padding = 'same',
                 activation ='relu', input_shape = (256,256,1)))
model.add(BatchNormalization())
model.add(Conv2D(filters = 16, kernel_size = (3,3),padding = 'same', 
                 activation ='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(filters = 32, kernel_size = (3,3),padding = 'same', 
                 activation ='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters = 32, kernel_size = (3,3),padding = 'same', 
                 activation ='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'same', 
                 activation ='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'same', 
                 activation ='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(filters = 128, kernel_size = (3,3),padding = 'same', 
                 activation ='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters = 128, kernel_size = (3,3),padding = 'same', 
                 activation ='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))

# Decoder: mirror of the encoder, upsampling back to 256x256
model.add(UpSampling2D(size = (2,2)))
model.add(Conv2D(filters = 128, kernel_size = (3,3),padding = 'same', 
                 activation ='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters = 128, kernel_size = (3,3),padding = 'same', 
                 activation ='relu'))
model.add(BatchNormalization())

model.add(UpSampling2D(size = (2,2)))
model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'same', 
                 activation ='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'same', 
                 activation ='relu'))
model.add(BatchNormalization())

model.add(UpSampling2D(size = (2,2)))
model.add(Conv2D(filters = 32, kernel_size = (3,3),padding = 'same', 
                 activation ='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters = 32, kernel_size = (3,3),padding = 'same', 
                 activation ='relu'))
model.add(BatchNormalization())

model.add(UpSampling2D(size = (2,2)))
model.add(Conv2D(filters = 16, kernel_size = (3,3), padding = 'same',
                 activation ='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters = 16, kernel_size = (3,3),padding = 'same', 
                 activation ='relu'))
model.add(BatchNormalization())

# 1x1 convolution: per-pixel probability of belonging to the spliced region
model.add(Conv2D(filters = 1, kernel_size = (1,1), activation = 'sigmoid'))

model.summary()

Training

from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint
# PlotLossesCallback is the live-plotting callback from the livelossplot package

model.compile(optimizer=Adam(learning_rate=0.0001), loss='binary_crossentropy', metrics=['accuracy'])

model_checkpoint = ModelCheckpoint('segmenter_weights.h5', monitor='loss', verbose=1, save_best_only=True)

model.fit(X_train, Y_train, validation_data=(X_val, Y_val), batch_size=4, epochs=200, verbose=1,
          callbacks=[PlotLossesCallback(), model_checkpoint])

Oops, I did something silly. To check what I was pulling out of the X array for testing, I multiplied that array by 255, because PIL does not display arrays in the 0-1 range. By mistake, I then reused the same modified variable and passed it in for test/prediction.
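
What I should have done is preview a converted copy, so that X itself stays in the 0-1 range used for training; a small sketch of that (the helper name is my own):

import numpy as np
from PIL import Image

def preview(arr):
    # Display a 0-1 float array with PIL without modifying the original data
    Image.fromarray((arr.squeeze() * 255).astype(np.uint8)).show()

# preview(X[0])   # X is left untouched for prediction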