Webcam color calibration using OpenCV

After ordering six webcams online for a project, I noticed that their color output was inconsistent.

To compensate for this, I tried taking a template image, extracting its R, G, and B histograms, and matching the target image's RGB histograms against them.

This was inspired by the description of a solution to a very similar problem: Comparative color calibration

The perfect solution would look like this:

In an attempt to solve this problem, I wrote the following script, which performs poorly:

EDIT (thanks to @DanMašek and @api55)

import numpy as np

def show_image(title, image, width = 300):
    # resize the image to have a constant width, just to
    # make displaying the images take up less screen real
    # estate
    r = width / float(image.shape[1])
    dim = (width, int(image.shape[0] * r))
    resized = cv2.resize(image, dim, interpolation = cv2.INTER_AREA)

    # show the resized image
    cv2.imshow(title, resized)


def hist_match(source, template):
    """
    Adjust the pixel values of a grayscale image such that its histogram
    matches that of a target image

    Arguments:
    -----------
        source: np.ndarray
            Image to transform; the histogram is computed over the flattened
            array
        template: np.ndarray
            Template image; can have different dimensions to source
    Returns:
    -----------
        matched: np.ndarray
            The transformed output image
    """

    oldshape = source.shape
    source = source.ravel()
    template = template.ravel()

    # get the set of unique pixel values and their corresponding indices and
    # counts
    s_values, bin_idx, s_counts = np.unique(source, return_inverse=True,
                                            return_counts=True)
    t_values, t_counts = np.unique(template, return_counts=True)

    # take the cumsum of the counts and normalize by the number of pixels to
    # get the empirical cumulative distribution functions for the source and
    # template images (maps pixel value --> quantile)
    s_quantiles = np.cumsum(s_counts).astype(np.float64)
    s_quantiles /= s_quantiles[-1]
    t_quantiles = np.cumsum(t_counts).astype(np.float64)
    t_quantiles /= t_quantiles[-1]

    # interpolate linearly to find the pixel values in the template image
    # that correspond most closely to the quantiles in the source image
    interp_t_values = np.interp(s_quantiles, t_quantiles, t_values)

    return interp_t_values[bin_idx].reshape(oldshape)

import cv2

source = cv2.imread('/media/somadetect/Lexar/color_transfer_data/1/frame10.png')
s_b = source[:,:,0]
s_g = source[:,:,1]
s_r = source[:,:,2]
template =  cv2.imread('/media/somadetect/Lexar/color_transfer_data/5/frame6.png')
t_b = source[:,:,0]
t_r = source[:,:,1]
t_g = source[:,:,2]

matched_b = hist_match(s_b, t_b)
matched_g = hist_match(s_g, t_g)
matched_r = hist_match(s_r, t_r)

y,x,c = source.shape
transfer  = np.empty((y,x,c), dtype=np.uint8)

transfer[:,:,0] = matched_r
transfer[:,:,1] = matched_g
transfer[:,:,2] = matched_b

show_image("Template", template)
show_image("Target", source)
show_image("Transfer", transfer)
cv2.waitKey(0)
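To sanity-check `hist_match` in isolation, it can be run on small synthetic arrays instead of webcam frames; a minimal sketch (the array contents here are made up for illustration), using a condensed copy of the same ECDF-based matching as above:

```python
import numpy as np

def hist_match(source, template):
    # Same ECDF-based histogram matching as above, condensed
    oldshape = source.shape
    source = source.ravel()
    template = template.ravel()
    s_values, bin_idx, s_counts = np.unique(source, return_inverse=True,
                                            return_counts=True)
    t_values, t_counts = np.unique(template, return_counts=True)
    s_quantiles = np.cumsum(s_counts).astype(np.float64)
    s_quantiles /= s_quantiles[-1]
    t_quantiles = np.cumsum(t_counts).astype(np.float64)
    t_quantiles /= t_quantiles[-1]
    interp_t_values = np.interp(s_quantiles, t_quantiles, t_values)
    return interp_t_values[bin_idx].reshape(oldshape)

# A dark "source" and a bright "template": matching should brighten the source
rng = np.random.default_rng(0)
source = rng.integers(0, 128, size=(64, 64)).astype(np.uint8)
template = rng.integers(128, 256, size=(64, 64)).astype(np.uint8)

matched = hist_match(source, template)
# matched values are drawn from the template's value range, so the
# mean moves up toward the template's distribution
```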

Template image:

Target image:

Matched image:

I then found the link below, where Adrian (pyimagesearch) attempts to solve a very similar problem: link

Fast Color Transfer

The results seem decent, but there are some saturation defects. I would welcome any suggestions or pointers on how to address this, so that all the webcam outputs can be calibrated to produce similar colors based on a single template image.
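For reference, Adrian's approach boils down to matching the per-channel mean and standard deviation (computed in the L*a*b* color space) rather than the full histogram. A minimal numpy sketch of just the statistics-matching step, on made-up arrays and with the color-space conversion omitted:

```python
import numpy as np

def match_channel_stats(source, template):
    """Shift and scale a channel so its mean/std match the template's."""
    s_mean, s_std = source.mean(), source.std()
    t_mean, t_std = template.mean(), template.std()
    result = (source - s_mean) * (t_std / s_std) + t_mean
    # Clip back into the displayable range; this is one source of
    # saturation artifacts when the distributions differ a lot
    return np.clip(result, 0, 255)

rng = np.random.default_rng(1)
source = rng.normal(60, 10, size=(32, 32))
template = rng.normal(150, 30, size=(32, 32))

matched = match_channel_stats(source, template)
```

In the full method this is applied to each of the L*, a*, b* channels after `cv2.cvtColor(img, cv2.COLOR_BGR2LAB)`, then converted back to BGR.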

Your script performs poorly because you are using the wrong indexing.

OpenCV images are BGR, so this is correct in your code:

source = cv2.imread('/media/somadetect/Lexar/color_transfer_data/1/frame10.png')
s_b = source[:,:,0]
s_g = source[:,:,1]
s_r = source[:,:,2]
template =  cv2.imread('/media/somadetect/Lexar/color_transfer_data/5/frame6.png')
t_b = source[:,:,0]
t_r = source[:,:,1]
t_g = source[:,:,2]

But this is wrong:

transfer[:,:,0] = matched_r
transfer[:,:,1] = matched_g
transfer[:,:,2] = matched_b

because here you are writing the channels in RGB order instead of BGR. The colors get swapped, but OpenCV still interprets the image as BGR. That is why it looks strange.

It should be:

transfer[:,:,0] = matched_b
transfer[:,:,1] = matched_g
transfer[:,:,2] = matched_r
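The underlying BGR/RGB swap is easy to demonstrate on a synthetic pixel: reversing the channel axis turns BGR into RGB (which is also what `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)` does), so writing channels back in the wrong order lands red data in the blue slot:

```python
import numpy as np

# A 1x1 "image" holding a pure-red pixel in BGR order: (B, G, R) = (0, 0, 255)
bgr = np.array([[[0, 0, 255]]], dtype=np.uint8)

# Reversing the last axis converts BGR -> RGB
rgb = bgr[..., ::-1]
print(rgb[0, 0])  # [255   0   0]

# If this RGB-ordered buffer is later interpreted as BGR,
# the red pixel displays as blue: the 255 now sits in the blue slot
assert rgb[0, 0, 0] == 255
```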

As another possible solution, you can check which parameters can be set on your cameras. Sometimes they have automatic parameters that you can set manually so that all the cameras match. Also, be careful with these automatic parameters: usually white balance, focus, and others are set to automatic, and they can change considerably on the same camera from one moment to the next (depending on lighting, etc.).
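For example, with OpenCV's `VideoCapture` you can try to disable the automatic settings and pin them to fixed values. A sketch of this configuration step; note that support depends on the capture backend and camera driver (`set` may silently do nothing on some devices), and the specific values below are illustrative:

```python
import cv2

cap = cv2.VideoCapture(0)

# Try to disable automatic white balance and fix a color temperature
cap.set(cv2.CAP_PROP_AUTO_WB, 0)
cap.set(cv2.CAP_PROP_WB_TEMPERATURE, 4500)

# Fix exposure and focus as well, since they also drift between frames
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1)  # "manual" mode on many V4L2 devices
cap.set(cv2.CAP_PROP_EXPOSURE, -6)
cap.set(cv2.CAP_PROP_AUTOFOCUS, 0)

ret, frame = cap.read()
cap.release()
```

Applying the same fixed settings to all six cameras removes one source of frame-to-frame and camera-to-camera variation before any software calibration.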

UPDATE:

As DanMašek pointed out,

t_b = source[:,:,0]
t_r = source[:,:,1]
t_g = source[:,:,2]

is wrong, because r should be index 2 and g index 1 (note also that these channels should be taken from template, not source, or the matching becomes a no-op):

t_b = template[:,:,0]
t_g = template[:,:,1]
t_r = template[:,:,2]

I have tried a white-patch-based calibration procedure. Here is the link: https://theiszm.wordpress.com/tag/white-balance/.

The code snippet follows:

import cv2
import math
import numpy as np
import sys
from matplotlib import pyplot as plt

def hist_match(source, template):
    """
    Adjust the pixel values of a grayscale image such that its histogram
    matches that of a target image

    Arguments:
    -----------
        source: np.ndarray
            Image to transform; the histogram is computed over the flattened
            array
        template: np.ndarray
            Template image; can have different dimensions to source
    Returns:
    -----------
        matched: np.ndarray
            The transformed output image
    """

    oldshape = source.shape
    source = source.ravel()
    template = template.ravel()

    # get the set of unique pixel values and their corresponding indices and
    # counts
    s_values, bin_idx, s_counts = np.unique(source, return_inverse=True,
                                            return_counts=True)
    t_values, t_counts = np.unique(template, return_counts=True)

    # take the cumsum of the counts and normalize by the number of pixels to
    # get the empirical cumulative distribution functions for the source and
    # template images (maps pixel value --> quantile)
    s_quantiles = np.cumsum(s_counts).astype(np.float64)
    s_quantiles /= s_quantiles[-1]
    t_quantiles = np.cumsum(t_counts).astype(np.float64)
    t_quantiles /= t_quantiles[-1]

    # interpolate linearly to find the pixel values in the template image
    # that correspond most closely to the quantiles in the source image
    interp_t_values = np.interp(s_quantiles, t_quantiles, t_values)
    return interp_t_values[bin_idx].reshape(oldshape)

# Read original image
im_o = cv2.imread('/media/Lexar/color_transfer_data/5/frame10.png')
im = im_o.copy()  # copy, so im_o stays unmodified for the Y-plane step below
cv2.imshow('Org',im)
cv2.waitKey()

B = im[:,:, 0]
G = im[:,:, 1]
R = im[:,:, 2]

R = R.astype(float)
G = G.astype(float)
B = B.astype(float)

# Extract pixel values at a patch that should be pure white (R = G = B = 255)
B_white = B[168, 351]
G_white = G[168, 351]
R_white = R[168, 351]

print(B_white)
print(G_white)
print(R_white)

# Compensate for the bias using normalization statistics
R_balanced = R / R_white
G_balanced = G / G_white
B_balanced = B / B_white

R_balanced[np.where(R_balanced > 1)] = 1
G_balanced[np.where(G_balanced > 1)] = 1
B_balanced[np.where(B_balanced > 1)] = 1

B_balanced=B_balanced * 255
G_balanced=G_balanced * 255
R_balanced=R_balanced * 255

B_balanced= np.array(B_balanced).astype('uint8')
G_balanced= np.array(G_balanced).astype('uint8')
R_balanced= np.array(R_balanced).astype('uint8')

im[:,:, 0] = (B_balanced)
im[:,:, 1] = (G_balanced)
im[:,:, 2] = (R_balanced)

# Notice saturation artifacts 
cv2.imshow('frame',im)
cv2.waitKey()

# Extract the Y plane in original image and match it to the transformed image 
im_o = cv2.cvtColor(im_o, cv2.COLOR_BGR2YCR_CB)
im_o_Y = im_o[:,:,0]

im = cv2.cvtColor(im, cv2.COLOR_BGR2YCR_CB)
im_Y = im[:,:,0]

matched_y = hist_match(im_o_Y, im_Y)
matched_y= np.array(matched_y).astype('uint8')
im[:,:,0] = matched_y

im_final = cv2.cvtColor(im, cv2.COLOR_YCR_CB2BGR)
cv2.imshow('frame',im_final)
cv2.waitKey()
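The per-channel scaling above can also be written vectorized over the whole image at once. A compact sketch, assuming the white-patch coordinate is known (the coordinate and pixel values here are illustrative):

```python
import numpy as np

def white_patch_balance(img, y, x):
    """Scale each channel so the patch at (y, x) maps to pure white."""
    img = img.astype(np.float64)
    white = img[y, x, :]          # (B, G, R) values of the white patch
    balanced = img / white        # per-channel gain = 1 / patch value
    # Clip to avoid wrap-around, then rescale to 8-bit range
    return (np.clip(balanced, 0, 1) * 255).astype(np.uint8)

# Synthetic image with a color cast: the "white" patch reads (200, 220, 180)
img = np.full((10, 10, 3), (200, 220, 180), dtype=np.uint8)
out = white_patch_balance(img, 5, 5)
print(out[5, 5])  # [255 255 255]
```

The clipping step is where the saturation artifacts come from: any pixel brighter than the white patch in some channel gets flattened to 255 there.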

The input image is:

The result of the script is:

Thank you all for the suggestions and pointers!!