Removing colored lines in Matlab

I am trying to remove colored lines (specifically yellow and blue lines) from a series of images in Matlab. An example image can be found here:

I was able to segment out the blue line segments using basic thresholding. I was also able to segment out the bright yellow circles inside the yellow line segments using thresholding. Finally, I am removing the remaining elements of the line segments using a Hough transform with the houghlines function and a mask.

Is there a more elegant way to do this, or am I stuck with this combination of methods?

Thanks

Edit: I have discovered that the Hough transform is only removing single pixels from my image rather than the entire yellow line. I was considering dilating around the detected pixels and checking for similarity, but I'm worried the yellow line is too close in color to the background (its position may shift, so it won't necessarily stay entirely over the dark background it happens to end on right now). Any suggestions would be greatly appreciated.
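As a rough illustration of the dilation idea, here is a minimal sketch, assuming the yellowMask produced by the Hough step below (line pixels set to 0); the structuring-element radius is a guess and would need tuning to the actual line width:

% Sketch only (assumption): grow the line pixels found by houghlines a few
% pixels in every direction before removing them; yellowMask is the mask
% built in the Hough section below, with line pixels set to 0.
linePixels = imdilate(~yellowMask, strel('disk', 2));
rgb_img(repmat(linePixels, [1 1 3])) = 0;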

%% This block was intended to deal with another data set this function has
% to analyze, but it actually ended up removing my yellow circles as well,
% making a further thresholding step unnecessary so far

% Converts to a binary image containing almost exclusively lines and crosshairs
mask = im2bw(rgb_img, 0.8);

% Invert mask
mask = ~mask;

% Remove detected lines and crosshairs by setting to 0
rgb_img(repmat(~mask, [1, 1, 3])) = 0;

%% Removes blue targeting lines if present

% Define thresholds for RGB channel 3 based on histogram settings to remove
% blue lines

channel3Min = 0.000;
channel3Max = 0.478;

% Create mask based on chosen histogram thresholds
noBlue = (rgb_img(:,:,3) >= channel3Min ) & (rgb_img(:,:,3) <= channel3Max);

% Set background pixels where noBlue is false to zero.
rgb_img(repmat(~noBlue,[1 1 3])) = 0;

%% Removes any other targeting lines if present

imageGreyed = rgb2gray(rgb_img);

% Performs canny edge detection
BW = edge(imageGreyed, 'canny');

% Computes the hough transform
[H,theta,rho] = hough(BW);

% Finds the peaks in the hough matrix
P = houghpeaks(H,5,'threshold',ceil(0.3*max(H(:))));

% Finds any large lines present in the image
lines = houghlines(BW,theta,rho,P,'FillGap',5,'MinLength',100);

colEnd = [];
rowEnd = [];

for i = 1:length(lines)

    % Extracts line start and end points from houghlines output

    pointHold = lines(i).point1;
    colEnd = [colEnd pointHold(1)];
    rowEnd = [rowEnd pointHold(2)];

    pointHold = lines(i).point2;
    colEnd = [colEnd pointHold(1)];
    rowEnd = [rowEnd pointHold(2)];

    % Creates a line segment from the line endpoints using a simple linear regression
    fit = polyfit(colEnd, rowEnd, 1);

    % Creates index of "x" (column) values to be fed into regression
    colIndex = (colEnd(1):colEnd(2));

    rowIndex = [];

    % Obtains "y" (row) pixel values from regression

    for i = colIndex

        rowHold = fit(1) * i + fit(2);
        rowIndex = [rowIndex rowHold];

    end

    % Round regression output
    rowIndex = round(rowIndex);

    % Assemble coordinate matrix
    lineCoordinates = [colIndex; rowIndex]';

    rgbDim = size(rgb_img);

    % Create mask based on input image size
    yellowMask = ones(rgbDim(1), rgbDim(2));

    for i = 1:length(rowIndex)

        yellowMask(rowIndex(i), colIndex(i)) = 0;

    end

    % Remove the lines found by hough transform
    rgb_img(repmat(~yellowMask,[1 1 3])) = 0;

end 

end

I briefly tested the example given at http://de.mathworks.com/help/images/examples/color-based-segmentation-using-k-means-clustering.html?prodcode=IP&language=en

using your image:

he = imread('HlQVN.jpg');
imshow(he)
cform = makecform('srgb2lab');
lab_he = applycform(he,cform);
ab = double(lab_he(:,:,2:3));
nrows = size(ab,1);
ncols = size(ab,2);
ab = reshape(ab,nrows*ncols,2);
nColors = 3;
% repeat the clustering 3 times to avoid local minima
[cluster_idx, cluster_center] = kmeans(ab,nColors,'distance','sqEuclidean', ...
                                      'Replicates',3);
pixel_labels = reshape(cluster_idx,nrows,ncols);
segmented_images = cell(1,3);
rgb_label = repmat(pixel_labels,[1 1 3]);

for k = 1:nColors
    color = he;
    color(rgb_label ~= k) = 0;
    segmented_images{k} = color;
end
imshow(segmented_images{1}), title('objects in cluster 1');

This already identifies the blue line quite well.
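A possible next step (sketch only): turn the cluster containing the blue line into a removal mask. kmeans numbers its clusters arbitrarily, so the assumption below that the blue line landed in cluster 1 has to be verified, for example by inspecting cluster_center or the displayed segmented images:

blueCluster = 1;                          % assumed index of the blue-line cluster
blueMask = (pixel_labels == blueCluster); % true where a pixel was assigned to that cluster
he(repmat(blueMask, [1 1 3])) = 0;        % zero out the blue-line pixels
imshow(he), title('image with blue cluster removed');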

This post will not go into the image-processing side of the problem; it focuses only on the implementation and suggests ways to improve the existing code. As it stands, the code performs a polyfit calculation on every loop iteration, and I am not sure that part can be vectorized. So let's try to vectorize the rest of the code inside the loop and hope that gives the overall code some speedup. The changes I would like to propose are at two steps in the innermost loop.

1) Replace -

rowIndex=[]
for i = colIndex
    rowHold = fit(1) * i + fit(2)
    rowIndex = [rowIndex rowHold];    
end

with -

rowIndex = fit(1)*colIndex + fit(2)

2) Replace -

yellowMask = ones(rgbDim(1), rgbDim(2));
for i = 1:length(rowIndex)
    yellowMask(rowIndex(i), colIndex(i)) = 0;
end
rgb_img(repmat(~yellowMask,[1 1 3])) = 0;

with -

idx1 = (colIndex-1)*rgbDim(1) + rowIndex
rgb_img(bsxfun(@plus,idx1(:),[0:rgbDim(3)-1]*rgbDim(1)*rgbDim(2))) = 0;
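
With both replacements applied, the body of the outer loop might look like the following sketch (rounding is kept from the original code; as in the original, there is no check that the fitted row values stay inside the image):

fit      = polyfit(colEnd, rowEnd, 1);
colIndex = colEnd(1):colEnd(2);
rowIndex = round(fit(1)*colIndex + fit(2));   % vectorized regression evaluation

rgbDim = size(rgb_img);
idx1   = (colIndex-1)*rgbDim(1) + rowIndex;   % linear indices in the first channel
rgb_img(bsxfun(@plus, idx1(:), [0:rgbDim(3)-1]*rgbDim(1)*rgbDim(2))) = 0;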

As it turned out, the answer involved converting the image to the L*a*b* color space and thresholding there. This segmented out the lines with minimal loss in the rest of the image. The code is below:

    % Convert RGB image to L*a*b color space for thresholding
    rgb_img = im2double(rgb_img);
    cform = makecform('srgb2lab', 'AdaptedWhitePoint', whitepoint('D65'));
    I = applycform(rgb_img,cform);

    % Define thresholds for channel 2 based on histogram settings
    channel2Min = -1.970;
    channel2Max = 48.061;

    % Create mask based on chosen histogram threshold
    BW = (I(:,:,2) <= channel2Min ) | (I(:,:,2) >= channel2Max);

    % Determines the eccentricity for regions of pixels; basically how line-like
    % (vals close to 1) or circular (vals close to 0) the region is
    rp = regionprops(BW, 'PixelIdxList', 'Eccentricity');

    % Selects for regions which are not line segments (areas which
    % may have been incorrectly thresholded out with the crosshairs)
    rp = rp([rp.Eccentricity] < 0.99); 

    % Removes the non-line segment regions from the mask
    BW(vertcat(rp.PixelIdxList)) = false;

    % Set background pixels where BW is false to zero.
    rgb_img(repmat(BW,[1 1 3])) = 0;
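
On R2014b or later, the makecform/applycform pair can be replaced by rgb2lab, which performs the same conversion (D65 white point by default); a quick visual check of the result might look like this:

    % Same L*a*b* conversion on newer releases (assumes R2014b+); rgb2lab
    % uses a D65 white point by default, matching the makecform call above.
    I = rgb2lab(im2double(rgb_img));

    % Quick visual check after the removal step
    imshow(rgb_img), title('targeting lines removed');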