Video Stabilization Using Point Feature Matching WITHOUT LOSING RGB COLORS on Frames in MATLAB

I want to stabilize a 13-minute video shot by a quadcopter over a traffic intersection without losing its 3 color channels (RGB). MATLAB's own function results in a grayscale video, which is undesirable for the main and future objective, vehicle tracking. New ideas are appreciated.

You can find my own code below (it works, but converts the video to grayscale); it was adapted from MATLAB's own script on the following page: Matlab's related Webpage: Video Stabilization Using Point Feature Matching

clc; clear all; close all;

filename = 'Quad_video_erst.mp4';
hVideoSrc = vision.VideoFileReader(filename, 'ImageColorSpace', 'Intensity');

% Create and open video file
myVideo = VideoWriter('vivi.avi');        
open(myVideo);
hVPlayer = vision.VideoPlayer;   

%% Step 1: Read Frames from a Movie File

for i=1:10 % testing for a short run 

    imgA = step(hVideoSrc); % Read first frame into imgA
    imgB = step(hVideoSrc); % Read second frame into imgB


%% Step 2: SURF DETECTION

pointsA=surf_function_CAN(imgA);
pointsB=surf_function_CAN(imgB);



%% Step 3. Select Correspondences Between Points
% Extract FREAK descriptors for the corners
[featuresA, pointsA] = extractFeatures(imgA, pointsA);
[featuresB, pointsB] = extractFeatures(imgB, pointsB);

indexPairs = matchFeatures(featuresA, featuresB);
pointsA = pointsA(indexPairs(:, 1), :);
pointsB = pointsB(indexPairs(:, 2), :);


%% Step 4: Estimating Transform from Noisy Correspondences
[tform, pointsBm, pointsAm] = estimateGeometricTransform(...
    pointsB, pointsA, 'affine');
imgBp = imwarp(imgB, tform, 'OutputView', imref2d(size(imgB)));
pointsBmp = transformPointsForward(tform, pointsBm.Location);


%% Step 5: Transform Approximation and Smoothing

% Extract scale and rotation part sub-matrix.
H = tform.T;
R = H(1:2,1:2);
% Compute theta from mean of two possible arctangents
theta = mean([atan2(R(2),R(1)) atan2(-R(3),R(4))]);
% Compute scale from mean of two stable mean calculations
scale = mean(R([1 4])/cos(theta));
% Translation remains the same:
translation = H(3, 1:2);
% Reconstitute new s-R-t transform:
HsRt = [[scale*[cos(theta) -sin(theta); sin(theta) cos(theta)];...
  translation], [0 0 1]'];
tformsRT = affine2d(HsRt);

imgBold = imwarp(imgB, tform, 'OutputView', imref2d(size(imgB)));
imgBsRt = imwarp(imgB, tformsRT, 'OutputView', imref2d(size(imgB)));


%% Write the Video
writeVideo(myVideo,imfuse(imgBold,imgBsRt,'ColorChannels','red-cyan'));


end

And the function:

function [ surf_points ] = surf_function_CAN(img)
% Detect SURF features in a grayscale frame and keep only the strongest
% points that fall inside five hard-coded rectangular regions of interest.

surfpoints_raw = detectSURFFeatures(img);
[featuresOriginal, validPtsOriginal] = extractFeatures(img, surfpoints_raw);
strongestPoints = validPtsOriginal.selectStrongest(1600);

array = strongestPoints.Location;

% New - Get X and Y coordinates

X = array(:,1);
Y = array(:,2);

% New - Determine a mask to grab the points we want

ind = (((X>156-9-70 & X<156+9+70) & (Y>406-9-70 & Y<406+9+70)) | ...
((X>684-11-70 & X<684+11+70) & (Y>274-11-70 & Y<274+11+70)) | ...
((X>1066-15-70 & X<1066+15+70) & (Y>67-15-70 & Y<67+15+70)) | ...
((X>1559-15-70 & X<1559+15+70) & (Y>867-15-70 & Y<867+15+70)) | ...
((X>1082-18-70 & X<1082+18+70) & (Y>740-18-100 & Y<740+18+100)))  ;

% New - Create new SURFPoints structure that contains all information
% from the points we need

array_filtered =strongestPoints(ind);
surf_points= array_filtered;

end
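
For reference, here is a minimal sketch of how this helper could be driven from a color frame instead of the 'Intensity' reader used above (colorFrame and grayFrame are just illustrative names): detection runs on a grayscale copy while the RGB frame stays untouched for later warping.

colorFrame = step(hVideoSrc);              % RGB frame (reader opened without 'ImageColorSpace', 'Intensity')
grayFrame  = rgb2gray(colorFrame);         % detectSURFFeatures expects a grayscale image
roiPoints  = surf_function_CAN(grayFrame); % strongest SURF points inside the fixed ROIs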

First of all, if you read their example carefully, you should base your code on the part where they run the whole loop, not the part where they show how to do it between 2 frames, because the two are not fully compatible. Other than that, the only thing you need to do is perform the analysis on the grayscale images but apply the transformation to the color images:

%% Load Video and Open Save File
filename = 'shaky_car.avi';
hVideoSrc = vision.VideoFileReader(filename);

myVideo = VideoWriter('vivi.avi');
open(myVideo);

% Get next Image
colorImg = step(hVideoSrc);
% Try to Convert to Grayscale
try
    imgB = rgb2gray(colorImg);
    RGB = true;
catch % Image is not RGB
    imgB = colorImg;
    RGB = false;
end

Hcumulative = eye(3);
ptThresh = 0.1;
% Loop Through Video
while ~isDone(hVideoSrc)
    imgA = imgB;
    % Get Next Image
    colorImg = step(hVideoSrc);
    % Convert to Grayscale
    if RGB
        imgB = rgb2gray(colorImg);
    else
        imgB = colorImg;
    end

    %% Calculate Transformation
    % Generate Prospective Points
    pointsA = detectFASTFeatures(imgA, 'MinContrast', ptThresh);
    pointsB = detectFASTFeatures(imgB, 'MinContrast', ptThresh);

    % Extract Features for the Corners
    [featuresA, pointsA] = extractFeatures(imgA, pointsA);
    [featuresB, pointsB] = extractFeatures(imgB, pointsB);

    indexPairs = matchFeatures(featuresA, featuresB);
    pointsA = pointsA(indexPairs(:, 1), :);
    pointsB = pointsB(indexPairs(:, 2), :);

    [tform] = estimateGeometricTransform(pointsB, pointsA, 'affine');

    % Extract Rotation & Translations
    H = tform.T;
    R = H(1:2,1:2);

    theta = mean([atan2(R(2),R(1)) atan2(-R(3),R(4))]);

    scale = mean(R([1 4])/cos(theta));

    translation = H(3, 1:2);

    % Reconstitute Transform
    HsRt = [[scale*[cos(theta) -sin(theta); sin(theta) cos(theta)]; ...
        translation], [0 0 1]'];
    Hcumulative = HsRt*Hcumulative;

    % Perform Transformation on Color Image
    img = imwarp(colorImg, affine2d(Hcumulative),'OutputView',imref2d(size(imgB)));

    % Save Transformed Color Image to Video File
    writeVideo(myVideo,img)
end
close(myVideo)
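
If you would rather keep the ROI-restricted SURF detection from your surf_function_CAN helper instead of detectFASTFeatures, only the two detection lines inside the loop should need to change; a minimal sketch under that assumption (imgA and imgB are the grayscale frames already computed in the loop):

    % Sketch: swap the FAST detector for the SURF-based ROI helper from the question
    pointsA = surf_function_CAN(imgA);
    pointsB = surf_function_CAN(imgB);
    % extractFeatures, matchFeatures, estimateGeometricTransform and the final
    % imwarp on colorImg stay exactly as in the loop above.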