How to detect an object that blends with the background?
I am a beginner, and I am trying to apply a contour to the white remote on the left, which is the same colour as the background.
import cv2
import numpy as np

def f(x):
    pass  # no-op callback required by createTrackbar

a = cv2.imread(file_name)  # file_name is the path to the input image
imgGray = cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)
imgGray = cv2.GaussianBlur(imgGray, (11, 11), 20)
k5 = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]])  # sharpening kernel
sharpen = cv2.filter2D(imgGray, -1, k5)
imgGray = sharpen

cv2.namedWindow("Control")
cv2.createTrackbar("blocksize", "Control", 33, 1000, f)
cv2.createTrackbar("c", "Control", 3, 100, f)

while True:
    strel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
    blocksize = cv2.getTrackbarPos("blocksize", "Control")
    c = cv2.getTrackbarPos("c", "Control")
    if blocksize % 2 == 0:  # blockSize must be odd
        blocksize += 1
    thrash = cv2.adaptiveThreshold(imgGray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, blockSize=blocksize, C=c)
    thrash1 = cv2.adaptiveThreshold(imgGray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                    cv2.THRESH_BINARY_INV, blockSize=blocksize, C=c)
    cv2.imshow("mean", thrash)
    cv2.imshow("gaussian", thrash1)
    # r, thrash = cv2.threshold(imgGray, 150, 255, cv2.THRESH_BINARY_INV)
    key = cv2.waitKey(1000)
    if key == 32:  # space bar stops the tuning loop
        break

edges = cv2.Canny(thrash, 100, 200)
cv2.imshow('sharpen', sharpen)
cv2.imshow('edges', edges)
cv2.imshow('grey', imgGray)
cv2.imshow('thrash', thrash)
cv2.waitKey(0)

circles = cv2.HoughCircles(imgGray, cv2.HOUGH_GRADIENT, 1, 60,
                           param1=240, param2=50, minRadius=0, maxRadius=0)
contours, _ = cv2.findContours(thrash, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
putlabel(circles, a, contours)  # putlabel is a user-defined helper (not shown)
This is everything I have tried. I have also tried morphological operations such as dilation, erosion, opening and closing, but I still cannot get a result; the kind of operations I mean are sketched below.
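For reference, those operations in OpenCV look roughly like the following (a minimal sketch applied to the binary image thrash from the code above; the kernel size is arbitrary):

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
dilated = cv2.dilate(thrash, kernel, iterations=1)           # grow white regions
eroded = cv2.erode(thrash, kernel, iterations=1)             # shrink white regions
opened = cv2.morphologyEx(thrash, cv2.MORPH_OPEN, kernel)    # erosion then dilation: removes small specks
closed = cv2.morphologyEx(thrash, cv2.MORPH_CLOSE, kernel)   # dilation then erosion: fills small holes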
Below is my best result, but the noise is too severe and the remote is not fully outlined.
I don't think simple image processing can isolate an object that is the same colour as the background, so we have to switch to deep/machine learning. The idea is to remove the background of the image using U-2-Net, which gives us a mask of every object in the foreground, and then use HSV colour thresholding on white to isolate the object.
Here is the resulting mask after running the image through U-2-Net to remove the background.
Bitwise-and to isolate the object.
Now that we can distinguish foreground from background, we can use traditional image processing. Next we use HSV colour thresholding with a lower/upper colour range to isolate white, which produces this mask. You can use an HSV color thresholder script to determine the lower/upper ranges; a sketch of such a script follows.
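A minimal sketch of that kind of thresholder script, built on OpenCV trackbars (the filename bg_removed.png and the window/trackbar names here are only placeholders; slide the bounds until only the white object remains, then note down the values):

import cv2
import numpy as np

def nothing(x):
    pass  # no-op callback for the trackbars

image = cv2.imread('bg_removed.png')  # placeholder: the background-removed image

cv2.namedWindow('HSV Thresholder')
# One trackbar per lower/upper HSV bound
for name, init, maxval in [('HMin', 0, 179), ('SMin', 0, 255), ('VMin', 0, 255),
                           ('HMax', 179, 179), ('SMax', 255, 255), ('VMax', 255, 255)]:
    cv2.createTrackbar(name, 'HSV Thresholder', init, maxval, nothing)

while True:
    lower = np.array([cv2.getTrackbarPos(n, 'HSV Thresholder') for n in ('HMin', 'SMin', 'VMin')])
    upper = np.array([cv2.getTrackbarPos(n, 'HSV Thresholder') for n in ('HMax', 'SMax', 'VMax')])

    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    result = cv2.bitwise_and(image, image, mask=mask)

    cv2.imshow('HSV Thresholder', result)
    if cv2.waitKey(10) & 0xFF == 27:  # Esc to quit
        break

cv2.destroyAllWindows()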
Now we simply perform a few morphological operations to clean up any noise, find contours, and sort by largest contour area, on the assumption that the largest contour is the object we want. Here is the result.
Code
import cv2
import numpy as np
# Load image + mask, grayscale the mask, Otsu's threshold, bitwise-and to remove background
image = cv2.imread("1.jpg") # This is the original image
original = image.copy()
mask = cv2.imread("1.png") # This is the mask generated from U-2-Net
gray = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
bg_removed = cv2.bitwise_and(image, image, mask=thresh)
# HSV color thresholding
hsv = cv2.cvtColor(bg_removed, cv2.COLOR_BGR2HSV)
lower = np.array([0, 0, 0])
upper = np.array([179, 33, 255])
hsv_mask = cv2.inRange(hsv, lower, upper)
isolated = cv2.bitwise_and(bg_removed, bg_removed, mask=hsv_mask)
isolated = cv2.cvtColor(isolated, cv2.COLOR_BGR2GRAY)
isolated = cv2.threshold(isolated, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Morph operations to remove small artifacts and noise
open_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
opening = cv2.morphologyEx(isolated, cv2.MORPH_OPEN, open_kernel, iterations=1)
close_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
close = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, close_kernel, iterations=1)
# Find contours and sort by largest contour area
cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
for c in cnts:
    cv2.drawContours(original, [c], -1, (36,255,12), 3)
    break
cv2.imshow("bg_removed", bg_removed)
cv2.imshow("hsv_mask", hsv_mask)
cv2.imshow('isolated', isolated)
cv2.imshow('original', original)
cv2.waitKey()
If anyone has an approach that uses simple image processing instead of deep/machine learning, I would love to know how!
I came up with a pure image-processing approach, but the result is not as accurate as the one @nathancy describes.
Theory
TLDR; I am using Difference of Gaussians (DoG), which is a two-stage edge detector:
- Take the grayscale image
- Perform two different blur operations on it
- Subtract the blurred images
Blurring generally acts to suppress high frequencies. By subtracting the results of two different blur operations, we obtain a band-pass filter. To quote this blog: "subtracting one blurred image from the other preserves the spatial information that lies between the range of frequencies preserved in the two blurred images."
I wrote a simple function that returns the difference of two blurred images:
def dog(img, k1, s1, k2, s2):
    # Difference of Gaussians: blur with two different kernels/sigmas and subtract.
    # Note: on uint8 images the subtraction wraps around wherever b2 > b1.
    b1 = cv2.GaussianBlur(img, (k1, k1), s1)
    b2 = cv2.GaussianBlur(img, (k2, k2), s2)
    return b1 - b2
Approach
- Take the grayscale image
- Perform Gaussian blur with different kernel sizes and sigma values
- Subtract the blurred images
- Apply Otsu thresholding
- Find contours above a sufficiently large area
- Select contours based on extent
Note: extent is a property of a contour; it is the ratio of the contour area to the area of its corresponding bounding rectangle. Taken from here
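For reference, extent computed directly from that definition is a ratio in [0, 1] (a minimal sketch for a single contour c):

x, y, w, h = cv2.boundingRect(c)
extent = cv2.contourArea(c) / (w * h)  # contour area divided by bounding-rectangle area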
Code and results
import cv2
import numpy as np

img = cv2.imread('path_to_image', cv2.IMREAD_UNCHANGED)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Function to perform Difference of Gaussians
def difference_of_Gaussians(img, k1, s1, k2, s2):
    b1 = cv2.GaussianBlur(img, (k1, k1), s1)
    b2 = cv2.GaussianBlur(img, (k2, k2), s2)
    return b1 - b2

DoG_img = difference_of_Gaussians(gray, 7, 7, 17, 13)
As you can see, it acts as an edge detector. You can vary the kernel sizes (k1, k2) and sigma values (s1, s2).
# Applying Otsu threshold and finding contours
th = cv2.threshold(DoG_img, 127, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
contours, hierarchy = cv2.findContours(th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Create copy of original image
img1 = img.copy()

# For each contour above a certain area and extent, draw the minimum bounding box
for c in contours:
    area = cv2.contourArea(c)
    if area > 1500:
        x, y, w, h = cv2.boundingRect(c)
        # Note: due to operator precedence this evaluates as (area / w) * h,
        # not area / (w * h) as in the extent definition above.
        extent = int(area) / w * h
        if extent > 2000:
            rect = cv2.minAreaRect(c)
            box = cv2.boxPoints(rect)
            box = np.int0(box)
            cv2.drawContours(img1, [box], 0, (0, 255, 0), 4)
As you can see, the result is not perfect. The object's shadow is also picked up during edge detection (Difference of Gaussians). You can try varying the parameters to see whether the result improves.