How do I measure the width of an object, given that the distance from the (calibrated) camera to the object is fixed?
Sorry, I'm completely new to coding. First of all, for the purposes of this project I'm using the Python bindings to the OpenCV library.
My camera has been calibrated for fisheye distortion. I obtained the following values for K and D, the intrinsic camera matrix and the distortion coefficients respectively:
K = [[438.76709 0.00000 338.13894]
[0.00000 440.79169 246.80081]
[0.00000 0.00000 1.00000]]
D = [-0.098034379506 0.054022224927 -0.046172648829 -0.009039512970]
Focal length: 2.8mm
Field of view: 145 degrees (from manual)
When I undistort the image and display it, I get an image with black pixels in the regions that were stretched too far (expected). However, this shouldn't interfere with computing the object's width, since the object isn't large and takes up only about 20% of the image.
I will place the object 10 cm from the camera lens. From what I've read about the pinhole camera model, I will need the extrinsic parameters that govern the 3D-to-2D transformation, but I'm not sure how I should derive them.
Assuming I have the pixel coordinates of 2 points (one at each end of the edge whose length I want to measure), how do I use these derived matrices to find the real-world distance between those two points?
Also, if my rectangular object is not parallel to the camera's principal axis, is there an algorithm that computes the width even in that case?
I would use similar triangles to determine that the width in the image is proportional to the object width, with a scale factor of (distance of camera to object)/(focal length), in your case 100/2.8 ≈ 35.7. This assumes the object is at the center of the image (i.e. directly in front of the camera).
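A minimal sketch of that idea in Python, assuming the two endpoints have already been undistorted, and using the focal length in pixels from your K matrix (K[0][0] ≈ 438.77) rather than the 2.8 mm physical focal length, so the sensor's pixel pitch drops out. The endpoint coordinates are made up for illustration:
import numpy as np

# Similar-triangles estimate: real_width = pixel_width * Z / f_pixels,
# where Z is the camera-to-object distance and f_pixels comes from K.
fx = 438.76709      # focal length in pixels, K[0][0]
Z_mm = 100.0        # camera-to-object distance: 10 cm = 100 mm

# Hypothetical pixel coordinates of the two edge endpoints (already undistorted).
p1 = np.array([250.0, 240.0])
p2 = np.array([410.0, 240.0])

pixel_width = np.linalg.norm(p2 - p1)
real_width_mm = pixel_width * Z_mm / fx
print("Estimated object width: %.1f mm" % real_width_mm)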
Given that the distance between your camera and the object is fixed, what you can do is first find the pixel distance between the detected corners, and then convert it to millimetres using a pixels-per-millimetre ratio (a scale factor) calibrated from your object's known width.
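As a quick numeric illustration of that conversion (all numbers made up for the example):
# Hypothetical calibration: a reference edge known to be 50 mm spans 250 px.
pixels_per_mm = 250 / 50      # 5.0 px per mm
# Any other measured pixel length can then be converted to mm.
edge_pixels = 300
edge_mm = edge_pixels / pixels_per_mm
print(edge_mm)                # 60.0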
The algorithm used is Harris Corner Detection.
Capture a frame with the object in it:
import cv2

cap = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    cv2.imshow('LIVE FRAME!', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Save it to some location
cv2.imwrite('Your location', frame)
cap.release()
cv2.destroyAllWindows()
First calibrate the pixels-per-millimetre ratio using a reference object:
import cv2
import numpy as np

# Read the image
image = cv2.imread('Location of your previously saved frame with the object in it.')
object_width = int(input("Enter the width of your object: "))
object_height = int(input("Enter the height of your object: "))

# Find corners
def find_centroids(dst):
    ret, dst = cv2.threshold(dst, 0.01 * dst.max(), 255, 0)
    dst = np.uint8(dst)
    # Find centroids of the connected corner regions
    ret, labels, stats, centroids = cv2.connectedComponentsWithStats(dst)
    # Define the criteria to stop and refine the corners
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.001)
    # Refine corner positions to sub-pixel accuracy
    # (uses the module-level grayscale image defined below)
    corners = cv2.cornerSubPix(gray, np.float32(centroids[1:]), (5, 5),
                               (-1, -1), criteria)
    return corners

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)
dst = cv2.cornerHarris(gray, 5, 3, 0.04)
dst = cv2.dilate(dst, None)

# Get coordinates of the corners
corners = find_centroids(dst)
# Mark the strongest Harris responses in red
image[dst > 0.1 * dst.max()] = [0, 0, 255]
for i in range(len(corners)):
    print("Pixels found for this object are:", corners[i])
    cv2.circle(image, (int(corners[i, 0]), int(corners[i, 1])), 7, (0, 255, 0), 2)
for corner in corners:
    image[int(corner[1]), int(corner[0])] = [0, 0, 255]

print("Number of corners found:", len(corners))
# List to store pixel distances
distance_pixel = []
# List to store mm distances
distance_mm = []

# Assumes exactly four corners were found, ordered so that
# P1-P2 and P3-P4 are the two widths, P1-P3 and P2-P4 the two heights.
P1 = corners[0]
P2 = corners[1]
P3 = corners[2]
P4 = corners[3]

P1P2 = cv2.norm(P2 - P1)
P1P3 = cv2.norm(P3 - P1)
P2P4 = cv2.norm(P4 - P2)
P3P4 = cv2.norm(P4 - P3)

pixelsPerMetric_width1 = P1P2 / object_width
pixelsPerMetric_width2 = P3P4 / object_width
pixelsPerMetric_height1 = P1P3 / object_height
pixelsPerMetric_height2 = P2P4 / object_height

# Average the four pixels-per-mm estimates
pixelsPerMetric = (pixelsPerMetric_width1 + pixelsPerMetric_width2 +
                   pixelsPerMetric_height1 + pixelsPerMetric_height2) / 4
print(pixelsPerMetric)

P1P2_mm = P1P2 / pixelsPerMetric
P1P3_mm = P1P3 / pixelsPerMetric
P2P4_mm = P2P4 / pixelsPerMetric
P3P4_mm = P3P4 / pixelsPerMetric

distance_mm.extend([P1P2_mm, P1P3_mm, P2P4_mm, P3P4_mm])
distance_pixel.extend([P1P2, P1P3, P2P4, P3P4])
Print the distances in pixels and in millimetres, i.e. your widths and heights:
print(distance_pixel)
print(distance_mm)
The pixelsPerMetric value is your scale factor and gives the average number of pixels per mm. You can modify this code to fit your needs.
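One refinement worth considering, since your camera was calibrated with the fisheye model: undistort the corner coordinates before measuring pixel distances. A minimal sketch, assuming the K and D values from the question and two hypothetical corner positions:
import cv2
import numpy as np

# K and D from the question (OpenCV fisheye model).
K = np.array([[438.76709, 0.0, 338.13894],
              [0.0, 440.79169, 246.80081],
              [0.0, 0.0, 1.0]])
D = np.array([-0.098034379506, 0.054022224927,
              -0.046172648829, -0.009039512970])

# Hypothetical distorted pixel coordinates of two corners, shape (N, 1, 2).
pts = np.array([[[250.0, 240.0]], [[410.0, 240.0]]])

# P=K maps the result back to pixel coordinates instead of normalized ones.
undistorted = cv2.fisheye.undistortPoints(pts, K, D, P=K)
pixel_dist = np.linalg.norm(undistorted[0, 0] - undistorted[1, 0])
print("Undistorted pixel distance:", pixel_dist)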