Estimating the pose of a rectangular object is unstable
I am using the following code to find the rotation and translation vectors of a rectangular object. The rectangle is 33 cm high and 44 cm wide, so I create the object points with the code below.
width = 44
height = 33

objPoints = np.array(
    [(0, 0, 0), (width * 0.01, 0, 0), (width * 0.001, -(height * 0.001), 0), (0, -(height * 0.001), 0)]
)
I use the code below to compute the rotation and translation vectors.
def findPose(imagePoints):
    (success, rotation_vector, translation_vector) = cv2.solvePnP(objPoints, imagePoints, camera_matrix,
                                                                  dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)

    print("Rotation Vector:\n {0}".format(rotation_vector))
    print("Translation Vector:\n {0}".format(translation_vector))

    (end_point2D, jacobian) = cv2.projectPoints(np.array([(0.0, 0.0, 1000.0)]), rotation_vector,
                                                translation_vector, camera_matrix, dist_coeffs)
For some reason the result is always wrong. Am I creating the object points correctly?
My guess is that you intended to convert the points from centimeters to meters, but converted the last two points from millimeters instead (a scale of 0.001 rather than 0.01).
I think you meant to use:
objPoints = np.array(
    [(0, 0, 0), (width * 0.01, 0, 0), (width * 0.01, -(height * 0.01), 0), (0, -(height * 0.01), 0)]
)
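To see how far off the original points are, compare the edge lengths of the two quadrilaterals (a quick NumPy check using the numbers above). With the mixed 0.001/0.01 scales the model is not even a rectangle: the top edge is 0.44 m long but the bottom edge is only 0.044 m.

```python
import numpy as np

width, height = 44, 33  # centimeters

# Original (buggy) object points: width scaled by 0.01, height and the
# third corner's x scaled by 0.001
buggy = np.array([(0, 0, 0), (width * 0.01, 0, 0),
                  (width * 0.001, -(height * 0.001), 0), (0, -(height * 0.001), 0)])

# Corrected object points: everything scaled by 0.01 (cm -> m)
fixed = np.array([(0, 0, 0), (width * 0.01, 0, 0),
                  (width * 0.01, -(height * 0.01), 0), (0, -(height * 0.01), 0)])

print(np.linalg.norm(buggy[1] - buggy[0]), np.linalg.norm(buggy[2] - buggy[3]))  # 0.44 vs 0.044
print(np.linalg.norm(fixed[1] - fixed[0]), np.linalg.norm(fixed[2] - fixed[3]))  # 0.44 vs 0.44
```

solvePnP then fits the camera pose to a badly distorted trapezoid instead of the real 44 x 33 cm rectangle, which is enough to make the result look unstable.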
I am not a photogrammetry expert, but I think the solution is "scale invariant", so you could scale the coordinates by 1.0 and get the same result (I am not sure).
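The scale behaviour is easy to check with a plain pinhole model (a sketch with made-up intrinsics and pose, not the setup from the question): scaling the 3D points by a factor s produces identical pixel projections when the translation is also scaled by s. So solvePnP should return the same rotation for any uniform scale, with the translation (and hence the reported distance to the object) scaled by s.

```python
import numpy as np

# Pinhole projection: pixel = perspective_divide(K @ (R @ X + t))
def project(points, R, t, K):
    cam = points @ R.T + t            # transform into the camera frame
    img = cam @ K.T                   # apply the intrinsics
    return img[:, :2] / img[:, 2:]    # perspective divide

K = np.array([[640.0, 0, 320], [0, 640.0, 240], [0, 0, 1]])  # made-up intrinsics
theta = np.deg2rad(10)                # small rotation about the y axis
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([0.1, -0.05, 2.0])       # made-up pose

pts = np.array([(0, 0, 0), (0.44, 0, 0), (0.44, -0.33, 0), (0, -0.33, 0)])

s = 100.0                             # e.g. meters -> centimeters
same = np.allclose(project(pts, R, t, K), project(pts * s, R, t * s, K))
print(same)  # True: scaled model + scaled translation give identical pixels
```

In other words, the pose is only scale-invariant up to the unit of the translation; what must not happen is mixing different scales within one set of object points, as in the question.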
I started from the code sample in Head Pose Estimation using OpenCV and Dlib.
I put the coordinates (after scaling them by 0.01 instead of 0.001) into a MATLAB 3D plot, and rotated the plot to roughly match the head pose from that example.
Here is the code:
import numpy as np
import cv2

width = 44
height = 33

objPoints = np.array(
    #[(0, 0, 0), (width * 0.01, 0, 0), (width * 0.001, -(height * 0.001), 0), (0, -(height * 0.001), 0)]
    [(0, 0, 0), (width * 0.01, 0, 0), (width * 0.01, -(height * 0.01), 0), (0, -(height * 0.01), 0)]
)

# Read image
im = cv2.imread("img.png")
size = im.shape

# 2D image points. If you change the image, you need to change the vector.
# https://www.learnopencv.com/head-pose-estimation-using-opencv-and-dlib/
#image_points = np.array([
#    (359, 391),  # Nose tip
#    (399, 561),  # Chin
#    (337, 297),  # Left eye left corner
#    (513, 301),  # Right eye right corner
#    (345, 465),  # Left mouth corner
#    (453, 469)   # Right mouth corner
#], dtype="double")

image_points = np.array([
    (273, 100),
    (478, 182),
    (313, 275),
    (107, 190)
], dtype="double")

# 3D model points (from the web sample).
# https://www.learnopencv.com/head-pose-estimation-using-opencv-and-dlib/
#objPoints = np.array([
#    (0.0, 0.0, 0.0),           # Nose tip
#    (0.0, -330.0, -65.0),      # Chin
#    (-225.0, 170.0, -135.0),   # Left eye left corner
#    (225.0, 170.0, -135.0),    # Right eye right corner
#    (-150.0, -150.0, -125.0),  # Left mouth corner
#    (150.0, -150.0, -125.0)    # Right mouth corner
#])

# Camera internals (approximated: focal length = image width, principal point = image center)
focal_length = size[1]
center = (size[1]/2, size[0]/2)
camera_matrix = np.array(
    [[focal_length, 0, center[0]],
     [0, focal_length, center[1]],
     [0, 0, 1]], dtype="double"
)

def findPose(imagePoints):
    dist_coeffs = np.zeros((4, 1))  # Assuming no lens distortion
    (success, rotation_vector, translation_vector) = cv2.solvePnP(objPoints, imagePoints, camera_matrix,
                                                                  dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)

    print("Rotation Vector:\n {0}".format(rotation_vector))
    print("Translation Vector:\n {0}".format(translation_vector))

    # Project a 3D point (0, 0, 1000.0) onto the image plane.
    # We use this to draw a line sticking out of the first corner (the "nose" in the original sample).
    (end_point2D, jacobian) = cv2.projectPoints(np.array([(0.0, 0.0, 1000.0)]), rotation_vector,
                                                translation_vector, camera_matrix, dist_coeffs)

    for p in image_points:
        cv2.circle(im, (int(p[0]), int(p[1])), 5, (255, 0, 0), -1)

    p1 = (int(image_points[0][0]), int(image_points[0][1]))
    p2 = (int(end_point2D[0][0][0]), int(end_point2D[0][0][1]))

    cv2.line(im, p1, p2, (0, 255, 0), 3)

findPose(image_points)

# Display image
cv2.imshow("Output", im)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
Your post is missing some information, so I can't really say whether this solution is correct.
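When a solvePnP result looks "always wrong", a quick diagnostic is the reprojection error of the input correspondences: a mismatched corner ordering between objPoints and image_points shows up immediately as a large error. A minimal NumPy sketch (Rodrigues' formula written out by hand so it runs without OpenCV; the pose and intrinsics below are made up for illustration):

```python
import numpy as np

def rodrigues(rvec):
    """Convert a rotation vector to a rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def reprojection_error(obj_pts, img_pts, rvec, tvec, K):
    """Mean pixel distance between observed points and projected model points."""
    cam = obj_pts @ rodrigues(rvec).T + tvec
    proj = cam @ K.T
    proj = proj[:, :2] / proj[:, 2:]
    return np.linalg.norm(proj - img_pts, axis=1).mean()

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
obj = np.array([(0, 0, 0), (0.44, 0, 0), (0.44, -0.33, 0), (0, -0.33, 0)])
rvec = np.array([0.1, -0.2, 0.05])
tvec = np.array([0.0, 0.0, 2.0])

# Synthesize image points from the known pose, then check the error
img = (obj @ rodrigues(rvec).T + tvec) @ K.T
img = img[:, :2] / img[:, 2:]

print(reprojection_error(obj, img, rvec, tvec, K))        # ~0 for the true pose
print(reprojection_error(obj, img[::-1], rvec, tvec, K))  # large if the corner order is wrong
```

If the error for your own rvec/tvec is large in pixels, the problem is in the correspondences (ordering or units of the object points), not in solvePnP itself.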