Angle between 2 ArUco marker planes

I want to measure how much the angle of an ArUco marker deviates from the plane defined by a second, reference ArUco marker.

The reference ArUco marker (M1) is fixed against a flat wall, and the second ArUco marker (M2) stands a few centimeters in front of the same wall. I want to know when marker M2 deviates by more than 10 degrees from the xy plane of M1.

Here is an illustration of the setup:

To do this, I think I should compute the relative rotation between the poses' rvec, as described in this post:

Relative rotation between pose (rvec)

which proposes the following code:

import numpy as np
import cv2


def inversePerspective(rvec, tvec):
    """ Invert the pose given by rvec and tvec. """
    R, _ = cv2.Rodrigues(rvec)
    R = np.matrix(R).T
    invTvec = np.dot(R, np.matrix(-tvec))
    invRvec, _ = cv2.Rodrigues(R)
    return invRvec, invTvec


def relativePosition(rvec1, tvec1, rvec2, tvec2):
    """ Get the pose of marker 1 relative to marker 2. Composing the returned
    rvec & tvec with rvec2 & tvec2 (via composeRT) recovers rvec1 & tvec1. """
    rvec1, tvec1 = rvec1.reshape((3, 1)), tvec1.reshape((3, 1))
    rvec2, tvec2 = rvec2.reshape((3, 1)), tvec2.reshape((3, 1))

    # Invert the pose of the second marker, the right one in the image
    invRvec, invTvec = inversePerspective(rvec2, tvec2)

    info = cv2.composeRT(rvec1, tvec1, invRvec, invTvec)
    composedRvec, composedTvec = info[0], info[1]

    composedRvec = composedRvec.reshape((3, 1))
    composedTvec = composedTvec.reshape((3, 1))
    return composedRvec, composedTvec
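For intuition, here is a numpy-only sketch of what this composition computes. The `rodrigues` helper is a hand-rolled stand-in for `cv2.Rodrigues`, and the two rvec values are made up (identity vs. a 40° turn about Y): composing one pose with the inverse of the other gives the relative rotation, whose angle can be read off the trace of the matrix.

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> rotation matrix (same formula cv2.Rodrigues applies)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# Two hypothetical marker orientations: identity, and a 40-degree turn about Y
rvec1 = np.zeros(3)
rvec2 = np.array([0.0, np.deg2rad(40), 0.0])

R1, R2 = rodrigues(rvec1), rodrigues(rvec2)
R_rel = R2.T @ R1                             # relative rotation (inverse of pose 2, then pose 1)
angle = np.arccos((np.trace(R_rel) - 1) / 2)  # rotation angle of R_rel, in radians
print(np.rad2deg(angle))                      # ~40 degrees
```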

Computing composedRvec, I get the following results:

Both ArUco markers in the same plane (composedRvec values in the top-right corner):

Both ArUco markers at a 90-degree angle:

I don't quite understand the results:

When the markers are in the same plane, a composedRvec of 0, 0, 0 makes sense.

But why is it 0, 1.78, 0 in the second case?

What general condition on the resulting composedRvec should tell me when the angle between the 2 markers exceeds 10 degrees?

Am I following the right strategy with composedRvec?

**** EDIT ****

Result for the 2 markers at a 40° angle in the same xy plane:

||composedRvec|| = sqrt(0.619^2 + 0.529^2 + 0.711^2) = 1.08 rad = 61.87°
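As a quick sanity check of this norm computation (the values are copied from the measurement above):

```python
import numpy as np

composedRvec = np.array([0.619, 0.529, 0.711])  # measured values from the 40-degree test
theta = np.linalg.norm(composedRvec)            # rotation angle in radians (angle-axis)
print(theta, np.rad2deg(theta))                 # ≈ 1.08 rad ≈ 61.9°
```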

**** EDIT 2 ****

Re-measuring the 40° configuration, I found that the values fluctuate a lot even without touching the setup or the lighting. From time to time, I do get the correct values:

||composedRvec|| = sqrt(0.019^2 + 0.012^2 + 0.74^2) = 0.74 rad = 42.4°, which is fairly accurate.

**** EDIT 3 ****

So here is my final code, based on @Gilles-Philippe Paillé's edited answer:

import numpy as np
import cv2
import cv2.aruco as aruco


cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)  # Get the camera source
img_path = 'D:/your_path/'
# FILE_STORAGE_READ
cv_file = cv2.FileStorage(img_path + "camera.yml", cv2.FILE_STORAGE_READ)
matrix_coefficients = cv_file.getNode("K").mat()
distortion_coefficients = cv_file.getNode("D").mat()

nb_markers = 2


def track(matrix_coefficients, distortion_coefficients):
    aruco_dict = aruco.custom_dictionary(nb_markers, 5)
    parameters = aruco.DetectorParameters_create()  # Marker detection parameters

    while True:
        ret, frame = cap.read()
        # Operations on the frame come here
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # Convert to grayscale

        # Lists of ids and the corners belonging to each id
        corners, ids, rejected_img_points = aruco.detectMarkers(
            gray, aruco_dict, parameters=parameters,
            cameraMatrix=matrix_coefficients, distCoeff=distortion_coefficients)

        # Store rz1 and rz2
        R_list = []

        if ids is not None:  # If there are markers found by the detector
            for i in range(0, len(ids)):  # Iterate over the markers
                # Estimate the pose of each marker and return rvec and tvec
                rvec, tvec, markerPoints = aruco.estimatePoseSingleMarkers(
                    corners[i], 0.02, matrix_coefficients, distortion_coefficients)
                (rvec - tvec).any()  # get rid of that nasty numpy value array error

                aruco.drawDetectedMarkers(frame, corners)  # Draw a square around the markers
                aruco.drawAxis(frame, matrix_coefficients, distortion_coefficients,
                               rvec, tvec, 0.01)  # Draw axis

                R, _ = cv2.Rodrigues(rvec)
                # Transpose, then squeeze to a plain array to avoid
                # "ValueError: shapes (1,3) and (1,3) not aligned"
                R = np.squeeze(np.asarray(np.matrix(R).T))
                R_list.append(R[2])  # Z axis of the marker

        if len(R_list) == 2:
            print('##############')
            angle_radians = np.arccos(np.dot(R_list[0], R_list[1]))
            angle_degrees = angle_radians * 180 / np.pi
            print(angle_degrees)

        # Display the resulting frame
        cv2.imshow('frame', frame)
        # Wait 3 seconds for a key press; quit on 'q'
        key = cv2.waitKey(3000) & 0xFF
        if key == ord('q'):
            break


track(matrix_coefficients, distortion_coefficients)

Here are some results:

Red -> actual angle, white -> measured angle

This is outside the scope of the question, but I find that the pose estimation fluctuates a lot. For example, when the 2 markers are against the wall, the values easily jump from 9° to 37° without my touching the system.

The result uses the angle-axis representation: the norm of the vector is the angle of the rotation (what you want), and the direction of the vector is the axis of the rotation.

You are looking for θ = ||composedRvec||. Note that the result is in radians. The condition would then be ||composedRvec|| > 10*π/180.
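This condition can be sketched as a small helper (the name `exceeds_threshold` is made up, and the two sample vectors are the values reported in the question):

```python
import numpy as np

DEG = np.pi / 180

def exceeds_threshold(composedRvec, threshold_deg=10.0):
    """Angle-axis representation: the norm of the vector is the rotation angle in radians."""
    return np.linalg.norm(composedRvec) > threshold_deg * DEG

# Sample values from the two configurations discussed above
print(exceeds_threshold(np.array([0.0, 0.0, 0.0])))       # False: markers coplanar
print(exceeds_threshold(np.array([0.019, 0.012, 0.74])))  # True: ~42 degrees
```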

EDIT: To consider only the angle between the Z axes of the two planes, convert the two rotation vectors rvec1 and rvec2 to matrices and extract the 3rd column of each. The angle is then angle_radians = np.arccos(np.dot(rz1, rz2)).
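A self-contained sketch of this Z-axis comparison, using a numpy Rodrigues formula in place of `cv2.Rodrigues` and two made-up poses (rvec2 is rvec1 tilted 40° about X):

```python
import numpy as np

def rotation_matrix(rvec):
    """Rodrigues formula, standing in for cv2.Rodrigues(rvec)[0]."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# Hypothetical poses: identity, and a 40-degree tilt about X
rvec1 = np.zeros(3)
rvec2 = np.array([np.deg2rad(40), 0.0, 0.0])

rz1 = rotation_matrix(rvec1)[:, 2]  # 3rd column = marker Z axis
rz2 = rotation_matrix(rvec2)[:, 2]
# Clip guards against arccos of values just outside [-1, 1] from rounding
angle_radians = np.arccos(np.clip(np.dot(rz1, rz2), -1.0, 1.0))
print(np.rad2deg(angle_radians))    # ~40 degrees
```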