Camera calibration, focal length value seems too large

I tried to do camera calibration with Python and OpenCV to find the camera matrix. I used the following code from this link:

https://automaticaddison.com/how-to-perform-camera-calibration-using-opencv/

import cv2 # Import the OpenCV library to enable computer vision
import numpy as np # Import the NumPy scientific computing library
import glob # Used to retrieve files that match a specified pattern
 
# Path to the image that you want to undistort
distorted_img_filename = r'C:\Users\uid20832.jpg'
 
# Chessboard dimensions
number_of_squares_X = 10 # Number of chessboard squares along the x-axis
number_of_squares_Y = 7  # Number of chessboard squares along the y-axis
nX = number_of_squares_X - 1 # Number of interior corners along x-axis
nY = number_of_squares_Y - 1 # Number of interior corners along y-axis
 
# Store vectors of 3D points for all chessboard images (world coordinate frame)
object_points = []
 
# Store vectors of 2D points for all chessboard images (camera coordinate frame)
image_points = []
 
# Set termination criteria. We stop either when an accuracy is reached or when
# we have finished a certain number of iterations.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
 
# Define real world coordinates for points in the 3D coordinate frame
# Object points are (0,0,0), (1,0,0), (2,0,0) ...., (5,8,0)
object_points_3D = np.zeros((nX * nY, 3), np.float32)       
 
# These are the x and y coordinates                                              
object_points_3D[:,:2] = np.mgrid[0:nY, 0:nX].T.reshape(-1, 2) 
 
def main():
     
  # Get the file path for images in the current directory
  images = glob.glob(r'C:\Users\Kalibrierung\*.jpg')
     
  # Go through each chessboard image, one by one
  for image_file in images:
  
    # Load the image
    image = cv2.imread(image_file)  
 
    # Convert the image to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  
 
    # Find the corners on the chessboard
    success, corners = cv2.findChessboardCorners(gray, (nY, nX), None)
     
    # If the corners are found by the algorithm, draw them
    if success:
 
      # Append object points
      object_points.append(object_points_3D)
 
      # Find more exact corner pixels       
      corners_2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)       
       
      # Append the refined image points
      image_points.append(corners_2)
 
      # Draw the corners
      cv2.drawChessboardCorners(image, (nY, nX), corners_2, success)
 
      # Display the image. Used for testing.
      #cv2.imshow("Image", image) 
     
      # Display the window for a short period. Used for testing.
      #cv2.waitKey(200) 
                                                                                                                     
  # Now take a distorted image and undistort it 
  distorted_image = cv2.imread(distorted_img_filename)
 
  # Perform camera calibration to return the camera matrix, distortion coefficients, rotation and translation vectors etc 
  ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
      object_points, image_points, gray.shape[::-1], None, None)

  # Print the camera matrix; the focal lengths in pixels are fx = mtx[0][0] and fy = mtx[1][1]
  print("Camera matrix:\n", mtx)

main()

But I think I always get wrong parameters. After calibration, my focal length comes out at about 1750 in both the x and y direction. I don't think that can be correct: the camera documentation says the focal length is between 4 and 7 mm. I'm not sure why the calibrated value is so high. Here are some of my calibration pictures; maybe something is wrong with them. I moved the chessboard under the camera to different positions, angles, and heights.

I'm also wondering why I don't need the size of the squares in the code. Can someone explain that to me, or did I forget that input?

Your misunderstanding is about "focal length". It's an overloaded term.

  • "Focal length" of the optics (in mm): describes the distance between the lens plane and the image/sensor plane
  • "Focal length" in the camera matrix (in pixels): a scale factor that maps the real world onto a picture of a certain resolution

1750 may very well be correct, if you have a picture of high resolution (Full HD or similar).

The calculation goes like this:

f [pixels] = (focal length [mm]) / (pixel pitch [µm / pixel])

(take care of the units and prefixes: 1 mm = 1000 µm)

Example: a Pixel 4a phone, with a pixel pitch of 1.40 µm and a focal length of 4.38 mm, gives f = ~3128.57 (= fx = fy).
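As a quick sanity check, here is that same conversion as a minimal Python sketch; the 1.40 µm and 4.38 mm values are just the Pixel 4a numbers quoted above, so substitute your own sensor's pixel pitch and lens focal length:

focal_length_mm = 4.38   # lens focal length from the spec sheet (Pixel 4a example)
pixel_pitch_um = 1.40    # sensor pixel pitch in micrometers (Pixel 4a example)

# Convert mm to µm first, then divide by the pitch to get the focal length in pixels
f_pixels = (focal_length_mm * 1000.0) / pixel_pitch_um
print(f_pixels)  # ~3128.57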

Another example: the Pixel 4a has a diagonal field of view of roughly 77.7 degrees and a resolution of 4032 x 3024 pixels, so that is 5040 pixels on the diagonal. You can calculate:

f = (5040 / 2) / tan(~77.7° / 2)

f = ~3128.6 [pixels]

You can apply that calculation to any camera for which you know the field of view and the picture size. If the diagonal resolution is ambiguous, use the horizontal FoV and the horizontal resolution instead. That can happen if the sensor isn't 16:9 but the video you get from it is cropped to 16:9, assuming the crop only cuts vertically and leaves the horizontal extent alone.
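The field-of-view variant as a small sketch (again using the Pixel 4a figures from above; swap in your own camera's FoV and resolution):

import math

fov_deg = 77.7                    # diagonal field of view in degrees (Pixel 4a example)
width_px, height_px = 4032, 3024  # image resolution in pixels (Pixel 4a example)

# Length of the image diagonal in pixels (5040 for 4032 x 3024)
diagonal_px = math.hypot(width_px, height_px)

# f = (diagonal / 2) / tan(FoV / 2)
f_pixels = (diagonal_px / 2) / math.tan(math.radians(fov_deg) / 2)
print(f_pixels)  # ~3128.6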


Why don't you need the size of the chessboard squares in this code? Because it only calibrates the intrinsic parameters (camera matrix and distortion coefficients). Those do not depend on the distance to the board, or to any other object in the scene.

If you were to calibrate extrinsic parameters, i.e. the distance between the cameras of a stereo setup, then you would need to give the size of the squares.
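To make that concrete: if you do want metric extrinsics, scale the object points by the square size before calling calibrateCamera. A minimal sketch, assuming a hypothetical square_size_mm and reusing object_points_3D, image_points and gray from the question's code; the camera matrix comes out (essentially) the same, only the translation vectors change scale:

square_size_mm = 25.0  # hypothetical edge length of one chessboard square, in mm

# Scale the unit grid (0,0,0), (1,0,0), ... into real-world millimeters
object_points_mm = [object_points_3D * square_size_mm for _ in image_points]

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    object_points_mm, image_points, gray.shape[::-1], None, None)

# mtx and dist are (up to numerical noise) unchanged compared to the unit-square
# calibration, but tvecs are now expressed in millimeters instead of "squares".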