How can I get Camera Calibration Data on iOS? aka AVCameraCalibrationData
As I understand it, AVCameraCalibrationData is only available over AVCaptureDepthDataOutput. Is that correct?
AVCaptureDepthDataOutput, on the other hand, is only accessible with the iPhone X front camera or the iPhone Plus back camera, or am I mistaken?
What I am trying to do is get the FOV of an AVCaptureVideoDataOutput SampleBuffer. In particular, it should match the selected preset (full HD, Photo, etc.).
You can get AVCameraCalibrationData only from a depth data output or a photo output.
However, if all you need is FOV, you only need part of the info that class offers — the camera intrinsics matrix — and you can get that by itself from AVCaptureVideoDataOutput.
First, set cameraIntrinsicMatrixDeliveryEnabled on the AVCaptureConnection connecting your camera device to the capture session. (Note that you should check cameraIntrinsicMatrixDeliverySupported first; not all capture formats support intrinsics.)
Then, when the video output vends sample buffers, check each sample buffer's attachments for the kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix key. As noted in CMSampleBuffer.h (someone should file a radar about getting this info into the online documentation), the value for that attachment is a CFData encoding a matrix_float3x3, and the (0,0) and (1,1) elements of that matrix are the horizontal and vertical focal length in pixels.
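Since the original question asks for FOV: once you have that intrinsic matrix and the pixel dimensions of the buffer it describes, the field of view follows directly from the focal length. Here is a minimal sketch (my own helper, not part of the original answer; it assumes the matrix was read from the attachment above and that imageWidth is the pixel width of the same sample buffer):
import Foundation
import simd

// Full horizontal field of view in radians.
func horizontalFOV(intrinsics: matrix_float3x3, imageWidth: Float) -> Float {
    let fx = intrinsics.columns.0.x          // element (0,0): horizontal focal length in pixels
    return 2 * atan(imageWidth / (2 * fx))
}
The vertical FOV works the same way, using element (1,1) and the buffer height.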
Background: when asked about camera calibration, a lot of Stack Overflow responses refer to the intrinsics, including the accepted answer to this post, but calibration data generally includes intrinsics, extrinsics, lens distortion, and more — it is all listed here in the iOS documentation. The author mentioned they were really just looking for the FOV, which is available in the sample buffer rather than in the camera calibration data, so in the end I think their question was answered. But if you found this question while looking for the actual camera calibration data, that answer will let you down: as it says, you can only get calibration data under specific conditions, which I will outline below.
Before I get to the rest, I will just say that the accepted answer here is great if you are only looking for the intrinsic matrix — those values can be obtained far more easily (i.e. in a far less strict environment) than the rest. If you are using it for computer vision, which is what I am using it for, sometimes that is all that is needed. But for the really cool stuff you will want more! So I will go on to explain how to get there:
I am going to assume you have general camera app code in place. In that code, when a picture is taken, you are probably going to call a photoOutput function that looks something like this:
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {...
The output parameter has a value you can check to see whether camera calibration is supported, called isCameraCalibrationDataDeliverySupported. For example, to print it out, use something like this:
print("isCameraCalibrationDataDeliverySupported: \(output.isCameraCalibrationDataDeliverySupported)")
Note, per the documentation I linked, that it is only supported in specific cases:
"This property's value can be true only when the
isDualCameraDualPhotoDeliveryEnabled property is true. To enable
camera calibration delivery, set the
isCameraCalibrationDataDeliveryEnabled property in a photo settings
object."
So this is important: pay attention to it to avoid unnecessary frustration. Use the actual value to debug and make sure you have the proper environment enabled.
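To make that note concrete, here is a minimal sketch of requesting calibration data with a capture. This is my own illustration rather than the author's code; it assumes photoOutput is an AVCapturePhotoOutput already attached to a running session whose input is a dual or TrueDepth camera, that depth delivery has been enabled on the output as described further down, and that self conforms to AVCapturePhotoCaptureDelegate:
let settings = AVCapturePhotoSettings()
// Calibration delivery rides along with depth (or dual-photo) delivery,
// so that has to be enabled on the output beforehand.
settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
if photoOutput.isCameraCalibrationDataDeliverySupported {
    settings.isCameraCalibrationDataDeliveryEnabled = true
}
photoOutput.capturePhoto(with: settings, delegate: self)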
With all of that in place, you then get the actual camera calibration data from:
photo.cameraCalibrationData
You can just pull that object out and grab the specific values you are after, such as:
photo.cameraCalibrationData?.extrinsicMatrix
photo.cameraCalibrationData?.intrinsicMatrix
photo.cameraCalibrationData?.lensDistortionCenter
etc.
Basically everything that is listed in the documentation I linked above.
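Put together, those reads live in the delegate callback shown earlier. A rough sketch of my own, not the author's code:
func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photo: AVCapturePhoto,
                 error: Error?) {
    guard let calibration = photo.cameraCalibrationData else {
        print("no calibration data delivered")
        return
    }
    print(calibration.intrinsicMatrix)                     // 3x3 camera matrix
    print(calibration.extrinsicMatrix)                     // rotation and translation
    print(calibration.lensDistortionCenter)                // CGPoint
    print(calibration.intrinsicMatrixReferenceDimensions)  // image size the intrinsics refer to
}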
Here is a more complete/updated code example in Swift 5, put together from the previous answers. This gets you the camera intrinsic matrix for an iPhone:
// session setup
captureSession = AVCaptureSession()

// NOTE: the video connection used below only exists once a camera input has
// been added as well, e.g. (back wide-angle camera, as an example):
if let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
   let input = try? AVCaptureDeviceInput(device: device) {
    captureSession?.addInput(input)
}

let captureVideoDataOutput = AVCaptureVideoDataOutput()
captureSession?.addOutput(captureVideoDataOutput)

// enable the flag
if #available(iOS 11.0, *) {
    captureVideoDataOutput.connection(with: .video)?.isCameraIntrinsicMatrixDeliveryEnabled = true
} else {
    // ...
}

// `isCameraIntrinsicMatrixDeliveryEnabled` should be set before this
captureSession?.startRunning()
Now, inside AVCaptureVideoDataOutputSampleBufferDelegate.captureOutput(...):
if #available(iOS 11.0, *) {
    if let camData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil) as? Data {
        // The attachment is a CFData wrapping a matrix_float3x3
        let matrix: matrix_float3x3 = camData.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
        print(matrix)
        // > simd_float3x3(columns: (SIMD3<Float>(1599.8231, 0.0, 0.0), SIMD3<Float>(0.0, 1599.8231, 0.0), SIMD3<Float>(539.5, 959.5, 1.0)))
    }
} else {
    // ...
}
Apple actually has a decent write-up on setting this up here:
https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/capturing_photos_with_depth
An important note that I have not seen anywhere other than the Apple docs:
To capture depth maps, you’ll need to first select a builtInDualCamera or builtInTrueDepthCamera capture device as your session’s video input. Even if an iOS device has a dual camera or TrueDepth camera, selecting the default back- or front-facing camera does not enable depth capture.
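For reference, selecting such a device in Swift might look like the sketch below (my own example, preferring the back dual camera and falling back to the front TrueDepth camera):
// Sketch: pick a depth-capable capture device, per the note above.
let depthCapableDevice = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back)
    ?? AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front)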
Not an answer, but...
It has been three weeks since I started working with this code to build a Flutter plugin with depth support; here is a quick recap of the painful trial and error that got me to a working PoC:
(my apologies for the code quality, it is also my first time with Objective-C)
- iOS has a large number of cameras (hardware combinations), and only a subset supports depth data. Once you have discovered your devices:
AVCaptureDeviceDiscoverySession *discoverySession = [AVCaptureDeviceDiscoverySession
    discoverySessionWithDeviceTypes:deviceTypes
                          mediaType:AVMediaTypeVideo
                           position:AVCaptureDevicePositionUnspecified];
you can ask each of them about its depth capability:
for (AVCaptureDevice *device in discoverySession.devices) {
    BOOL depthDataCapable;
    if (@available(iOS 11.0, *)) {
        AVCaptureDeviceFormat *activeDepthDataFormat = [device activeDepthDataFormat];
        depthDataCapable = (activeDepthDataFormat != nil);
        NSLog(@" -- %@ supports DepthData: %s", [device localizedName],
              depthDataCapable ? "true" : "false");
    } else {
        depthDataCapable = false;
    }
}
On an iPhone 12:
-- Front TrueDepth Camera supports DepthData: true
-- Back Dual Wide Camera supports DepthData: true
-- Back Ultra Wide Camera supports DepthData: false
-- Back Camera supports DepthData: false
-- Front Camera supports DepthData: false
P.S. Historically, front cameras have tended to be of worse quality than back cameras, but for depth capture you cannot beat the TrueDepth camera with its infrared projector/scanner.
Now that you know which cameras can do the job, you need to select a capable camera and enable depth:
(blank lines are code omissions; this is not a complete example)
// this is in your 'post-select-camera' initialization
_captureSession = [[AVCaptureSession alloc] init];
// cameraName is not the localizedName
_captureDevice = [AVCaptureDevice deviceWithUniqueID:cameraName];

// this is in your camera controller initialization
// enable depth delivery in AVCapturePhotoOutput
_capturePhotoOutput = [AVCapturePhotoOutput new];
[_captureSession addOutput:_capturePhotoOutput];

// BOOL depthDataSupported is a property of the controller
_depthDataSupported = [_capturePhotoOutput isDepthDataDeliverySupported];
if (_depthDataSupported) {
    [_capturePhotoOutput setDepthDataDeliveryEnabled:YES];
}
// this is in your capture method
// enable depth delivery in AVCapturePhotoSettings
AVCapturePhotoSettings *settings = [AVCapturePhotoSettings photoSettings];
if (@available(iOS 11.0, *)) {
    if (_depthDataSupported) {
        [settings setDepthDataDeliveryEnabled:YES];
    }
}

// Here I use a try/catch because even depth capable and enabled cameras can crash if settings are not correct.
// For example a very high picture resolution seems to throw an exception, and this might be a different limit for different phone models.
// I am sure this information is somewhere I haven't looked yet.
@try {
    [_capturePhotoOutput capturePhotoWithSettings:settings delegate:photoDelegate];
} @catch (NSException *e) {
    [settings setDepthDataDeliveryEnabled:NO];
    [_capturePhotoOutput capturePhotoWithSettings:settings delegate:photoDelegate];
}
// after you took a photo and
// didFinishProcessingPhoto:(AVCapturePhoto *)photo was invoked
AVDepthData *depthData = [photo depthData];
if (depthData != nil) {
    AVCameraCalibrationData *calibrationData = [depthData cameraCalibrationData];
    CGFloat pixelSize = [calibrationData pixelSize];
    matrix_float3x3 intrinsicMatrix = [calibrationData intrinsicMatrix];
    CGSize referenceDimensions = [calibrationData intrinsicMatrixReferenceDimensions];

    // now do what you need to do - I need to transform that to 16bit, Grayscale, Tiff, and it starts like this...
    if (depthData.depthDataType != kCVPixelFormatType_DepthFloat16) {
        depthData = [depthData depthDataByConvertingToDepthDataType:kCVPixelFormatType_DepthFloat16];
    }

    // DON'T FORGET HIT LIKE AND SUBSCRIBE FOR MORE BAD CODE!!! :P
}