Capturing still image from AVFoundation that matches viewfinder border on AVCaptureVideoPreviewLayer in Swift

After taking a photo, I'm trying to capture only what is inside the green viewfinder.

Please see the images:

This is what the code is currently doing:

Before taking the photo:

After taking the photo (the resulting image is scaled incorrectly, because it doesn't match what was inside the green viewfinder):

As you can see, the image needs to be scaled up to match what was originally contained in the green viewfinder. Even when I calculate what should be the correct scale (for an iPhone 6, I need to multiply the captured image's dimensions by 1.334), it doesn't work.

Any ideas?

Here are the steps that solved this problem:

First, get the full-size image. I also use a UIImage class extension called "correctlyOriented".

let correctImage = UIImage(data: imageData!)!.correctlyOriented()

All this does is un-rotate the iPhone image, so that a portrait image (taken with the home button at the bottom of the iPhone) is oriented as expected. The extension is as follows:

extension UIImage {

    func correctlyOriented() -> UIImage {

        if imageOrientation == .up {
            return self
        }

        // We need to calculate the proper transformation to make the image upright.
        // We do it in 2 steps: rotate if Left/Right/Down, then flip if Mirrored.
        var transform = CGAffineTransform.identity

        switch imageOrientation {
        case .down, .downMirrored:
            transform = transform.translatedBy(x: size.width, y: size.height)
            transform = transform.rotated(by: CGFloat.pi)
        case .left, .leftMirrored:
            transform = transform.translatedBy(x: size.width, y: 0)
            transform = transform.rotated(by: CGFloat.pi * 0.5)
        case .right, .rightMirrored:
            transform = transform.translatedBy(x: 0, y: size.height)
            transform = transform.rotated(by: -CGFloat.pi * 0.5)
        default:
            break
        }

        switch imageOrientation {
        case .upMirrored, .downMirrored:
            transform = transform.translatedBy(x: size.width, y: 0)
            transform = transform.scaledBy(x: -1, y: 1)
        case .leftMirrored, .rightMirrored:
            transform = transform.translatedBy(x: size.height, y: 0)
            transform = transform.scaledBy(x: -1, y: 1)
        default:
            break
        }

        // Now we draw the underlying CGImage into a new context, applying the
        // transform calculated above.
        guard
            let cgImage = cgImage,
            let colorSpace = cgImage.colorSpace,
            let context = CGContext(data: nil,
                                    width: Int(size.width),
                                    height: Int(size.height),
                                    bitsPerComponent: cgImage.bitsPerComponent,
                                    bytesPerRow: 0,
                                    space: colorSpace,
                                    bitmapInfo: cgImage.bitmapInfo.rawValue) else {
            return self
        }

        context.concatenate(transform)

        switch imageOrientation {
        case .left, .leftMirrored, .right, .rightMirrored:
            // The drawing space is rotated for these orientations, so swap width and height.
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: size.height, height: size.width))
        default:
            context.draw(cgImage, in: CGRect(origin: .zero, size: size))
        }

        // And now we just create a new UIImage from the drawing context
        guard let rotatedCGImage = context.makeImage() else {
            return self
        }

        return UIImage(cgImage: rotatedCGImage)
    }
}

Next, calculate the height factor:

let heightFactor = self.view.frame.height / correctImage.size.height

Create a new CGSize based on the height factor, then resize the image (using a function that resizes an image, not shown):

let newSize = CGSize(width: correctImage.size.width * heightFactor, height: correctImage.size.height * heightFactor)

let correctResizedImage = self.imageWithImage(image: correctImage, scaledToSize: newSize)
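The original resizing helper isn't shown. A minimal sketch of what `imageWithImage(image:scaledToSize:)` might look like (this is an assumption based on UIGraphicsImageRenderer, not the author's implementation):

```swift
import UIKit

// Hypothetical implementation of the resizing helper used above.
func imageWithImage(image: UIImage, scaledToSize newSize: CGSize) -> UIImage {
    // Use scale 1 so the returned image's point size equals its pixel size,
    // matching the arithmetic done with newSize above.
    let format = UIGraphicsImageRendererFormat.default()
    format.scale = 1
    let renderer = UIGraphicsImageRenderer(size: newSize, format: format)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: newSize))
    }
}
```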

Now, because of the 4:3 aspect ratio of the iPhone camera versus the 16:9 aspect ratio of the iPhone screen, we have an image that is the same height as the device screen, but wider. So, crop the image down to the same size as the device screen:

let screenCrop: CGRect = CGRect(x: (newSize.width - self.view.bounds.width) * 0.5,
                                y: 0,
                                width: self.view.bounds.width,
                                height: self.view.bounds.height)


var correctScreenCroppedImage = self.crop(image: correctResizedImage, to: screenCrop)
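The `crop(image:to:)` helper isn't shown either. A sketch of one possible implementation, assuming the image has scale 1 (as produced by the resize step), so the crop rect maps directly onto the underlying CGImage:

```swift
import UIKit

// Hypothetical implementation of the crop helper used above.
func crop(image: UIImage, to rect: CGRect) -> UIImage? {
    // CGImage.cropping(to:) works in pixel coordinates; with scale 1,
    // point and pixel coordinates coincide.
    guard let croppedCGImage = image.cgImage?.cropping(to: rect) else {
        return nil
    }
    return UIImage(cgImage: croppedCGImage,
                   scale: image.scale,
                   orientation: image.imageOrientation)
}
```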

Finally, we need to replicate the "crop" created by the green "viewfinder". So, perform one more crop so that the final image matches:

let correctCrop: CGRect = CGRect(x: 0,
                                 y: (correctScreenCroppedImage!.size.height * 0.5) - (correctScreenCroppedImage!.size.width * 0.5),
                                 width: correctScreenCroppedImage!.size.width,
                                 height: correctScreenCroppedImage!.size.width)

var correctCroppedImage = self.crop(image: correctScreenCroppedImage!, to: correctCrop)
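To make the arithmetic concrete, here is a hypothetical walk-through of the rects for an iPhone 6 (375 × 667 pt screen); the 4:3 portrait capture resolution of 2448 × 3264 is an assumption:

```swift
import CoreGraphics

// Assumed dimensions: iPhone 6 screen and a 4:3 portrait capture.
let screen = CGSize(width: 375, height: 667)
let capture = CGSize(width: 2448, height: 3264)

let heightFactor = screen.height / capture.height            // ≈ 0.204
let newSize = CGSize(width: capture.width * heightFactor,    // ≈ 500 x 667
                     height: capture.height * heightFactor)

// Trim the extra width equally on both sides: x ≈ (500 - 375) / 2 ≈ 63
let screenCrop = CGRect(x: (newSize.width - screen.width) * 0.5,
                        y: 0,
                        width: screen.width,
                        height: screen.height)

// Square viewfinder crop, centered vertically: y = (667 - 375) / 2 = 146
let correctCrop = CGRect(x: 0,
                         y: (screen.height * 0.5) - (screen.width * 0.5),
                         width: screen.width,
                         height: screen.width)
```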

Credit for this answer goes to @damirstuhec.