How to get an exact size frame from an image produced by AVCaptureSession?

I am developing an application that captures a specific view inside an AVCaptureSession, as shown in the image above.

I am using AVCaptureStillImageOutput to take a still image from the AVCaptureSession. The problem is that the image I get back has a fixed size, ({2448, 3264}). My solution is to resize this image to the same frame as my background view, so that both share the same coordinates and frame.

Using imageWithImage, I resize the capture to the same frame as the captureView, and everything works: resizedImage ends up at {768, 1024}, the same size as the AVCaptureSession preview.

From there, based on these coordinates, I tried to crop the image with CGImageCreateWithImageInRect, using the frame of the captureView (the green view).

The output image I get is off. My question: is there a better way than CGImageCreateWithImageInRect to capture the exact view I want from the image returned by the AVCaptureSession? Is there a better way to achieve what I am after? Any help would be appreciated. Thanks in advance!

    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
                videoConnection = connection;

                //Handle orientation for video: mirror the preview
                //layer's current orientation onto the still-image
                //connection so the captured photo matches the preview.
                if (videoConnection.supportsVideoOrientation) {
                    videoConnection.videoOrientation = captureVideoPreviewLayer.connection.videoOrientation;
                }
                break;
            }
        }
        if (videoConnection) { break; }
    }

    NSLog(@"about to request a capture from: %@", stillImageOutput);
    __weak typeof(self) weakSelf = self;
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {

        if (error != nil || imageSampleBuffer == NULL) {
            NSLog(@"Still image capture failed: %@", error);
            return;
        }

        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:imageData];
        UIImage *resizedImage = [weakSelf imageWithImage:image scaledToSize:outputImageView.frame.size];

        //Screenshot of captureView frame (green view).
        //Note: CGImageCreateWithImageInRect expects a rect in pixels,
        //while captureView.frame is in points.
        CGRect captureFrame = captureView.frame;
        CGImageRef cropRef = CGImageCreateWithImageInRect(resizedImage.CGImage, captureFrame);
        UIImage *cropImage = [UIImage imageWithCGImage:cropRef];
        CGImageRelease(cropRef); //the Create call returns a +1 reference

        //UIKit must only be touched on the main thread; this completion
        //handler is not guaranteed to run there.
        dispatch_async(dispatch_get_main_queue(), ^{
            //Image view to test screenshot of AVCaptureSession
            outputImageView.image = resizedImage;

            //Image view to test cropped image
            sampleImageView.image = cropImage;

            //Hide indicator
            [weakSelf hideActivityView];
        });
    }];

- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    //UIGraphicsBeginImageContext(newSize);
    // In next line, pass 0.0 to use the current device's pixel scaling factor (and thus account for Retina resolution).
    // Pass 1.0 to force exact pixel size.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

The code above is the method I use to capture the image.
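
For context on why the crop comes out misaligned: CGImageCreateWithImageInRect works in the image's pixel coordinate space, while captureView.frame is in points, so on a Retina device the two differ by the screen scale. Below is a minimal sketch of a crop helper that converts a point rect into pixels first; the cropImage:toPointRect: name is hypothetical, not part of the original code.

    // Hypothetical helper: crop a UIImage using a rect expressed in
    // points, converting the rect into the image's pixel space first.
    - (UIImage *)cropImage:(UIImage *)image toPointRect:(CGRect)rect {
        CGFloat scale = image.scale; // e.g. 2.0 on a Retina device
        CGRect pixelRect = CGRectMake(rect.origin.x * scale,
                                      rect.origin.y * scale,
                                      rect.size.width * scale,
                                      rect.size.height * scale);
        CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, pixelRect);
        UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                               scale:scale
                                         orientation:image.imageOrientation];
        CGImageRelease(croppedRef); // +1 reference from the Create call
        return cropped;
    }

With a helper like this, the resized image can keep its Retina scale and the crop still lines up with the on-screen frame.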

As noted in the comments, I solved my problem by changing the pixel scale factor to 1.0.

- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    //UIGraphicsBeginImageContext(newSize);
    // In next line, pass 0.0 to use the current device's pixel scaling factor (and thus account for Retina resolution).
    // Pass 1.0 to force exact pixel size.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
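
An alternative, if you would rather crop the full-resolution photo instead of forcing a 1.0 scale, is to let the preview layer do the coordinate mapping. AVCaptureVideoPreviewLayer's metadataOutputRectOfInterestForRect: (available since iOS 6) converts a rect in layer coordinates into a normalized {0..1} rect in the capture output's coordinate space, which can then be scaled up to the photo's pixel dimensions. A rough sketch, assuming image is the full-resolution UIImage from the completion handler and captureView.frame sits in the preview layer's superlayer coordinates; orientation handling is simplified, so verify it against your own setup:

    //Map the green view's frame through the preview layer into the
    //full-resolution photo, then crop.
    CGRect layerRect = [captureVideoPreviewLayer.superlayer convertRect:captureView.frame
                                                                toLayer:captureVideoPreviewLayer];
    CGRect normalizedRect = [captureVideoPreviewLayer metadataOutputRectOfInterestForRect:layerRect];

    CGFloat pixelWidth  = CGImageGetWidth(image.CGImage);
    CGFloat pixelHeight = CGImageGetHeight(image.CGImage);
    CGRect pixelRect = CGRectMake(normalizedRect.origin.x * pixelWidth,
                                  normalizedRect.origin.y * pixelHeight,
                                  normalizedRect.size.width * pixelWidth,
                                  normalizedRect.size.height * pixelHeight);

    CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, pixelRect);
    UIImage *cropImage = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef);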