AVCapture / AVCaptureVideoPreviewLayer troubles getting the correct visible image

I'm currently having some huge trouble getting what I want out of AVCapture / AVCaptureVideoPreviewLayer and friends.

I'm building an app (for iPhone, though it would be nice if it also ran on iPad) that shows a small live preview of the camera in the middle of my view, as in the picture below:

To do that, and to keep the camera's aspect ratio, I use this configuration:

rgbaImage = nil;

NSArray *possibleDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *device = [possibleDevices firstObject];
if (!device) return;

AVCaptureSession *session = [[AVCaptureSession alloc] init];
self.captureSession = session;
self.captureDevice = device;

NSError *error = nil;
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if( !input )
{
    [[[UIAlertView alloc] initWithTitle:NSLocalizedString(@"NoCameraAuthorizationTitle", nil)
                                message:NSLocalizedString(@"NoCameraAuthorizationMsg", nil)
                               delegate:self
                      cancelButtonTitle:NSLocalizedString(@"OK", nil)
                      otherButtonTitles:nil] show];
    return;
}

[session beginConfiguration];
session.sessionPreset = AVCaptureSessionPresetPhoto;
[session addInput:input];

AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES];
[dataOutput setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey:@(kCVPixelFormatType_32BGRA)}];

[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[session addOutput:dataOutput];

self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[session addOutput:self.stillImageOutput];

connection = [dataOutput.connections firstObject];
[self setupCameraOrientation];

NSError *errorLock;
if ([device lockForConfiguration:&errorLock])
{
    // Frame rate
    device.activeVideoMinFrameDuration = CMTimeMake((int64_t)1, (int32_t)FPS);
    device.activeVideoMaxFrameDuration = CMTimeMake((int64_t)1, (int32_t)FPS);

    AVCaptureFocusMode focusMode = AVCaptureFocusModeContinuousAutoFocus;
    AVCaptureExposureMode exposureMode = AVCaptureExposureModeContinuousAutoExposure;

    CGPoint point = CGPointMake(0.5, 0.5);
    if ([device isAutoFocusRangeRestrictionSupported])
    {
        device.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
    }
    if ([device isFocusPointOfInterestSupported] && [device isFocusModeSupported:focusMode])
    {
        [device setFocusPointOfInterest:point];
        [device setFocusMode:focusMode];
    }
    if ([device isExposurePointOfInterestSupported] && [device isExposureModeSupported:exposureMode])
    {
        [device setExposurePointOfInterest:point];
        [device setExposureMode:exposureMode];
    }
    if ([device isLowLightBoostSupported])
    {
        device.automaticallyEnablesLowLightBoostWhenAvailable = YES;
    }
    [device unlockForConfiguration];
}

if (device.isFlashAvailable)
{
    [device lockForConfiguration:nil];
    [device setFlashMode:AVCaptureFlashModeOff];
    [device unlockForConfiguration];

    if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
    {
        [device lockForConfiguration:nil];
        [device setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
        [device unlockForConfiguration];
    }
}

previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = self.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.layer insertSublayer:previewLayer atIndex:0];

[session commitConfiguration];
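
A side note on the setup above (my addition, not in the original code): AVCaptureSession can refuse an input or output, so it is safer to guard the addInput/addOutput calls with the corresponding canAdd checks:

// Defensive variant of the addInput/addOutput calls above.
if ([session canAddInput:input]) {
    [session addInput:input];
}
if ([session canAddOutput:dataOutput]) {
    [session addOutput:dataOutput];
}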

As you can see, I'm using AVLayerVideoGravityResizeAspectFill to make sure the preview keeps the correct aspect ratio.

My troubles start here. I've tried a lot of things but never really succeeded. My goal is to obtain a picture equivalent to what the user sees in the previewLayer, knowing that the image delivered in the video frames is larger than the one visible in the preview.

I tried 3 approaches:

1) Manual computation: since I know the video frame size, my screen size, and the layer's size and position, I tried to compute the ratios between them and use those to find the equivalent position in the video frame. I found that the video frame (sampleBuffer) is measured in pixels, whereas the position I get from the mainScreen bounds is in Apple's points and has to be multiplied by the screen scale to get pixels. That gave me my ratios, under the assumption that the video frame size matches the device's actual full-screen size.

--> This actually gives me a very good result on the iPad: height and width are fine, but the (x, y) origin is shifted a bit from the expected one... (in detail: if I subtract 72 pixels from the position I computed, I get a good output)

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)avConnection
{
    if (self.forceStop) return;
    if (_isStopped || _isCapturing || !CMSampleBufferIsValid(sampleBuffer)) return;

    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    __block CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    CGRect rect = image.extent;

    // Screen size in points (the commented-out factor converts to pixels)
    CGRect screenRect = [[UIScreen mainScreen] bounds];
    CGFloat screenWidth = screenRect.size.width/* * [UIScreen mainScreen].scale*/;
    CGFloat screenHeight = screenRect.size.height/* * [UIScreen mainScreen].scale*/;
    NSLog(@"%f, %f ---", screenWidth, screenHeight);

    // Ratios between the buffer size and the screen size
    float myRatio = (rect.size.height / screenHeight);
    float myRatioW = (rect.size.width / screenWidth);
    NSLog(@"Ratio w :%f h:%f ---", myRatioW, myRatio);

    // Project the preview layer's origin into the buffer's coordinates
    CGPoint p = [captureViewControler.view convertPoint:previewLayer.frame.origin toView:nil];
    NSLog(@"-Av-> %f, %f --> %f, %f", p.x, p.y, self.bounds.size.height, self.bounds.size.width);
    rect.origin = CGPointMake(p.x * myRatioW, p.y * myRatio);

    NSLog(@"%f, %f ----> %f %f", rect.origin.x, rect.origin.y, rect.size.width, rect.size.height);
    NSLog(@"%f", previewLayer.frame.size.height * (rect.size.height / screenHeight));
    rect.size = CGSizeMake(rect.size.width, previewLayer.frame.size.height * myRatio);

    image = [image imageByCroppingToRect:rect];
    its = [ImageUtils cropImageToRect:uiImage(sampleBuffer) toRect:rect];
    NSLog(@"--------------------------------------------");
    [captureViewControler sendToPreview:its];
}
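
A plausible explanation for that constant offset (my own reasoning, not confirmed in the original): with AVLayerVideoGravityResizeAspectFill the scaled buffer overflows the layer and the overflow is cropped symmetrically, so layer coordinates and buffer coordinates differ by a fixed, centered offset. A minimal sketch of that offset, assuming a centered aspect-fill crop:

// Hypothetical helper: offset between the layer's (0,0) and the
// corresponding point in the buffer, in buffer pixels, assuming the
// buffer is scaled with aspect-fill and cropped around its center.
static CGPoint AspectFillOffsetInBufferPixels(CGSize buffer, CGSize layer)
{
    // Aspect-fill scales by the larger of the two axis ratios.
    CGFloat scale = MAX(layer.width / buffer.width, layer.height / buffer.height);
    // Half the cropped overflow, converted from layer points back to buffer pixels.
    return CGPointMake(((buffer.width  * scale - layer.width)  / 2.0) / scale,
                       ((buffer.height * scale - layer.height) / 2.0) / scale);
}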

2) Using still-image capture: this approach actually works, as long as I'm on an iPad. The real trouble is that I feed an image library with these cropped frames, and captureStillImageAsynchronouslyFromConnection plays the system shutter sound for every picture (I read plenty about "solutions", like playing another sound to mask it, but none of them worked, and none fixed the freeze that accompanies it on the iPhone 6), so this approach seems unsuitable.

AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connect in self.stillImageOutput.connections)
{
    for (AVCaptureInputPort *port in [connect inputPorts])
    {
        if ([[port mediaType] isEqual:AVMediaTypeVideo] )
        {
            videoConnection = connect;
            break;
        }
    }
    if (videoConnection) { break; }
}

[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                                   completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
 {
     if (error)
     {
         NSLog(@"Take picture failed");
     }
     else
     {
         NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
         UIImage *takenImage = [UIImage imageWithData:jpegData];

         CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
         NSLog(@"image cropped : %@", NSStringFromCGRect(outputRect));
         CGImageRef takenCGImage = takenImage.CGImage;
         size_t width = CGImageGetWidth(takenCGImage);
         size_t height = CGImageGetHeight(takenCGImage);
         NSLog(@"Size cropped : w: %zu h: %zu", width, height);
         CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
         NSLog(@"final cropped : %@", NSStringFromCGRect(cropRect));

         CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
         takenImage = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
         CGImageRelease(cropCGImage);

         its = [ImageUtils rotateUIImage:takenImage];
         image = [[CIImage alloc] initWithImage:its];
     }
 }];
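
As a side note, the nested loop above that searches for the video connection can be replaced with AVCaptureOutput's built-in lookup:

AVCaptureConnection *videoConnection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];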

3) Using the metadata output rect combined with ratios: this one doesn't actually work at all, but I thought it would help me the most, since it's what the still-image path uses (metadataOutputRectOfInterestForRect returns percentages, which I then combined with a ratio). I wanted to use it and add the ratio difference between the two images to get the correct output.
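
For reference, here is what those percentages mean in practice: the returned rect is expressed as fractions of the output image, so the same rect applies to an image of any resolution with the same aspect. A minimal sketch of applying it (my own helper, not from the original code):

// Crop a CGImage with a rect in normalized (0..1) coordinates, as
// returned by metadataOutputRectOfInterestForRect:.
static CGImageRef CreateImageByCroppingToNormalizedRect(CGImageRef source, CGRect norm)
{
    size_t width  = CGImageGetWidth(source);
    size_t height = CGImageGetHeight(source);
    // Scale the fractional rect up to this image's pixel dimensions.
    CGRect cropRect = CGRectMake(norm.origin.x * width,
                                 norm.origin.y * height,
                                 norm.size.width * width,
                                 norm.size.height * height);
    return CGImageCreateWithImageInRect(source, cropRect); // caller must CGImageRelease
}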

CGRect rect = image.extent;
CGSize size = CGSizeMake(1936.0, 2592.0); // hard-coded still-image dimensions

// Ratios between the still-image size and the video frame size
float rh = (size.height / rect.size.height);
float rw = (size.width / rect.size.width);

CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
NSLog(@"avant cropped : %@", NSStringFromCGRect(outputRect));
outputRect.origin.x = MIN(1.0, outputRect.origin.x * rw);
outputRect.origin.y = MIN(1.0, outputRect.origin.y * rh);
outputRect.size.width = MIN(1.0, outputRect.size.width * rw);
outputRect.size.height = MIN(1.0, outputRect.size.height * rh);
NSLog(@"final cropped : %@", NSStringFromCGRect(outputRect));

UIImage *takenImage = [[UIImage alloc] initWithCIImage:image];
NSLog(@"takenImage : %@", NSStringFromCGSize(takenImage.size));

CGImageRef takenCGImage = [[CIContext contextWithOptions:nil] createCGImage:image fromRect:[image extent]];
size_t width = CGImageGetWidth(takenCGImage);
size_t height = CGImageGetHeight(takenCGImage);
NSLog(@"Size cropped : w: %zu h: %zu", width, height);
CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
its = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
CGImageRelease(cropCGImage);  // both CGImages follow the Create rule
CGImageRelease(takenCGImage); // and must be released

I hope someone will be able to help me with this. Thanks a lot.

I finally found the solution, using the code below. My mistake was trying to apply ratios between the images, without realizing that metadataOutputRectOfInterestForRect returns percentage values, which do not need to be adjusted for a different image.

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)avConnection
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    __block CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // Normalized (0..1) rect of the layer's visible area in the output image
    CGRect outputRect = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];
    // Transpose the rect: the raw pixel buffer is in the sensor's landscape
    // orientation while the preview layer is portrait
    outputRect.origin.y = outputRect.origin.x;
    outputRect.origin.x = 0;
    outputRect.size.height = outputRect.size.width;
    outputRect.size.width = 1;

    UIImage *takenImage = [[UIImage alloc] initWithCIImage:image];
    // cicontext is a CIContext ivar created once and reused for every frame
    CGImageRef takenCGImage = [cicontext createCGImage:image fromRect:[image extent]];
    size_t width = CGImageGetWidth(takenCGImage);
    size_t height = CGImageGetHeight(takenCGImage);
    // Scale the normalized rect up to the image's pixel size
    CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);
    CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
    UIImage *its = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
    CGImageRelease(cropCGImage);  // release both Create-rule images
    CGImageRelease(takenCGImage);
}
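
Two side notes on this method (my own suggestions, not part of the original answer): cicontext should be created once and reused, since building a CIContext is expensive, and delivering frames on the main queue, as in the setup above, can stall the UI; a dedicated serial queue is safer, as long as any UI work (like sendToPreview:) is dispatched back to the main thread:

// Create the reusable CIContext once, e.g. while configuring the session.
cicontext = [CIContext contextWithOptions:nil];

// Deliver sample buffers on a background serial queue (the name is arbitrary).
dispatch_queue_t frameQueue = dispatch_queue_create("com.example.camera.frames", DISPATCH_QUEUE_SERIAL);
[dataOutput setSampleBufferDelegate:self queue:frameQueue];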