How to focus on nearby objects in iOS 8?
I am developing a QR code reader. My code measures 1 cm by 1 cm. I am using the AVFoundation metadata output to capture the machine-readable code, and that part works fine. At the same time, though, I need to take a photo of the QR code together with the logo that sits in its centre. So I use AVCaptureVideoDataOutput and didOutputSampleBuffer to get a still image. The problem is the sharpness of the image: the edges of the code and the logo always look blurry. I looked into the manual camera controls and made some code changes for manual focus, but no luck so far.
- How do I focus on a nearby object (about 10 cm from the camera and very small)?
- Is there any other way to get the image once the metadata scan succeeds?
- What is the difference between setFocusModeLockedWithLensPosition and focusPointOfInterest?
Here is my code (partial):
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
    NSError *error = nil;
    // Create the session
    _session = [[AVCaptureSession alloc] init];
    // Configure the session preset. High quality is used here so the
    // captured frames have enough resolution for the code and the logo.
    _session.sessionPreset = AVCaptureSessionPresetHigh;
    // Find a suitable AVCaptureDevice
    _device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if ([_device lockForConfiguration:&error]) {
        [_device setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionNone];
        [_device setFocusModeLockedWithLensPosition:0.5 completionHandler:nil];
        //[device setFocusMode:AVCaptureFocusModeAutoFocus];
        // _device.focusPointOfInterest = CGPointMake(0.5,0.5);
        // device.videoZoomFactor = 1.0 + 10;
        [_device unlockForConfiguration];
    }
    // if ([_device isSmoothAutoFocusEnabled])
    // {
    //     _device.smoothAutoFocusEnabled = NO;
    // }
    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:_device
                                                                        error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [_session addInput:input];
    // For scanning the QR code
    AVCaptureMetadataOutput *metaDataOutput = [[AVCaptureMetadataOutput alloc] init];
    // Have to add the output before setting metadata types
    [_session addOutput:metaDataOutput];
    [metaDataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];
    [metaDataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    // For saving the image to the camera roll
    _stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [_stillImageOutput setOutputSettings:outputSettings];
    [_session addOutput:_stillImageOutput];
    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [_session addOutput:output];
    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    // Specify the pixel format
    output.videoSettings = [NSDictionary dictionaryWithObject:
                                [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                       forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    // Start the session running to start the flow of data
    [self startCapturingWithSession:_session];
    // Assign session to an ivar.
    [self setSession:_session];
}
- (void)startCapturingWithSession:(AVCaptureSession *)captureSession
{
    NSLog(@"Adding video preview layer");
    [self setPreviewLayer:[[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession]];
    [self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    //----- DISPLAY THE PREVIEW LAYER -----
    // Display it full screen under our view controller's existing controls
    NSLog(@"Display the preview layer");
    CGRect layerRect = [[[self view] layer] bounds];
    [self.previewLayer setBounds:layerRect];
    [self.previewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect),
                                               CGRectGetMidY(layerRect))];
    [self.previewLayer setAffineTransform:CGAffineTransformMakeScale(3.5, 3.5)];
    //[[[self view] layer] addSublayer:[[self CaptureManager] self.previewLayer]];
    // We add it to a view behind our UI controls instead (avoids having to manually bring each control to the front):
    UIView *cameraView = [[UIView alloc] init];
    [[self view] addSubview:cameraView];
    [self.view sendSubviewToBack:cameraView];
    [[cameraView layer] addSublayer:self.previewLayer];
    //----- START THE CAPTURE SESSION RUNNING -----
    [captureSession startRunning];
    [self switchONFlashLight];
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage from the sample buffer data
    [connection setVideoOrientation:AVCaptureVideoOrientationLandscapeLeft];
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
}
// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    // Release the Quartz image
    CGImageRelease(quartzImage);
    return image;
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
    if (metadataObjects != nil && [metadataObjects count] > 0) {
        AVMetadataMachineReadableCodeObject *metadataObj = [metadataObjects objectAtIndex:0];
        // if ([_device lockForConfiguration:nil]) {
        //     [_device setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionNear];
        //     _device.focusPointOfInterest = CGPointMake(metadataObj.bounds.origin.x, metadataObj.bounds.origin.y);
        //     [_device unlockForConfiguration];
        // }
        if ([[metadataObj type] isEqualToString:AVMetadataObjectTypeQRCode]) {
            [_lblStatus performSelectorOnMainThread:@selector(setText:) withObject:[metadataObj stringValue] waitUntilDone:NO];
        }
    }
}
This applies only to iOS 8 and the latest iPhones.
After further research, and with input from photographers, I am sharing my answer for future readers.
- As of iOS 8, Apple provides only three focus modes, which are:
typedef NS_ENUM(NSInteger, AVCaptureFocusMode) {
    AVCaptureFocusModeLocked = 0,
    AVCaptureFocusModeAutoFocus = 1,
    AVCaptureFocusModeContinuousAutoFocus = 2,
} NS_AVAILABLE(10_7, 4_0);
To focus on objects very close to the lens we can use AVCaptureAutoFocusRangeRestrictionNear, but because of the iPhone camera's minimum focus distance I still could not get a sharp image of my (very small) code; a configuration sketch follows below.
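A minimal sketch of applying the near range restriction to the _device configured above, assuming the standard AVCaptureDevice support checks, would look something like this:
NSError *error = nil;
if ([_device lockForConfiguration:&error]) {
    // Bias the autofocus search toward subjects close to the lens.
    if (_device.autoFocusRangeRestrictionSupported) {
        _device.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
    }
    // Keep refocusing while the phone is moved over the code.
    if ([_device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        _device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
    }
    [_device unlockForConfiguration];
} else {
    NSLog(@"Could not lock device for configuration: %@", error);
}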
- As far as I know, there is no way to get image data out of the metadata, so that part of my question was misguided. You can, however, get the image buffer from the video frames; see Capturing Video Frames as UIImage Objects for more information. A sketch of an alternative using the still image output follows below.
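As a sketch only, assuming the _stillImageOutput added in setupCaptureSession is reused, a hypothetical helper could trigger a still capture from the metadata delegate once a QR code has been decoded:
// Hypothetical helper; assumes the _stillImageOutput configured in setupCaptureSession.
- (void)captureStillAfterScan
{
    AVCaptureConnection *connection =
        [_stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    [_stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                                    completionHandler:^(CMSampleBufferRef buffer, NSError *error) {
        if (!buffer) {
            NSLog(@"Still capture failed: %@", error);
            return;
        }
        // Convert the JPEG sample buffer into a UIImage.
        NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:buffer];
        UIImage *stillImage = [UIImage imageWithData:jpegData];
        // Use stillImage here (save it, show it, etc.), dispatching to the main queue for any UI work.
    }];
}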
setFocusModeLockedWithLensPosition locks the focus mode and lets us set a specific lens position, from 0.0 to 1.0.
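A minimal sketch, assuming the same _device and that 0.0 roughly corresponds to the lens's closest focus distance and 1.0 to the furthest:
NSError *error = nil;
if ([_device lockForConfiguration:&error]) {
    if ([_device isFocusModeSupported:AVCaptureFocusModeLocked]) {
        // Push the lens toward its closest focus distance and keep it there.
        [_device setFocusModeLockedWithLensPosition:0.0f
                                  completionHandler:^(CMTime syncTime) {
            // The lens has settled; frames with timestamps after syncTime
            // reflect the new focus.
        }];
    }
    [_device unlockForConfiguration];
}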
focusPointOfInterest does not change the focus mode; it only sets the point to focus on. The best example is tap to focus.
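A typical tap-to-focus handler, sketched under the assumption that a UITapGestureRecognizer is attached to the view hosting the previewLayer from the code above:
- (void)handleTapToFocus:(UITapGestureRecognizer *)gesture
{
    CGPoint tapPoint = [gesture locationInView:gesture.view];
    // Map the tap from layer coordinates into the device's normalized (0,0)-(1,1) space.
    CGPoint devicePoint =
        [self.previewLayer captureDevicePointOfInterestForPoint:tapPoint];
    NSError *error = nil;
    if ([_device lockForConfiguration:&error]) {
        if (_device.focusPointOfInterestSupported &&
            [_device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
            _device.focusPointOfInterest = devicePoint;
            // Setting the mode after the point triggers a single focus scan at that point.
            _device.focusMode = AVCaptureFocusModeAutoFocus;
        }
        [_device unlockForConfiguration];
    }
}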