AVCam Customize PreviewLayer
This is my first time working with the iOS camera.
I'm trying to create a simple app that can only take photos (still images).
I'm using the code from the WWDC sample:
https://developer.apple.com/library/ios/samplecode/AVCam/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010112-Intro-DontLinkElementID_2
I want to create a custom photo size, as shown in this screenshot:
[screenshot: the desired square preview]
But the result is:
[screenshot: the actual result]
How can I fix it to a square size?
Thanks!
Edit:
I've attached a picture of the result: [screenshot of the result]
How can I fix this?
Edit 2:
CMPCameraViewController:
- (void)viewDidLoad
{
    [super viewDidLoad];

    // Disable UI. The UI is enabled if and only if the session starts running.
    self.stillButton.enabled = NO;

    // Create the AVCaptureSession.
    self.session = [[AVCaptureSession alloc] init];

    // Setup the preview view.
    self.previewView.session = self.session;

    // Communicate with the session and other session objects on this queue.
    self.sessionQueue = dispatch_queue_create( "session queue", DISPATCH_QUEUE_SERIAL );

    self.setupResult = AVCamSetupResultSuccess;

    // Setup the capture session.
    // In general it is not safe to mutate an AVCaptureSession or any of its inputs, outputs, or connections from multiple threads at the same time.
    // Why not do all of this on the main queue?
    // Because -[AVCaptureSession startRunning] is a blocking call which can take a long time. We dispatch session setup to the sessionQueue
    // so that the main queue isn't blocked, which keeps the UI responsive.
    dispatch_async( self.sessionQueue, ^{
        if ( self.setupResult != AVCamSetupResultSuccess ) {
            return;
        }

        self.backgroundRecordingID = UIBackgroundTaskInvalid;
        NSError *error = nil;

        AVCaptureDevice *videoDevice = [CMPCameraViewController deviceWithMediaType:AVMediaTypeVideo preferringPosition:AVCaptureDevicePositionBack];
        AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];

        if ( ! videoDeviceInput ) {
            NSLog( @"Could not create video device input: %@", error );
        }

        [self.session beginConfiguration];

        if ( [self.session canAddInput:videoDeviceInput] ) {
            [self.session addInput:videoDeviceInput];
            self.videoDeviceInput = videoDeviceInput;

            dispatch_async( dispatch_get_main_queue(), ^{
                // Why are we dispatching this to the main queue?
                // Because AVCaptureVideoPreviewLayer is the backing layer for AAPLPreviewView and UIView
                // can only be manipulated on the main thread.
                // Note: As an exception to the above rule, it is not necessary to serialize video orientation changes
                // on the AVCaptureVideoPreviewLayer’s connection with other session manipulation.

                // Use the status bar orientation as the initial video orientation. Subsequent orientation changes are handled by
                // -[viewWillTransitionToSize:withTransitionCoordinator:].
                UIInterfaceOrientation statusBarOrientation = [UIApplication sharedApplication].statusBarOrientation;
                AVCaptureVideoOrientation initialVideoOrientation = AVCaptureVideoOrientationPortrait;
                if ( statusBarOrientation != UIInterfaceOrientationUnknown ) {
                    initialVideoOrientation = (AVCaptureVideoOrientation)statusBarOrientation;
                }

                AVCaptureVideoPreviewLayer *previewLayer = (AVCaptureVideoPreviewLayer *)self.previewView.layer;
                previewLayer.connection.videoOrientation = initialVideoOrientation;
                previewLayer.bounds = _previewView.frame;
                //previewLayer.connection.videoOrientation = UIInterfaceOrientationLandscapeLeft;
            } );
        }
        else {
            NSLog( @"Could not add video device input to the session" );
            self.setupResult = AVCamSetupResultSessionConfigurationFailed;
        }

        AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
        AVCaptureDeviceInput *audioDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];

        if ( ! audioDeviceInput ) {
            NSLog( @"Could not create audio device input: %@", error );
        }

        if ( [self.session canAddInput:audioDeviceInput] ) {
            [self.session addInput:audioDeviceInput];
        }
        else {
            NSLog( @"Could not add audio device input to the session" );
        }

        AVCaptureMovieFileOutput *movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
        if ( [self.session canAddOutput:movieFileOutput] ) {
            [self.session addOutput:movieFileOutput];
            AVCaptureConnection *connection = [movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
            if ( connection.isVideoStabilizationSupported ) {
                connection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
            }
            self.movieFileOutput = movieFileOutput;
        }
        else {
            NSLog( @"Could not add movie file output to the session" );
            self.setupResult = AVCamSetupResultSessionConfigurationFailed;
        }

        AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
        if ( [self.session canAddOutput:stillImageOutput] ) {
            stillImageOutput.outputSettings = @{AVVideoCodecKey : AVVideoCodecJPEG};
            [self.session addOutput:stillImageOutput];
            self.stillImageOutput = stillImageOutput;
        }
        else {
            NSLog( @"Could not add still image output to the session" );
            self.setupResult = AVCamSetupResultSessionConfigurationFailed;
        }

        [self.session commitConfiguration];
    } );
}
CMPPreviewView:
+ (Class)layerClass
{
    return [AVCaptureVideoPreviewLayer class];
}

- (AVCaptureSession *)session
{
    AVCaptureVideoPreviewLayer *previewLayer = (AVCaptureVideoPreviewLayer *)self.layer;
    return previewLayer.session;
}

- (void)setSession:(AVCaptureSession *)session
{
    AVCaptureVideoPreviewLayer *previewLayer = (AVCaptureVideoPreviewLayer *)self.layer;
    previewLayer.session = session;
    previewLayer.videoGravity = AVLayerVideoGravityResize;
}
Apple's AVCam code is a great starting point for getting into photography development.
What you want to do is modify the size of the video preview layer. That's done by changing its videoGravity setting. Here's an example for an aspect-fill style view:
[Swift 3]
previewView.videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
Now, for a fill-to-rectangle situation, you'll want to define the layer's bounds and then use AVLayerVideoGravityResize.
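For example, here's a minimal sketch in Swift 3. It assumes your previewView exposes its backing layer as videoPreviewLayer (as in the AVCam sample); the side length is just a placeholder, so pick whatever square you need:

// A minimal sketch, assuming `previewView` exposes its backing
// AVCaptureVideoPreviewLayer as `videoPreviewLayer`.
let side = previewView.bounds.width // placeholder square side
previewView.videoPreviewLayer.frame = CGRect(x: 0, y: 0, width: side, height: side)
// Stretch the video to exactly fill the square bounds.
previewView.videoPreviewLayer.videoGravity = AVLayerVideoGravityResize

Keep in mind that AVLayerVideoGravityResize stretches the video to fill those bounds, which can distort it; AVLayerVideoGravityResizeAspectFill fills the square by cropping the video instead.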
Please note: this does not affect the size of the captured photo; it only modifies the size of the video preview layer. That's an important distinction. To modify the size of the captured photo you'd need to perform a crop operation (which can easily be done in a number of ways), but that doesn't seem to be your intent here.
Good luck.
Edit: It now looks like you're interested in cropping the captured UIImage.
[Swift 3]
// I'm going to assume you've done something like this to store the
// captured data in a UIImage object. If not, I would do so.
let myImage = UIImage(data: capturedImageData)!
// Using Core Graphics (the CG in CGImage) you can perform all kinds of
// image manipulations: crop, rotation, mirror, etc.
// Here's a crop to a rectangle; fill in your desired values.
let myRect = CGRect(x: ..., y: ..., width: ..., height: ...)
if let croppedCGImage = myImage.cgImage?.cropping(to: myRect) {
    let myCroppedImage = UIImage(cgImage: croppedCGImage)
}
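Since the goal here is a square, here's a minimal sketch of computing a centered square crop rect. It builds on the myImage assumption above; note that a CGImage's dimensions are in pixels rather than points:

if let cgImage = myImage.cgImage {
    // The largest centered square that fits the image, in pixel coordinates.
    let side = min(cgImage.width, cgImage.height)
    let squareRect = CGRect(x: (cgImage.width - side) / 2,
                            y: (cgImage.height - side) / 2,
                            width: side,
                            height: side)
    if let croppedCGImage = cgImage.cropping(to: squareRect) {
        let squareImage = UIImage(cgImage: croppedCGImage)
        // Use squareImage as your square photo.
    }
}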
Hope this answers your question.