Set GrayScale on Output of AVCaptureDevice in iOS
I want to implement a custom camera in my app, and I am creating it with AVCaptureDevice.
Right now I only want to show grayscale output in this custom camera, so I am trying to get it with setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains: and AVCaptureWhiteBalanceGains. I am following AVCamManual: Extending AVCam to Use Manual Capture for this.
- (void)setWhiteBalanceGains:(AVCaptureWhiteBalanceGains)gains
{
    NSError *error = nil;
    if ( [videoDevice lockForConfiguration:&error] ) {
        AVCaptureWhiteBalanceGains normalizedGains = [self normalizedGains:gains]; // Conversion can yield out-of-bound values, cap to limits
        [videoDevice setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains:normalizedGains completionHandler:nil];
        [videoDevice unlockForConfiguration];
    }
    else {
        NSLog( @"Could not lock device for configuration: %@", error );
    }
}
But for this I have to pass RGB gain values between 1 and 4, so I created this method to clamp the values to the MAX and MIN limits.
- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains)gains
{
    AVCaptureWhiteBalanceGains g = gains;
    g.redGain = MAX( 1.0, g.redGain );
    g.greenGain = MAX( 1.0, g.greenGain );
    g.blueGain = MAX( 1.0, g.blueGain );
    g.redGain = MIN( videoDevice.maxWhiteBalanceGain, g.redGain );
    g.greenGain = MIN( videoDevice.maxWhiteBalanceGain, g.greenGain );
    g.blueGain = MIN( videoDevice.maxWhiteBalanceGain, g.blueGain );
    return g;
}
I am also experimenting with other effects, for example by passing static RGB gain values.
- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains)gains
{
    AVCaptureWhiteBalanceGains g = gains;
    g.redGain = 3;
    g.greenGain = 2;
    g.blueGain = 1;
    return g;
}
Now I want to apply this grayscale to my custom camera (formula: pixel = 0.30078125f * R + 0.5859375f * G + 0.11328125f * B). This is what I have tried for the formula.
- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains)gains
{
    AVCaptureWhiteBalanceGains g = gains;
    g.redGain = g.redGain * 0.30078125;
    g.greenGain = g.greenGain * 0.5859375;
    g.blueGain = g.blueGain * 0.11328125;
    float grayScale = g.redGain + g.greenGain + g.blueGain;
    g.redGain = MAX( 1.0, grayScale );
    g.greenGain = MAX( 1.0, grayScale );
    g.blueGain = MAX( 1.0, grayScale );
    g.redGain = MIN( videoDevice.maxWhiteBalanceGain, g.redGain );
    g.greenGain = MIN( videoDevice.maxWhiteBalanceGain, g.greenGain );
    g.blueGain = MIN( videoDevice.maxWhiteBalanceGain, g.blueGain );
    return g;
}
So how can I pass this value in the range of 1 to 4?
Is there any method or scale to map these values?
Any help would be appreciated.
CoreImage provides a large number of filters for adjusting images on the GPU, and they can be used efficiently with video data from a camera feed or a video file.
There is an article on objc.io showing how to do this. The examples are in Objective-C, but the explanation is clear enough to follow.
The basic steps are:
- Create an EAGLContext configured to use OpenGL ES 2.
- Create a GLKView to display the rendered output, using the EAGLContext.
- Create a CIContext that uses the same EAGLContext.
- Create a CIFilter using the CIColorMonochrome CoreImage filter.
- Create an AVCaptureSession with an AVCaptureVideoDataOutput.
- In the AVCaptureVideoDataOutputDelegate method, convert the CMSampleBuffer to a CIImage, apply the CIFilter to the image, and draw the filtered image into the CIContext.
This pipeline keeps the video pixel buffers on the GPU all the way from the camera to the display and avoids moving data to the CPU, which preserves real-time performance.
To save the filtered video, implement an AVAssetWriter and append the sample buffers in the same AVCaptureVideoDataOutputDelegate where the filtering is done (a rough sketch of this appears after the example below).
Here is an example in Swift.
import UIKit
import GLKit
import AVFoundation

private let rotationTransform = CGAffineTransformMakeRotation(CGFloat(-M_PI * 0.5))

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    private var context: CIContext!
    private var targetRect: CGRect!
    private var session: AVCaptureSession!
    private var filter: CIFilter!

    @IBOutlet var glView: GLKView!

    override func prefersStatusBarHidden() -> Bool {
        return true
    }

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)

        let whiteColor = CIColor(
            red: 1.0,
            green: 1.0,
            blue: 1.0
        )

        filter = CIFilter(
            name: "CIColorMonochrome",
            withInputParameters: [
                "inputColor" : whiteColor,
                "inputIntensity" : 1.0
            ]
        )

        // GL context
        let glContext = EAGLContext(
            API: .OpenGLES2
        )

        glView.context = glContext
        glView.enableSetNeedsDisplay = false

        context = CIContext(
            EAGLContext: glContext,
            options: [
                kCIContextOutputColorSpace: NSNull(),
                kCIContextWorkingColorSpace: NSNull(),
            ]
        )

        let screenSize = UIScreen.mainScreen().bounds.size
        let screenScale = UIScreen.mainScreen().scale

        targetRect = CGRect(
            x: 0,
            y: 0,
            width: screenSize.width * screenScale,
            height: screenSize.height * screenScale
        )

        // Setup capture session.
        let cameraDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

        let videoInput = try? AVCaptureDeviceInput(
            device: cameraDevice
        )

        let videoOutput = AVCaptureVideoDataOutput()
        videoOutput.setSampleBufferDelegate(self, queue: dispatch_get_main_queue())

        session = AVCaptureSession()
        session.beginConfiguration()
        session.addInput(videoInput)
        session.addOutput(videoOutput)
        session.commitConfiguration()
        session.startRunning()
    }

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            return
        }

        let originalImage = CIImage(
            CVPixelBuffer: pixelBuffer,
            options: [
                kCIImageColorSpace: NSNull()
            ]
        )

        let rotatedImage = originalImage.imageByApplyingTransform(rotationTransform)

        filter.setValue(rotatedImage, forKey: kCIInputImageKey)

        guard let filteredImage = filter.outputImage else {
            return
        }

        context.drawImage(filteredImage, inRect: targetRect, fromRect: filteredImage.extent)

        glView.display()
    }

    func captureOutput(captureOutput: AVCaptureOutput!, didDropSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        let seconds = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        print("dropped sample buffer: \(seconds)")
    }
}
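For the saving path mentioned above, here is a minimal sketch of how the filtered frames could be handed to an AVAssetWriter. The class name FilteredVideoWriter, the H.264/BGRA output settings, and the append/finish helpers are illustrative assumptions rather than part of the original answer; it assumes the same CIContext and the filtered CIImage produced in the delegate method above.

import AVFoundation
import CoreImage

// Sketch only: renders each filtered CIImage into a pixel buffer and appends it
// to an AVAssetWriter. Names and output settings below are assumptions.
class FilteredVideoWriter {

    private let assetWriter: AVAssetWriter
    private let writerInput: AVAssetWriterInput
    private let pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor
    private var sessionStarted = false

    init(outputURL: NSURL, width: Int, height: Int) throws {
        let writer = try AVAssetWriter(URL: outputURL, fileType: AVFileTypeQuickTimeMovie)

        let input = AVAssetWriterInput(
            mediaType: AVMediaTypeVideo,
            outputSettings: [
                AVVideoCodecKey: AVVideoCodecH264,
                AVVideoWidthKey: width,
                AVVideoHeightKey: height
            ]
        )
        input.expectsMediaDataInRealTime = true

        let adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input,
            sourcePixelBufferAttributes: [
                kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA),
                kCVPixelBufferWidthKey as String: width,
                kCVPixelBufferHeightKey as String: height
            ]
        )

        writer.addInput(input)
        writer.startWriting()

        assetWriter = writer
        writerInput = input
        pixelBufferAdaptor = adaptor
    }

    // Call from captureOutput(_:didOutputSampleBuffer:fromConnection:) after filtering.
    func append(filteredImage: CIImage, presentationTime: CMTime, context: CIContext) {
        if !sessionStarted {
            assetWriter.startSessionAtSourceTime(presentationTime)
            sessionStarted = true
        }

        guard let pool = pixelBufferAdaptor.pixelBufferPool where writerInput.readyForMoreMediaData else {
            return
        }

        var outputBuffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &outputBuffer)
        guard let buffer = outputBuffer else {
            return
        }

        // Render the filtered image into the buffer, then hand it to the writer.
        context.render(filteredImage, toCVPixelBuffer: buffer)
        pixelBufferAdaptor.appendPixelBuffer(buffer, withPresentationTime: presentationTime)
    }

    func finish(completion: () -> Void) {
        writerInput.markAsFinished()
        assetWriter.finishWritingWithCompletionHandler(completion)
    }
}

In the didOutputSampleBuffer callback above, after glView.display(), the call would then look roughly like writer.append(filteredImage, presentationTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer), context: context), with the writer created once, for example in viewDidAppear.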