CIContext initialization crash
Background:
I am running a Swift 2 app that has the following two options.
Option A:
The user can enter a number to log in. In this case, his/her picture is displayed in a UIImageView.
Option B:
The user can log in with an NFC tag. In this case, the UIImageView is replaced by a camera layer that shows the live camera feed, and a CIContext is used to capture an image when a button is pressed.
Problem:
The issue I'm facing is that sometimes, when I choose Option A (which does not use the camera layer), the app crashes. Since I cannot reproduce the crash deterministically, I can't figure out why it is crashing.
Edit: The camera layer is used in both options, but it is hidden in Option A.
Crashlytics produces the following crash log:
0 libswiftCore.dylib specialized _fatalErrorMessage(StaticString, StaticString, StaticString, UInt) -> () + 44
1 CameraLayerView.swift line 20 CameraLayerView.init(coder : NSCoder) -> CameraLayerView?
2 CameraLayerView.swift line 0 @objc CameraLayerView.init(coder : NSCoder) -> CameraLayerView?
3 UIKit -[UIClassSwapper initWithCoder:] + 248
32 UIKit UIApplicationMain + 208
33 AppDelegate.swift line 17 main
34 libdispatch.dylib (Missing)
I checked line 20 in CameraLayerView, but it is just an initialization statement:
private let ciContext = CIContext(EAGLContext: EAGLContext(API: .OpenGLES2))
The CameraLayerView file is shown below. Any help would be appreciated.
import UIKit
import AVFoundation
import CoreImage
import OpenGLES

class CameraLayerView: UIView, AVCaptureVideoDataOutputSampleBufferDelegate {

    var captureSession = AVCaptureSession()
    var sessionOutput = AVCaptureVideoDataOutput()
    var previewLayer = AVCaptureVideoPreviewLayer()

    private var pixelBuffer : CVImageBuffer!
    private var attachments : CFDictionary!
    private var ciImage : CIImage!
    private let ciContext = CIContext(EAGLContext: EAGLContext(API: .OpenGLES2)) // line 20 in the crash log
    private var imageOptions : [String : AnyObject]!

    var faceFound = false
    var image : UIImage!
    override func layoutSubviews() {
        previewLayer.position = CGPoint(x: self.frame.width/2, y: self.frame.height/2)
        previewLayer.bounds = self.frame
        self.layer.borderWidth = 2.0
        self.layer.borderColor = UIColor.redColor().CGColor
    }
    func loadCamera() {
        let camera = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo)
        for device in camera {
            if device.position == .Front {
                do {
                    for input in captureSession.inputs {
                        captureSession.removeInput(input as! AVCaptureInput)
                    }
                    for output in captureSession.outputs {
                        captureSession.removeOutput(output as! AVCaptureOutput)
                    }
                    previewLayer.removeFromSuperlayer()
                    previewLayer.session = nil
                    let input = try AVCaptureDeviceInput(device: device as! AVCaptureDevice)
                    if captureSession.canAddInput(input) {
                        captureSession.addInput(input)
                        sessionOutput.videoSettings = [String(kCVPixelBufferPixelFormatTypeKey) : Int(kCVPixelFormatType_32BGRA)]
                        sessionOutput.setSampleBufferDelegate(self, queue: dispatch_get_global_queue(Int(QOS_CLASS_BACKGROUND.rawValue), 0))
                        sessionOutput.alwaysDiscardsLateVideoFrames = true
                        if captureSession.canAddOutput(sessionOutput) {
                            captureSession.addOutput(sessionOutput)
                            captureSession.sessionPreset = AVCaptureSessionPresetPhoto
                            captureSession.startRunning()
                            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                            previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                            switch UIDevice.currentDevice().orientation.rawValue {
                            case 1:
                                previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.Portrait
                            case 2:
                                previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.PortraitUpsideDown
                            case 3:
                                previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
                            case 4:
                                previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
                            default:
                                break
                            }
                            self.layer.addSublayer(previewLayer)
                        }
                    }
                } catch {
                    print("Error")
                }
            }
        }
    }
    func takePicture() -> UIImage {
        self.previewLayer.removeFromSuperlayer()
        self.captureSession.stopRunning()
        return image
    }
    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate)
        ciImage = CIImage(CVPixelBuffer: pixelBuffer!, options: attachments as? [String : AnyObject])
        if UIDevice.currentDevice().orientation == .PortraitUpsideDown {
            imageOptions = [CIDetectorImageOrientation : 8]
        } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
            imageOptions = [CIDetectorImageOrientation : 3]
        } else if UIDevice.currentDevice().orientation == .LandscapeRight {
            imageOptions = [CIDetectorImageOrientation : 1]
        } else {
            imageOptions = [CIDetectorImageOrientation : 6]
        }
        let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: ciContext, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
        let features = faceDetector.featuresInImage(ciImage, options: imageOptions)
        if features.count == 0 {
            if faceFound == true {
                faceFound = false
                dispatch_async(dispatch_get_main_queue()) {
                    self.layer.borderColor = UIColor.redColor().CGColor
                }
            }
        } else {
            if UIDevice.currentDevice().orientation == .PortraitUpsideDown {
                image = UIImage(CGImage: ciContext.createCGImage(ciImage, fromRect: ciImage.extent), scale: 1.0, orientation: UIImageOrientation.Left)
            } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
                image = UIImage(CGImage: ciContext.createCGImage(ciImage, fromRect: ciImage.extent), scale: 1.0, orientation: UIImageOrientation.Down)
            } else if UIDevice.currentDevice().orientation == .LandscapeRight {
                image = UIImage(CGImage: ciContext.createCGImage(ciImage, fromRect: ciImage.extent), scale: 1.0, orientation: UIImageOrientation.Up)
            } else {
                image = UIImage(CGImage: ciContext.createCGImage(ciImage, fromRect: ciImage.extent), scale: 1.0, orientation: UIImageOrientation.Right)
            }
            if faceFound == false {
                faceFound = true
                for feature in features {
                    if feature.isKindOfClass(CIFaceFeature) {
                        dispatch_async(dispatch_get_main_queue()) {
                            self.layer.borderColor = UIColor.greenColor().CGColor
                        }
                    }
                }
            }
        }
    }
}
I tested a theory and it works. Since ciContext was being initialized as part of view initialization, the app seemed to crash because of a race condition. I moved the initialization of ciContext into my loadCamera method and it has not crashed since.
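A minimal sketch of that change, in Swift 2 syntax (the implicitly-unwrapped property and the nil check are illustrative, not the exact code from the project):

    // No longer created by the stored-property initializers that run during
    // init(coder:); created only when the camera is actually loaded.
    private var ciContext: CIContext!

    func loadCamera() {
        if ciContext == nil {
            let eaglContext = EAGLContext(API: .OpenGLES2)
            ciContext = CIContext(EAGLContext: eaglContext)
        }
        // ... the existing capture-session setup from loadCamera() continues here ...
    }

Because the property is still implicitly unwrapped, the rest of the file (the CIDetector and createCGImage calls) compiles unchanged.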
Update
Another thing I noticed is that, in various tutorials and blog posts on the internet, the statement
let ciContext = CIContext(EAGLContext: EAGLContext(API: .OpenGLES2))
is declared as two separate statements, so it becomes
let eaglContext = EAGLContext(API: .OpenGLES2)
let ciContext = CIContext(EAGLContext: eaglContext)
I still don't know exactly what caused the app to crash, but these two changes seem to have fixed the problem.
Accepted answer
Finally found the culprit. In the view controller where I was using ciContext, I had a timer that was never invalidated and was therefore keeping a strong reference to the view controller. On every subsequent visit a new view controller was created, while the previous one was never released from memory. Memory usage therefore kept piling up over time. Once it crossed a certain threshold, the CIContext initializer would return nil because of the low memory, and that crashed the app.
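For reference, a minimal sketch of that kind of fix (Swift 2.2 syntax; the controller and timer names are hypothetical, not the actual ones from the project). An NSTimer scheduled on a run loop retains its target, so it must be invalidated when the view controller goes away; otherwise every visit stacks another controller, and its CIContext, in memory:

    class LoginViewController: UIViewController {

        private var refreshTimer: NSTimer?

        override func viewDidAppear(animated: Bool) {
            super.viewDidAppear(animated)
            // The run loop retains the timer, and the timer retains its target:
            // run loop -> timer -> self.
            refreshTimer = NSTimer.scheduledTimerWithTimeInterval(1.0,
                target: self,
                selector: #selector(LoginViewController.tick),
                userInfo: nil,
                repeats: true)
        }

        override func viewWillDisappear(animated: Bool) {
            super.viewWillDisappear(animated)
            // Invalidating releases the target, so the controller (and any
            // CIContext it owns) can finally be deallocated.
            refreshTimer?.invalidate()
            refreshTimer = nil
        }

        func tick() {
            // periodic work
        }
    }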