Swift, Firebase - Use CMSampleBufferRef with live feed of camera

I am currently trying to implement MLKit from Firebase in order to use text recognition.

So far I have the code for the camera, which shows its live feed inside a UIView. My goal now is to recognize text in this live feed, which I believe is possible with the help of a CMSampleBufferRef (let image = VisionImage(buffer: bufferRef) - see the linked Firebase tutorial, Step 2).
How can I create such a CMSampleBufferRef and have it hold the live feed of the camera (the UIView)?
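
For reference, the recognition call I am aiming for (roughly Step 2 of the tutorial, assuming the FirebaseMLVision pod) would look something like the sketch below; the sampleBuffer is exactly the part I do not know how to obtain:

import FirebaseMLVision

let textRecognizer = Vision.vision().onDeviceTextRecognizer()

// Orientation metadata depends on the camera/device orientation.
let metadata = VisionImageMetadata()
metadata.orientation = .rightTop

let image = VisionImage(buffer: sampleBuffer) // sampleBuffer: CMSampleBuffer
image.metadata = metadata

textRecognizer.process(image) { result, error in
    guard error == nil, let result = result else { return }
    print(result.text)
}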

My camera code:

@IBOutlet weak var cameraView: UIView!
    var session: AVCaptureSession?
    var device: AVCaptureDevice?
    var input: AVCaptureDeviceInput?
    var output: AVCaptureMetadataOutput?
    var prevLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLoad() {
        super.viewDidLoad()
        prevLayer?.frame.size = cameraView.frame.size
    }

    func createSession() {
        session = AVCaptureSession()
        device = AVCaptureDevice.default(for: AVMediaType.video)

        do{
            input = try AVCaptureDeviceInput(device: device!)
        }
        catch{
            print(error)
        }

        if let input = input{
            session?.addInput(input)
        }

        prevLayer = AVCaptureVideoPreviewLayer(session: session!)
        prevLayer?.frame.size = cameraView.frame.size
        prevLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill

        prevLayer?.connection?.videoOrientation = transformOrientation(orientation: UIInterfaceOrientation(rawValue: UIApplication.shared.statusBarOrientation.rawValue)!)

        cameraView.layer.addSublayer(prevLayer!)

        session?.startRunning()
    }

    func cameraWithPosition(position: AVCaptureDevice.Position) -> AVCaptureDevice? {
        let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInDualCamera, .builtInTelephotoCamera, .builtInTrueDepthCamera, .builtInWideAngleCamera, ], mediaType: .video, position: position)

        if let device = deviceDiscoverySession.devices.first {
            return device
        }
        return nil
    }

    override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
        coordinator.animate(alongsideTransition: { (context) -> Void in
            self.prevLayer?.connection?.videoOrientation = self.transformOrientation(orientation: UIInterfaceOrientation(rawValue: UIApplication.shared.statusBarOrientation.rawValue)!)
            self.prevLayer?.frame.size = self.cameraView.frame.size
        }, completion: { (context) -> Void in

        })
        super.viewWillTransition(to: size, with: coordinator)
    }

    func transformOrientation(orientation: UIInterfaceOrientation) -> AVCaptureVideoOrientation {
        switch orientation {
        case .landscapeLeft:
            return .landscapeLeft
        case .landscapeRight:
            return .landscapeRight
        case .portraitUpsideDown:
            return .portraitUpsideDown
        default:
            return .portrait
        }
    }

Edit: I have added a functional Swift example to match your language requirement:

import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    @IBOutlet weak var cameraView: UIView!
    var session: AVCaptureSession!
    var device: AVCaptureDevice?
    var input: AVCaptureDeviceInput?
    var videoOutput: AVCaptureVideoDataOutput!
    var output: AVCaptureMetadataOutput?
    var prevLayer: AVCaptureVideoPreviewLayer!
    
    override func viewDidLoad() {
        super.viewDidLoad()
        
        session = AVCaptureSession()
        device = AVCaptureDevice.default(for: AVMediaType.video)
        
        do{
            input = try AVCaptureDeviceInput(device: device!)
        }
        catch{
            print(error)
            return
        }
        
        if let input = input {
            if session.canAddInput(input) {
                session.addInput(input)
            }
        }
        
        videoOutput = AVCaptureVideoDataOutput()
        videoOutput.videoSettings = [
            kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32BGRA)
        ]
        videoOutput.alwaysDiscardsLateVideoFrames = true
        
        let queue = DispatchQueue(label: "video-frame-sampler")
        videoOutput.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(videoOutput) {
            session.addOutput(videoOutput)
            
            if let connection = videoOutput.connection(with: .video) {
                connection.videoOrientation = videoOrientationFromInterfaceOrientation()
                
                if connection.isVideoStabilizationSupported {
                    connection.preferredVideoStabilizationMode = .auto
                }
            }
        }
        
        prevLayer = AVCaptureVideoPreviewLayer(session: session)
        prevLayer.frame.size = cameraView.frame.size
        prevLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
        cameraView.layer.addSublayer(prevLayer!)
        
        session.startRunning()
    }
    
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // Pass your sampleBuffer to the Vision API here.
        // However, I recommend not passing every frame; skip frames until the camera is steady and focused (see the throttling sketch after this class).
        print("frame received")
    }
    
    func videoOrientationFromInterfaceOrientation() -> AVCaptureVideoOrientation {
        return AVCaptureVideoOrientation(rawValue: UIApplication.shared.statusBarOrientation.rawValue)!
    }
}
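
To make the frame-skipping advice concrete, here is a minimal sketch of how the delegate method above could throttle frames before handing them off. recognizeText(in:completion:) is a hypothetical helper that wraps your ML Kit call, and the every-10th-frame threshold is arbitrary:

    // Additional state on the ViewController above:
    private var frameCounter = 0
    private var isProcessingFrame = false

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        frameCounter += 1
        // Only look at roughly every 10th frame, and drop frames while a
        // previous recognition request is still in flight.
        guard frameCounter % 10 == 0, !isProcessingFrame else { return }
        isProcessingFrame = true
        recognizeText(in: sampleBuffer) { [weak self] in   // hypothetical helper
            self?.isProcessingFrame = false
        }
    }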

I see that you have already set up the input and the preview layer, but you also need to set up a video capture output in order to capture CMSampleBufferRef frames.

To do this, set up an object of type AVCaptureVideoDataOutput with the following steps:

  1. Create and configure an AVCaptureVideoDataOutput instance

     AVCaptureVideoDataOutput* videoOutput = [AVCaptureVideoDataOutput new];
     videoOutput.videoSettings = @{(id)kCVPixelBufferPixelFormatTypeKey:@(kCVPixelFormatType_32BGRA)};
     videoOutput.alwaysDiscardsLateVideoFrames = YES;
    
  2. Set the frame-capture (sample buffer) delegate of the configured output and add it to the session

     dispatch_queue_t queue = dispatch_queue_create("video-frame-sampler", 0);
     [videoOutput setSampleBufferDelegate:self queue:queue];
     if ([self.session canAddOutput:videoOutput]) {
         [self.session addOutput:videoOutput];
    
         AVCaptureConnection* connection = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
         connection.videoOrientation = [self videoOrientationFromDeviceOrientation];
         if (connection.supportsVideoStabilization) {
             connection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
         }
     }
    
  3. Implement the captureOutput:didOutputSampleBuffer:fromConnection: method, where you will receive the required CMSampleBufferRef

     -(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
         //pass your sampleBuffer to the Vision API here
         //however, I recommend not passing every frame; skip frames until the camera is steady and focused
     }
    

I am an Objective-C developer, but you can easily convert the code to Swift as needed.

Also, here is the code for the videoOrientationFromDeviceOrientation method:

-(AVCaptureVideoOrientation)videoOrientationFromDeviceOrientation {
    UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
    AVCaptureVideoOrientation result = (AVCaptureVideoOrientation)orientation;
    // Landscape is mirrored between UIDeviceOrientation and AVCaptureVideoOrientation.
    if ( orientation == UIDeviceOrientationLandscapeLeft )
        result = AVCaptureVideoOrientationLandscapeRight;
    else if ( orientation == UIDeviceOrientationLandscapeRight )
        result = AVCaptureVideoOrientationLandscapeLeft;
    // Unknown, face-up and face-down have no video equivalent; fall back to portrait.
    else if ( !UIDeviceOrientationIsValidInterfaceOrientation(orientation) )
        result = AVCaptureVideoOrientationPortrait;
    return result;
}
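
If you want to stay in Swift, a direct translation of that mapping could look like this (falling back to portrait for the orientations that have no video equivalent):

func videoOrientationFromDeviceOrientation() -> AVCaptureVideoOrientation {
    // Landscape is mirrored between UIDeviceOrientation and AVCaptureVideoOrientation.
    switch UIDevice.current.orientation {
    case .portraitUpsideDown:
        return .portraitUpsideDown
    case .landscapeLeft:
        return .landscapeRight
    case .landscapeRight:
        return .landscapeLeft
    default:
        // .portrait, .unknown, .faceUp and .faceDown all map to portrait here.
        return .portrait
    }
}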