Capturing ARSCNView with virtual objects - iOS

I have an ARSCNView with virtual objects drawn in it; the virtual objects are drawn on the user's face. The session runs with the following configuration:

let configuration = ARFaceTrackingConfiguration()
configuration.worldAlignment = .gravityAndHeading

sceneView.session.run(configuration)

This ARSCNView is part of a video call. If we send back the pixel buffer like this,

public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    videoSource.sendBuffer(frame.capturedImage, timestamp: frame.timestamp)
}

the virtual objects are not shown to the person I'm calling.

One thing I tried was to send frames with a DispatchSourceTimer instead of relying on the ARSessionDelegate callback:

// Timer source delivering on the main queue, since sceneView.snapshot()
// must be called from the main thread
let timer = DispatchSource.makeTimerSource(queue: .main)

func startCaptureView() {
  // Fire every 0.1 seconds (~10 fps)
  timer.schedule(deadline: .now(), repeating: .milliseconds(100))
  timer.setEventHandler { [weak self] in
    // Snapshot the scene view (camera feed plus virtual objects) as a CGImage
    guard let sceneImage: CGImage = self?.sceneView.snapshot().cgImage else {
      return
    }

    // Convert and send off the main thread
    self?.videoSourceQueue.async { [weak self] in
      if let buffer: CVPixelBuffer = ImageProcessor.pixelBuffer(forImage: sceneImage) {
        self?.videoSource.sendBuffer(buffer, timestamp: Double(mach_absolute_time()))
      }
    }
  }

  timer.resume()
}
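
For reference, ImageProcessor.pixelBuffer(forImage:) is a small CGImage-to-CVPixelBuffer helper along these lines (a sketch; treat the details as approximate):

import CoreGraphics
import CoreVideo

enum ImageProcessor {
    static func pixelBuffer(forImage image: CGImage) -> CVPixelBuffer? {
        let attributes: [CFString: Any] = [
            kCVPixelBufferCGImageCompatibilityKey: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey: true
        ]
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, image.width, image.height,
                                         kCVPixelFormatType_32BGRA,
                                         attributes as CFDictionary, &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

        CVPixelBufferLockBaseAddress(buffer, [])
        defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

        // Draw the snapshot into the buffer's backing memory as BGRA
        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                      width: image.width, height: image.height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                          | CGBitmapInfo.byteOrder32Little.rawValue) else {
            return nil
        }
        context.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
        return buffer
    }
}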

With this approach, the other party receives the data slowly, the video stutters, and the image size is wrong.

Any suggestions on how to send the virtual-object data along with the captured frames?

Reference: https://medium.com/agora-io/augmented-reality-video-conference-6845c001aec0

The reason the virtual objects don't appear is that ARKit only provides the raw camera image: frame.capturedImage is the frame captured by the camera, without any of the SceneKit rendering. To send the rendered video, you need to implement an off-screen SCNRenderer and pass its pixel buffers to Agora's SDK.
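
To illustrate the idea, the off-screen renderer shares the view's Metal device and scene graph, snapshots the composited frame, and converts it to a pixel buffer. A minimal sketch (the output size and frame pacing are assumptions, and it reuses a CGImage-to-CVPixelBuffer helper like the one sketched in the question):

import ARKit
import SceneKit

final class OffscreenARRenderer {
    private let renderer: SCNRenderer
    private let outputSize = CGSize(width: 720, height: 1280) // assumed output size

    init(sceneView: ARSCNView) {
        // Share the view's Metal device and scene graph so the camera
        // background and the virtual face nodes are both composited
        renderer = SCNRenderer(device: sceneView.device, options: nil)
        renderer.scene = sceneView.scene
    }

    func pixelBuffer(mirroring sceneView: ARSCNView, atTime time: TimeInterval) -> CVPixelBuffer? {
        // Keep the off-screen camera in sync with the on-screen one
        renderer.pointOfView = sceneView.pointOfView
        // Render the composited frame off screen at the requested time
        let image = renderer.snapshot(atTime: time, with: outputSize,
                                      antialiasingMode: .multisampling2X)
        guard let cgImage = image.cgImage else { return nil }
        return ImageProcessor.pixelBuffer(forImage: cgImage)
    }
}

Making this fast (reusing a pixel-buffer pool, rendering into a Metal texture instead of round-tripping through UIImage) is exactly the work ARVideoKit does for you.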

I recommend taking a look at the open-source framework AgoraARKit. I wrote the framework, and it implements the Agora.io Video SDK and ARVideoKit as dependencies. ARVideoKit is a popular library that implements the off-screen renderer and provides the rendered pixel buffer.

The library implements WorldTracking by default. If you want to extend the ARBroadcaster class to implement face tracking, you can use this code:

import ARKit

class FaceBroadcaster : ARBroadcaster {

    // Tracked face nodes, keyed by their anchor identifiers
    var faceNodes: [UUID: SCNNode] = [:]

    override func viewDidLoad() {
        super.viewDidLoad() 
    }

    override func setARConfiguration() {
        print("setARConfiguration")        // Configure ARKit Session
        let configuration = ARFaceTrackingConfiguration()
        configuration.isLightEstimationEnabled = true
        // run the config to start the ARSession
        self.sceneView.session.run(configuration)
        self.arvkRenderer?.prepare(configuration)
    }

    // anchor detection
    override func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        super.renderer(renderer, didAdd: node, for: anchor)
        guard let sceneView = renderer as? ARSCNView, anchor is ARFaceAnchor else { return }
        /*
         Write depth but not color and render before other objects.
         This causes the geometry to occlude other SceneKit content
         while showing the camera view beneath, creating the illusion
         that real-world faces are obscuring virtual 3D objects.
         */
        let faceGeometry = ARSCNFaceGeometry(device: sceneView.device!)!
        faceGeometry.firstMaterial!.colorBufferWriteMask = []
        let occlusionNode = SCNNode(geometry: faceGeometry)
        occlusionNode.renderingOrder = -1

        let contentNode = SCNNode()
        contentNode.addChildNode(occlusionNode)
        node.addChildNode(contentNode)
        faceNodes[anchor.identifier] = node
    }
}
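
For completeness, the hand-off that ARBroadcaster performs under the hood looks roughly like this: each composited pixel buffer delivered by ARVideoKit's renderer is pushed to the Agora engine as an external video source. This is a sketch against the 3.x SDK, and it assumes the engine has already been set up with setExternalVideoSource, so check the names against the version you're using:

import AgoraRtcKit
import CoreMedia

func push(_ buffer: CVPixelBuffer, at time: CMTime, to agoraKit: AgoraRtcEngineKit) {
    let frame = AgoraVideoFrame()
    frame.format = 12          // 12 = CVPixelBuffer input on iOS
    frame.textureBuf = buffer
    frame.time = time
    agoraKit.pushExternalVideoFrame(frame)
}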