Improve body tracking performance of VNDetectHumanBodyPoseRequest

I'm trying to improve the performance of drawing a skeleton from body tracking with VNDetectHumanBodyPoseRequest, even at distances beyond 5 meters and with a stable iPhone XS camera.

The tracking confidence for my body's right lower limb is low; it lags noticeably and jitters. I can't reproduce the performance shown in this year's WWDC demo video.

Here is the relevant code, adapted from Apple's sample code:

class Predictor {
  func extractPoses(_ sampleBuffer: CMSampleBuffer) throws -> [VNRecognizedPointsObservation] {
    let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer, orientation: .down)
    
    let request = VNDetectHumanBodyPoseRequest()
    
    do {
      // Perform the body pose-detection request.
      try requestHandler.perform([request])
    } catch {
      print("Unable to perform the request: \(error).\n")
    }
    
    return (request.results as? [VNRecognizedPointsObservation]) ?? [VNRecognizedPointsObservation]()
  }
}

我已经捕获了视频数据并在此处处理样本缓冲区:

class CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {

  func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    let observations = try? predictor.extractPoses(sampleBuffer)
    observations?.forEach { processObservation($0) }
  }

  func processObservation(_ observation: VNRecognizedPointsObservation) {
    
    // Retrieve all torso points.
    guard let recognizedPoints =
            try? observation.recognizedPoints(forGroupKey: .all) else {
      return
    }
    
    let storedPoints = Dictionary(uniqueKeysWithValues: recognizedPoints.compactMap { (key, point) -> (String, CGPoint)? in
      return (key.rawValue, point.location)
    })
    
    DispatchQueue.main.sync {
      let mappedPoints = Dictionary(uniqueKeysWithValues: recognizedPoints.compactMap { (key, point) -> (String, CGPoint)? in
        guard point.confidence > 0.1 else { return nil }
        let norm = VNImagePointForNormalizedPoint(point.location,
                                                  Int(drawingView.bounds.width),
                                                  Int(drawingView.bounds.height))
        return (key.rawValue, norm)
      })
      
      let time = 1000 * observation.timeRange.start.seconds
      
      
      // Draw the points onscreen.
      DispatchQueue.main.async {
        self.drawingView.draw(points: mappedPoints)
      }
    }
  }
}

The drawingView.draw function is implemented in a custom UIView layered on top of the camera view, and it draws the points using CALayer sublayers. The AVCaptureSession code is exactly the same as the sample code here.
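For reference, a minimal sketch of what such an overlay view might look like. This is my assumption of the setup, not Apple's sample code: the class name DrawingView, the dot radius, and the color are all illustrative. Each frame removes the previous sublayers and adds one CAShapeLayer dot per joint.

```swift
import UIKit

/// Hypothetical overlay view: one CAShapeLayer dot per joint, rebuilt each frame.
final class DrawingView: UIView {
  func draw(points: [String: CGPoint]) {
    // Drop last frame's dots before drawing the new ones.
    layer.sublayers?.forEach { $0.removeFromSuperlayer() }
    for (_, point) in points {
      let dot = CAShapeLayer()
      dot.path = UIBezierPath(arcCenter: point,
                              radius: 4,
                              startAngle: 0,
                              endAngle: 2 * .pi,
                              clockwise: true).cgPath
      dot.fillColor = UIColor.green.cgColor
      layer.addSublayer(dot)
    }
  }
}
```

Rebuilding the sublayers every frame is cheap enough for a couple dozen joints, though reusing a fixed pool of layers would avoid the churn.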

I tried the VNDetectHumanBodyPoseRequest(completionHandler:) variant, but it made no difference in performance. I could try smoothing with a moving-average filter, but the outlier predictions, which are very inaccurate, would still be a problem.
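For what it's worth, the moving-average idea could be sketched roughly like this. PoseSmoother is a hypothetical helper (not part of the Vision API): it keeps the last `windowSize` locations seen for each joint key and returns their mean.

```swift
import Foundation

/// A simple moving-average smoother for per-joint pose points.
struct PoseSmoother {
  private let windowSize: Int
  private var history: [String: [CGPoint]] = [:]

  init(windowSize: Int = 5) {
    self.windowSize = max(1, windowSize)
  }

  /// Feed one frame's points; get back the averaged points per joint key.
  mutating func smooth(_ points: [String: CGPoint]) -> [String: CGPoint] {
    var smoothed: [String: CGPoint] = [:]
    for (key, point) in points {
      var window = history[key, default: []]
      window.append(point)
      // Keep only the most recent `windowSize` samples.
      if window.count > windowSize {
        window.removeFirst(window.count - windowSize)
      }
      history[key] = window
      let n = CGFloat(window.count)
      let sum = window.reduce(CGPoint.zero) {
        CGPoint(x: $0.x + $1.x, y: $0.y + $1.y)
      }
      smoothed[key] = CGPoint(x: sum.x / n, y: sum.y / n)
    }
    return smoothed
  }
}
```

The mapped points would be run through smooth(_:) just before drawing. The trade-off is that a larger window reduces jitter but adds visible lag, which is why outliers are better handled by the confidence threshold than by averaging alone.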

What am I missing?

I believe this was a bug on iOS 14 beta v1-v3. After upgrading to beta v4 and later, tracking is much better. The API has also become clearer, with fine-grained type names in the latest beta updates.

Note that I haven't received an official answer from Apple about this bug, but the issue may disappear entirely in the official iOS 14 release.