Track face objects on camera feed + Vision API iOS11

I have this simple function to detect faces in an image:

func detectFacesForImage(image: UIImage) {
    guard let ciImage = CIImage(image: image) else {
        return
    }

    let request = VNDetectFaceRectanglesRequest { request, error in
        guard let observations = request.results as? [VNFaceObservation] else {
            return
        }
        // ... use `observations` here ...
    }
    let handler = VNImageRequestHandler(ciImage: ciImage, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print(error)
    }
}

Now we have an observations list of VNFaceObservation objects. I'm converting them to VNDetectedObjectObservation objects with the following function:

func convertFaceObservationsToDetectedObjects(with observations: [VNFaceObservation]) {
    observations.forEach { observation in
        // Vision bounding boxes are normalized to [0, 1] with a bottom-left
        // origin, so the Y axis is flipped when mapping into the image view.
        let boundingBox = observation.boundingBox
        let size = CGSize(width: boundingBox.width * self.IMG_VIEW.bounds.width,
                          height: boundingBox.height * self.IMG_VIEW.bounds.height)
        let origin = CGPoint(x: boundingBox.minX * self.IMG_VIEW.bounds.width,
                             y: (1 - boundingBox.minY) * self.IMG_VIEW.bounds.height - size.height)
        let originalRect = CGRect(origin: origin, size: size)

        var convertedRect = cameraLayer.metadataOutputRectConverted(fromLayerRect: originalRect)
        convertedRect.origin.y = 1 - convertedRect.origin.y
        let trackingObservation = VNDetectedObjectObservation(boundingBox: convertedRect)

        self.anotherListOfObservations.append((tag, trackingObservation)) // `tag` is defined elsewhere
    }
}
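The Y flip in this function exists because Vision reports boundingBox normalized to [0, 1] with a bottom-left origin, while UIKit views use a top-left origin. The conversion math can be isolated into a small pure helper for testing (a hypothetical standalone version, not part of the original code):

```swift
import Foundation

// Hypothetical helper mirroring the normalized-to-view conversion above:
// scales the normalized rect into view coordinates and flips the Y axis.
func viewRect(forNormalized boundingBox: CGRect, in viewBounds: CGRect) -> CGRect {
    let size = CGSize(width: boundingBox.width * viewBounds.width,
                      height: boundingBox.height * viewBounds.height)
    let origin = CGPoint(x: boundingBox.minX * viewBounds.width,
                         y: (1 - boundingBox.minY) * viewBounds.height - size.height)
    return CGRect(origin: origin, size: size)
}
```

For example, a centered normalized rect (0.25, 0.25, 0.5, 0.5) in a 100x200 view maps to (25, 50, 50, 100).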

Then I use this delegate method to try to track the resulting VNDetectedObjectObservation objects:

extension MyViewController: AVCaptureVideoDataOutputSampleBufferDelegate {

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            return
        }

        var listOfRequests: [VNTrackObjectRequest] = []
        for (_, observation) in self.anotherListOfObservations {
            let request = VNTrackObjectRequest(detectedObjectObservation: observation) { [unowned self] request, error in
                self.handle(request, error: error)
            }
            request.trackingLevel = .accurate
            listOfRequests.append(request)
        }

        do {
            // `handler` is a VNSequenceRequestHandler property on MyViewController
            try handler.perform(listOfRequests, on: pixelBuffer)
        } catch {
            print(error)
        }
    }
}
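For context, Vision's tracking API is normally driven by a single long-lived VNSequenceRequestHandler, and each VNTrackObjectRequest is seeded with the observation returned for the previous frame rather than with a fresh detection every frame. A minimal sketch of that pattern (all names such as FaceTracker and lastObservation are illustrative, not from the code above):

```swift
import Vision

// Sketch only: one persistent sequence handler, and the observation from
// each frame's result is fed back in as the seed for the next frame.
final class FaceTracker {
    private let sequenceHandler = VNSequenceRequestHandler()
    private var lastObservation: VNDetectedObjectObservation?

    // Call once with the initial detection (e.g. from VNDetectFaceRectanglesRequest).
    func startTracking(_ observation: VNDetectedObjectObservation) {
        lastObservation = observation
    }

    // Call from captureOutput(_:didOutput:from:) for every frame.
    func track(in pixelBuffer: CVPixelBuffer) {
        guard let observation = lastObservation else { return }
        let request = VNTrackObjectRequest(detectedObjectObservation: observation) { request, _ in
            if let updated = request.results?.first as? VNDetectedObjectObservation {
                self.lastObservation = updated
            }
        }
        request.trackingLevel = .accurate
        try? sequenceHandler.perform([request], on: pixelBuffer)
    }
}
```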

My question is: is this actually possible, or am I going about it wrong?

The best solution I've found so far, which generates face features in real time using the latest Vision framework:

https://github.com/Weijay/AppleFaceDetection