
How do I create a simple camera app in Swift using CoreML that does not take live input?

I have been trying to create a simple camera image-recognition app in Xcode and Swift that lets the user take a picture. The picture is then fed into an already trained Core ML model, and the predicted class with its confidence is output to a label.

I have searched many sites, but all I can find are tutorials like this one:

https://medium.freecodecamp.org/ios-coreml-vision-image-recognition-3619cf319d0b

which recognize images from live input. I don't want it to be live; I just want to let someone take a picture. I would like to know how to convert this code so that it does not take live input:

import UIKit
import AVFoundation
import Vision

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
let label: UILabel = {
    let label = UILabel()
    label.textColor = .white
    label.translatesAutoresizingMaskIntoConstraints = false
    label.text = "Label"
    label.font = label.font.withSize(30)
    return label
}()
override func viewDidLoad() {

    super.viewDidLoad()

    // establish the capture session and add the label
    setupCaptureSession()
    view.addSubview(label)
    setupLabel()
    // Do any additional setup after loading the view, typically from a nib.
}
func setupCaptureSession() {
    // create a new capture session
    let captureSession = AVCaptureSession()

    // find the available cameras
    let availableDevices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .back).devices

    do {
        // select a camera
        if let captureDevice = availableDevices.first {
            captureSession.addInput(try AVCaptureDeviceInput(device: captureDevice))
        }
    } catch {
        // print an error if the camera is not available
        print(error.localizedDescription)
    }

    // setup the video output to the screen and add output to our capture session
    let captureOutput = AVCaptureVideoDataOutput()
    captureSession.addOutput(captureOutput)
    let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer.frame = view.frame
    view.layer.addSublayer(previewLayer)

    // buffer the video and start the capture session
    captureOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    captureSession.startRunning()
}

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // load our Core ML model
    guard let model = try? VNCoreMLModel(for: aslModel().model) else { return }

    // run an inference with CoreML
    let request = VNCoreMLRequest(model: model) { (finishedRequest, error) in

        // grab the inference results
        guard let results = finishedRequest.results as? [VNClassificationObservation] else { return }

        // grab the highest confidence result
        guard let observation = results.first else { return }

        // create the label text components
        let predclass = "\(observation.identifier)"
        let predconfidence = String(format: "%.02f%%", observation.confidence * 100)

        // set the label text
        DispatchQueue.main.async(execute: {
            self.label.text = "\(predclass) \(predconfidence)"
        })
    }


    // create a Core Video pixel buffer which is an image buffer that holds pixels in main memory
    // Applications generating frames, compressing or decompressing video, or using Core Image
    // can all make use of Core Video pixel buffers
    guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    // execute the request
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
}
func setupLabel() {
    // constrain the label in the center
    label.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true

    // constrain the the label to 50 pixels from the bottom
    label.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: -50).isActive = true
}

override func didReceiveMemoryWarning() {
    super.didReceiveMemoryWarning()
    // Dispose of any resources that can be recreated.
}

}

Right now, as stated before, it takes live image input.
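
To make the question more concrete, here is a rough sketch of what I imagine the non-live version would look like (untested; PhotoViewController and takePhoto() are just placeholder names, and aslModel is the same model class as above): keep the preview layer, but swap AVCaptureVideoDataOutput for AVCapturePhotoOutput and only run the Core ML request when a button is tapped.

import UIKit
import AVFoundation
import Vision

class PhotoViewController: UIViewController, AVCapturePhotoCaptureDelegate {

    let captureSession = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()
    let label = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()
        setupCaptureSession()
        // (button and label layout omitted for brevity)
    }

    func setupCaptureSession() {
        // same camera discovery as before, but with a photo output instead of a video output
        let availableDevices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .back).devices
        if let captureDevice = availableDevices.first,
            let input = try? AVCaptureDeviceInput(device: captureDevice) {
            captureSession.addInput(input)
        }
        captureSession.addOutput(photoOutput)

        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.frame = view.frame
        view.layer.addSublayer(previewLayer)
        captureSession.startRunning()
    }

    // hooked up to a "take photo" button
    @objc func takePhoto() {
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    // called once per captured photo, instead of once per video frame
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let data = photo.fileDataRepresentation(),
            let image = UIImage(data: data),
            let cgImage = image.cgImage,
            let model = try? VNCoreMLModel(for: aslModel().model) else { return }

        let request = VNCoreMLRequest(model: model) { (finishedRequest, error) in
            guard let results = finishedRequest.results as? [VNClassificationObservation],
                let observation = results.first else { return }
            DispatchQueue.main.async {
                self.label.text = String(format: "%@ %.02f%%", observation.identifier, observation.confidence * 100)
            }
        }
        try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    }
}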

I wrote a post on Medium about this, but it is in Portuguese. See whether Medium's automatic translation lets you understand the post.

Swift + Core ML
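
The core idea is the same as in your code: run a VNCoreMLRequest, just on one still image instead of on every video frame. Here is a minimal sketch (illustrative, not copied from the post; aslModel is the model class from your question) using UIImagePickerController so you don't have to manage the capture session yourself:

import UIKit
import Vision

class PickerViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    let label = UILabel()

    // present the system camera; the user takes one photo
    // (button and label layout omitted for brevity)
    @objc func takePhoto() {
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.delegate = self
        present(picker, animated: true)
    }

    // classify the single returned image, exactly like the live version but only once
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)

        guard let image = info[.originalImage] as? UIImage,
            let cgImage = image.cgImage,
            let model = try? VNCoreMLModel(for: aslModel().model) else { return }

        let request = VNCoreMLRequest(model: model) { (finishedRequest, error) in
            guard let results = finishedRequest.results as? [VNClassificationObservation],
                let observation = results.first else { return }
            DispatchQueue.main.async {
                self.label.text = String(format: "%@ %.02f%%", observation.identifier, observation.confidence * 100)
            }
        }
        try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    }
}

Note that using the camera still requires the NSCameraUsageDescription key in Info.plist, just like the live-capture version.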

I hope it helps.