Use a Create ML object detection model in Swift

Hello, I created an object detection model in Create ML and imported it into my Swift project, but I don't know how to use it. Basically, I just want to give the model an input and receive an output. I opened the ML model's Predictions tab and found the input and output variables, but I don't know how to implement this in code. I searched the internet for answers and found several code snippets for running an ML model, but I couldn't get any of them to work.

Here is the ML model (screenshot): ML Model predictions

Here is the code I tried:

let model = TestObjectModel()

guard let modelOutput = try? model.prediction(imagePath: "images_(2)" as! CVPixelBuffer, iouThreshold: 0.5, confidenceThreshold: 0.5) else {
    fatalError("Unexpected runtime error.")
}

print(modelOutput)

When I run the code, I get this error:

error: Execution was interrupted, reason: EXC_BREAKPOINT (code=1, subcode=0x106c345c0).
The process has been left at the point where it was interrupted, use "thread return -x" to return to the state before expression evaluation.

OK, first you have to figure out what type of input your model declares. You can see it when you click on your model in the project navigator.

For example:

import CoreML

// Create an MLMultiArray that matches the input shape your model declares.
let mlArray = try? MLMultiArray(shape: [1024], dataType: .float32)
mlArray![0] = 1.0 // give your array some data (fill every index you need)

let model = TestObjectModel()
let input = TestObjectModelInput(input: mlArray!) // the generated input class

do {
    let options = MLPredictionOptions()
    options.usesCPUOnly = true
    let prediction = try model.prediction(input: input, options: options)
    // Now you can use `prediction`; this is your output.
} catch {
    fatalError(error.localizedDescription) // Error computing NN outputs error
}
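A side note on `usesCPUOnly`: it is deprecated on newer SDKs, and the usual replacement is to restrict the compute units when you load the model through `MLModelConfiguration`. A minimal sketch, assuming Xcode's generated `init(configuration:)` for the question's `TestObjectModel` class:

let config = MLModelConfiguration()
config.computeUnits = .cpuOnly // replaces options.usesCPUOnly = true
let model = try? TestObjectModel(configuration: config)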

Another example, with an image as the model input:

do {
    // Resize to the model's input size, then convert to a CVPixelBuffer.
    if let resizedImage = resize(image: image, newSize: CGSize(width: 416, height: 416)),
       let pixelBuffer = resizedImage.toCVPixelBuffer() {
        let prediction = try model.prediction(image: pixelBuffer)
        let value = prediction.output[0].intValue // adapt this to your model's output
        print(value)
    }
} catch {
    print("Error while doing predictions: \(error)")
}


import UIKit

// Redraws the image into a new context of the given size.
func resize(image: UIImage, newSize: CGSize) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
    image.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
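Note that 416 × 416 is only an example size; resize to whatever input dimensions your own model declares in its Predictions tab.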
extension UIImage {
    /// Renders the image into a 32ARGB CVPixelBuffer of the same size.
    func toCVPixelBuffer() -> CVPixelBuffer? {
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue, kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer : CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(self.size.width), Int(self.size.height), kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
        guard (status == kCVReturnSuccess) else {
            return nil
        }

        CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)

        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pixelData, width: Int(self.size.width), height: Int(self.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)

        context?.translateBy(x: 0, y: self.size.height)
        context?.scaleBy(x: 1.0, y: -1.0)

        UIGraphicsPushContext(context!)
        self.draw(in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
        UIGraphicsPopContext()
        CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))

        return pixelBuffer
    }
}
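
Putting the pieces together for the model from the question: the crash above comes from force-casting the string "images_(2)" to CVPixelBuffer, while the generated prediction method needs an actual pixel buffer. Below is a minimal sketch that reuses the two helpers defined above, assuming the generated interface shown in the question (`prediction(imagePath:iouThreshold:confidenceThreshold:)`) and a 416 × 416 input size; adjust both to what your model actually declares:

import UIKit
import CoreML

func runDetection() {
    guard let image = UIImage(named: "images_(2)"),
          let resized = resize(image: image, newSize: CGSize(width: 416, height: 416)),
          let pixelBuffer = resized.toCVPixelBuffer() else {
        print("Could not load or convert the image.")
        return
    }
    do {
        let model = try TestObjectModel(configuration: MLModelConfiguration())
        let output = try model.prediction(imagePath: pixelBuffer,
                                          iouThreshold: 0.5,
                                          confidenceThreshold: 0.5)
        // Create ML object detectors typically expose two MLMultiArrays,
        // commonly named "confidence" and "coordinates"; check the model's
        // Predictions tab for the exact names.
        print(output)
    } catch {
        print("Prediction failed: \(error)")
    }
}

Alternatively, Vision can drive the same Core ML model and takes care of the resizing and pixel-buffer conversion for you; for object detection models it returns `VNRecognizedObjectObservation` values with labels and normalized bounding boxes. A sketch of that route, using only standard Vision API:

import Vision
import CoreML

func detectObjects(in cgImage: CGImage) throws {
    let coreMLModel = try TestObjectModel(configuration: MLModelConfiguration()).model
    let request = VNCoreMLRequest(model: try VNCoreMLModel(for: coreMLModel)) { request, _ in
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in results {
            // boundingBox is normalized to 0...1, origin at the bottom left.
            print(observation.labels.first?.identifier ?? "?",
                  observation.confidence,
                  observation.boundingBox)
        }
    }
    request.imageCropAndScaleOption = .scaleFill
    try VNImageRequestHandler(cgImage: cgImage).perform([request])
}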