ObjectDetection: Output differs between CreateML preview and programmatic detection
I want to extract known objects from an image. I created an ObjectDetector model with the CreateML app. When I test it with the CreateML preview, detection works fine, but when I run it from code, something seems to go wrong. Below is the sample code I wrote. I am using the bounding box to save cropped images, but they are completely different from the predictions shown in the CreateML preview. I have tried all the options; please let me know what is wrong with my code.
import Cocoa
import Vision
import CoreML

func extractSpecificSectionInImage(image: NSImage) {
    var requests = [VNRequest]()
    var picCount = 1
    let modelURL = Bundle.main.url(forResource: "ObjectDetection", withExtension: "mlmodelc")!
    do {
        let visionModel = try VNCoreMLModel(for: MLModel(contentsOf: modelURL))
        let objectRecognition = VNCoreMLRequest(model: visionModel, completionHandler: { request, error in
            if let results = request.results {
                for observation in results where observation is VNRecognizedObjectObservation {
                    guard let objectObservation = observation as? VNRecognizedObjectObservation else {
                        continue
                    }
                    // Convert the normalized bounding box into image coordinates.
                    let cropsize = VNImageRectForNormalizedRect(objectObservation.boundingBox,
                                                                Int(image.size.width),
                                                                Int(image.size.height))
                    let topLabelObservation = objectObservation.labels[0] // highest-confidence label
                    guard let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) else { break }
                    // Crop the detected region out of the original image.
                    guard let cutImageRef: CGImage = cgImage.cropping(to: cropsize) else { break }
                    let size = NSSize(width: cropsize.width, height: cropsize.height)
                    let objectImg = NSImage(cgImage: cutImageRef, size: size)
                    // save(as:) is a custom NSImage helper extension (not shown here).
                    if objectImg.save(as: "CroppedImage\(picCount)") {
                        picCount += 1
                    }
                }
            }
        })
        objectRecognition.imageCropAndScaleOption = .scaleFill
        guard let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
            print("Failed to get cgImage from input image")
            return
        }
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        do {
            try handler.perform([objectRecognition])
        } catch {
            print(error)
        }
        requests = [objectRecognition]
    } catch let error as NSError {
        print("Model loading went wrong: \(error)")
    }
}
You didn't say what is wrong with the bounding boxes, but my guess is that they are correct and simply not drawn in the right place. I wrote a blog post about this: https://machinethink.net/blog/bounding-boxes/
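For reference, here is a minimal sketch of the coordinate fix that post describes, assuming that is indeed the cause here. Vision returns boundingBox normalized to [0, 1] with a bottom-left origin, while CGImage.cropping(to:) expects pixel coordinates with a top-left origin, and NSImage.size is in points rather than pixels. The helper name pixelCropRect(for:in:) is illustrative, not an existing API:

import Vision
import CoreGraphics

// Sketch: map a Vision bounding box to a rect usable with CGImage.cropping(to:).
func pixelCropRect(for observation: VNRecognizedObjectObservation,
                   in cgImage: CGImage) -> CGRect {
    // Scale the normalized rect to the backing *pixel* dimensions of the
    // CGImage; NSImage.size is in points and can differ on Retina displays.
    var rect = VNImageRectForNormalizedRect(observation.boundingBox,
                                            cgImage.width,
                                            cgImage.height)
    // Flip the y-axis: Vision's origin is bottom-left, CGImage's is top-left.
    rect.origin.y = CGFloat(cgImage.height) - rect.origin.y - rect.height
    return rect
}

Inside the completion handler, cropsize would then become pixelCropRect(for: objectObservation, in: cgImage) before calling cgImage.cropping(to:), which should produce crops that match the CreateML preview.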