
How can I use the object tracking API of the Vision framework on iOS 11?

// Seed the tracker with an initial bounding box (normalized coordinates)
NSError *error = nil;
CGRect rect = CGRectMake(0, 0, 0.3, 0.3);
VNSequenceRequestHandler *reqImages = [[VNSequenceRequestHandler alloc] init];
VNRectangleObservation *observeRect = [VNRectangleObservation observationWithBoundingBox:rect];
VNTrackRectangleRequest *reqRect = [[VNTrackRectangleRequest alloc] initWithRectangleObservation:observeRect];
NSArray<VNRequest *> *requests = @[reqRect];
BOOL bsucc = [reqImages performRequests:requests onCGImage:img.CGImage error:&error];

// Try to read back the tracked bounding box on the next frame
VNDetectRectanglesRequest *reqRectTrack = [VNDetectRectanglesRequest new];
NSArray<VNRequest *> *requestsTrack = @[reqRectTrack];
[reqImages performRequests:requestsTrack onCGImage:img.CGImage error:&error];

VNRectangleObservation *observe = [reqRectTrack.results firstObject];
CGRect boundingBox = observe.boundingBox;

Why is the boundingBox value incorrect?

And where can I find a demo of the Vision framework for iOS 11?

A demo of object tracking with the Vision framework can be found at this link:

https://github.com/jeffreybergier/Blog-Getting-Started-with-Vision

The blogger explains in detail how to get the demo working, and includes a gif showing a working build.

Hope this is what you're after.

Here is my simple example of using the Vision framework: https://github.com/artemnovichkov/iOS-11-by-Examples. I guess you have a problem with different coordinate systems. Note the rect conversions:

cameraLayer.metadataOutputRectConverted(fromLayerRect: originalRect)

cameraLayer.layerRectConverted(fromMetadataOutputRect: transformedRect)