VNFaceObservation BoundingBox Not Scaling In Portrait Mode
For reference, this stems from an issue with the Vision API. I'm using Vision to detect faces in an image via a VNDetectFaceRectanglesRequest, and it correctly determines the number of faces in the image and provides a boundingBox for each face.
My problem is that because my UIImageView (which contains the UIImage in question) uses the .scaleAspectFit content mode, I'm having great difficulty drawing the bounding boxes correctly in portrait mode (landscape works fine).
Here is my code:
func detectFaces(image: UIImage) {
    let detectFaceRequest = VNDetectFaceRectanglesRequest { (request, error) in
        if let results = request.results as? [VNFaceObservation] {
            for faceObservation in results {
                let boundingRect = faceObservation.boundingBox

                let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -self.mainImageView.frame.size.height)
                let translate = CGAffineTransform.identity.scaledBy(x: self.mainImageView.frame.size.width, y: self.mainImageView.frame.size.height)
                let facebounds = boundingRect.applying(translate).applying(transform)

                let mask = CAShapeLayer()
                var maskLayer = [CAShapeLayer]()

                mask.frame = facebounds
                mask.backgroundColor = UIColor.yellow.cgColor
                mask.cornerRadius = 10
                mask.opacity = 0.3
                mask.borderColor = UIColor.yellow.cgColor
                mask.borderWidth = 2.0

                maskLayer.append(mask)
                self.mainImageView.layer.insertSublayer(mask, at: 1)
            }
        }
    }

    let vnImage = VNImageRequestHandler(cgImage: image.cgImage!, options: [:])
    try? vnImage.perform([detectFaceRequest])
}
Here is the end result of what I'm seeing. Note that the boxes are correct in the X position, but mostly inaccurate in the Y position when in portrait.
**Incorrect placement in portrait**
**Correct placement in landscape**
The VNFaceObservation bounding box is normalized to the processed image. From the documentation:
The bounding box of detected object. The coordinates are normalized to the dimensions of the processed image, with the origin at the image's lower-left corner.
So you can find the correct size/frame for the detected face with a simple calculation:
let boundingBox = observation.boundingBox
let size = CGSize(width: boundingBox.width * imageView.bounds.width,
height: boundingBox.height * imageView.bounds.height)
let origin = CGPoint(x: boundingBox.minX * imageView.bounds.width,
y: (1 - observation.boundingBox.minY) * imageView.bounds.height - size.height)
Then you can compose the CAShapeLayer rect like below:
layer.frame = CGRect(origin: origin, size: size)
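As a minimal sketch of the same idea, the conversion can be wrapped in one helper that returns a ready-to-add layer. It relies on the same assumption the calculation above makes (the displayed image fills the image view's bounds), and highlightLayer(for:in:) is a hypothetical name, not a Vision API:

import UIKit
import Vision

// Sketch: convert a normalized Vision bounding box into a CAShapeLayer
// positioned in the image view's coordinate space. Assumes the image
// fills the view's bounds (no aspect-fit letterboxing).
func highlightLayer(for observation: VNFaceObservation, in imageView: UIImageView) -> CAShapeLayer {
    let boundingBox = observation.boundingBox

    // Scale the normalized box up to view coordinates.
    let size = CGSize(width: boundingBox.width * imageView.bounds.width,
                      height: boundingBox.height * imageView.bounds.height)

    // Vision's origin is the image's lower-left corner; UIKit's is the
    // upper-left, so flip the Y coordinate.
    let origin = CGPoint(x: boundingBox.minX * imageView.bounds.width,
                         y: (1 - boundingBox.minY) * imageView.bounds.height - size.height)

    let layer = CAShapeLayer()
    layer.frame = CGRect(origin: origin, size: size)
    layer.borderColor = UIColor.yellow.cgColor
    layer.borderWidth = 2.0
    return layer
}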
I think you have to provide the correct orientation to the CIImage when sending it for processing and face detection. As @Pawel Chmiel mentions in his blog post:
What is important here is that we need to provide the right orientation, because face detection is really sensitive at this point, and rotated image may cause no results.
let ciImage = CIImage(cvImageBuffer: pixelBuffer!, options: attachments as! [String : Any]?)
//leftMirrored for front camera
let ciImageWithOrientation = ciImage.applyingOrientation(Int32(UIImageOrientation.leftMirrored.rawValue))
For the front camera, we have to use leftMirrored orientation
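Along the same lines, a sketch of how the orientation can be handed to Vision directly when the request handler is created (assuming the input is a UIImage rather than a camera buffer). The VNImageRequestHandler(cgImage:orientation:options:) initializer is part of Vision; the cgOrientation(from:) mapping below is a hand-written helper, not an SDK function:

import UIKit
import Vision
import ImageIO

// Map UIKit's orientation to the CGImagePropertyOrientation value Vision expects.
func cgOrientation(from uiOrientation: UIImage.Orientation) -> CGImagePropertyOrientation {
    switch uiOrientation {
    case .up: return .up
    case .down: return .down
    case .left: return .left
    case .right: return .right
    case .upMirrored: return .upMirrored
    case .downMirrored: return .downMirrored
    case .leftMirrored: return .leftMirrored
    case .rightMirrored: return .rightMirrored
    @unknown default: return .up
    }
}

func detectFaces(in image: UIImage, completion: @escaping ([VNFaceObservation]) -> Void) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectFaceRectanglesRequest { request, _ in
        completion(request.results as? [VNFaceObservation] ?? [])
    }

    // Passing the orientation here lets Vision rotate the pixels internally,
    // so portrait photos are analyzed the right way up before detection runs.
    let handler = VNImageRequestHandler(cgImage: cgImage,
                                        orientation: cgOrientation(from: image.imageOrientation),
                                        options: [:])
    try? handler.perform([request])
}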