Check if subsection of UIImage is light or dark

I'm trying to overlay a chevron button that lets the user dismiss the current view. The chevron should be light on dark images and dark on light images. I've attached a screenshot of what I'm describing.

However, there is a significant performance hit when I try to calculate the lightness/darkness of an image, which I'm currently doing like this (operating on the `CGImage`):

var isDark: Bool {
    guard let imageData = dataProvider?.data else { return false }
    guard let ptr = CFDataGetBytePtr(imageData) else { return false }
    let length = CFDataGetLength(imageData)
    // Treat the image as dark once more than 45% of its pixels are dark.
    let threshold = Int(Double(width * height) * 0.45)
    var darkPixels = 0
    // Walk the raw bytes, assuming 4 bytes per pixel (RGBA).
    for i in stride(from: 0, to: length, by: 4) {
        let r = ptr[i]
        let g = ptr[i + 1]
        let b = ptr[i + 2]
        // Rec. 601 luma weighting.
        let luminance = (0.299 * Double(r) + 0.587 * Double(g) + 0.114 * Double(b))
        if luminance < 150 {
            darkPixels += 1
            if darkPixels > threshold {
                return true
            }
        }
    }
    return false
}

It also doesn't work well when, for example, the specific area underneath the chevron is dark but the rest of the image is light.

I'd like to run the calculation on only a small subsection of the image, since the chevron is quite small. I tried cropping the image with CGImage's cropping(to rect: CGRect), but the challenge is that the image view is set to aspect fill, which means the top of the UIImageView's frame is not the top of the UIImage (the image may be zoomed in and centered, for example). Is there a way to isolate just the part of the image that appears underneath the chevron's frame, after the image has been adjusted by the aspect fill?

Edit

Thanks to the first link in the accepted answer, I was able to get this working. I created a series of extensions that I think should work for situations other than mine.

extension UIImage {
    var isDark: Bool {
        return cgImage?.isDark ?? false
    }
}

extension CGImage {
    var isDark: Bool {
        guard let imageData = dataProvider?.data else { return false }
        guard let ptr = CFDataGetBytePtr(imageData) else { return false }
        let length = CFDataGetLength(imageData)
        let threshold = Int(Double(width * height) * 0.45)
        var darkPixels = 0
        for i in stride(from: 0, to: length, by: 4) {
            let r = ptr[i]
            let g = ptr[i + 1]
            let b = ptr[i + 2]
            let luminance = (0.299 * Double(r) + 0.587 * Double(g) + 0.114 * Double(b))
            if luminance < 150 {
                darkPixels += 1
                if darkPixels > threshold {
                    return true
                }
            }
        }
        return false
    }

    // Crop using point-based coordinates, converting them to pixel coordinates with the given scale.
    func cropping(to rect: CGRect, scale: CGFloat) -> CGImage? {
        let scaledRect = CGRect(x: rect.minX * scale, y: rect.minY * scale, width: rect.width * scale, height: rect.height * scale)
        return self.cropping(to: scaledRect)
    }
}

extension UIImageView {
    // Crops the displayed image down to `subsection` (given in the view's coordinate space),
    // compensating for aspect fill, and checks whether that region is mostly dark.
    func hasDarkImage(at subsection: CGRect) -> Bool {
        guard let image = image, let aspectSize = aspectFillSize() else { return false }
        let scale = image.size.width / frame.size.width
        let cropRect = CGRect(x: (aspectSize.width - frame.width) / 2,
                              y: (aspectSize.height - frame.height) / 2,
                              width: aspectSize.width,
                              height: frame.height)
        let croppedImage = image.cgImage?
            .cropping(to: cropRect, scale: scale)?
            .cropping(to: subsection, scale: scale)
        return croppedImage?.isDark ?? false
    }

    // The size the image is drawn at when the view's content mode is .scaleAspectFill.
    private func aspectFillSize() -> CGSize? {
        guard let image = image else { return nil }
        var aspectFillSize = CGSize(width: frame.width, height: frame.height)
        let widthScale = frame.width / image.size.width
        let heightScale = frame.height / image.size.height
        if heightScale > widthScale {
            aspectFillSize.width = heightScale * image.size.width
        }
        else if widthScale > heightScale {
            aspectFillSize.height = widthScale * image.size.height
        }
        return aspectFillSize
    }
}
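
As a rough usage sketch (updateChevronTint, chevron, and the other names here are illustrative, not part of the extensions above), the chevron's tint could then be chosen like this:

// Hypothetical usage: pick the chevron tint based on the pixels behind it.
// frameInImageView must already be expressed in the image view's coordinate space.
func updateChevronTint(_ chevron: UIButton, over imageView: UIImageView, at frameInImageView: CGRect) {
    chevron.tintColor = imageView.hasDarkImage(at: frameInImageView) ? .white : .black
}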

There are a few options here for finding the size of the image once it has been fitted to the view: How to know the image size after applying aspect fit for the image in an UIImageView
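
As a minimal sketch of the aspect-fit case, AVFoundation's AVMakeRect(aspectRatio:insideRect:) does that computation directly (the aspect-fill case needs the different math shown in aspectFillSize() above):

import AVFoundation
import UIKit

// Sketch: the rect the image occupies inside the view under aspect *fit*.
func aspectFitRect(for image: UIImage, in imageView: UIImageView) -> CGRect {
    return AVMakeRect(aspectRatio: image.size, insideRect: imageView.bounds)
}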

Once you've got that, you can figure out where the chevron sits (you may need to convert its frame first: https://developer.apple.com/documentation/uikit/uiview/1622498-convert).
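
For example, with hypothetical chevronButton and imageView views that live in the same view hierarchy, the conversion is a one-liner:

// Express the chevron's frame in the image view's coordinate space
// (chevronButton and imageView are assumed names).
let chevronFrameInImageView = chevronButton.convert(chevronButton.bounds, to: imageView)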

If performance is still lacking, I'd look into using CoreImage to perform the calculation: https://www.hackingwithswift.com/example-code/media/how-to-read-the-average-color-of-a-uiimage-using-ciareaaverage

There are a few ways to go about it with CoreImage, but getting the average is the simplest.
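
A minimal sketch of that approach, reusing the same luminance threshold as the byte loop above (note that the region is given in Core Image coordinates, whose origin is at the bottom-left):

import CoreImage
import UIKit

// Average the pixels in `region` with CIAreaAverage, then decide light vs dark
// from the average color's luminance.
func isDarkAverage(of image: UIImage, in region: CGRect) -> Bool? {
    guard let input = CIImage(image: image) else { return nil }
    let filter = CIFilter(name: "CIAreaAverage",
                          parameters: [kCIInputImageKey: input,
                                       kCIInputExtentKey: CIVector(cgRect: region)])
    guard let output = filter?.outputImage else { return nil }

    // Render the 1x1 average pixel into a 4-byte RGBA buffer.
    var pixel = [UInt8](repeating: 0, count: 4)
    let context = CIContext(options: [.workingColorSpace: kCFNull!])
    context.render(output,
                   toBitmap: &pixel,
                   rowBytes: 4,
                   bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                   format: .RGBA8,
                   colorSpace: nil)

    // Same Rec. 601 weighting and threshold as isDark above.
    let luminance = 0.299 * Double(pixel[0]) + 0.587 * Double(pixel[1]) + 0.114 * Double(pixel[2])
    return luminance < 150
}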