I'm trying to overlay a chevron button that will allow the user to dismiss the current view. The chevron should be light on dark images and dark on light images. I've attached a screenshot of what I'm describing.
However, calculating how light or dark the image is has a significant performance impact. Here's how I'm doing it (operating on the CGImage):
var isDark: Bool {
    guard let imageData = dataProvider?.data else { return false }
    guard let ptr = CFDataGetBytePtr(imageData) else { return false }
    let length = CFDataGetLength(imageData)
    // Treat the image as "dark" once 45% of its pixels fall below the luminance threshold.
    let threshold = Int(Double(width * height) * 0.45)
    var darkPixels = 0
    // Walk the raw bytes, assuming 4 bytes per pixel (RGBA).
    for i in stride(from: 0, to: length, by: 4) {
        let r = ptr[i]
        let g = ptr[i + 1]
        let b = ptr[i + 2]
        // Standard Rec. 601 luminance weighting.
        let luminance = (0.299 * Double(r) + 0.587 * Double(g) + 0.114 * Double(b))
        if luminance < 150 {
            darkPixels += 1
            if darkPixels > threshold {
                return true
            }
        }
    }
    return false
}
Additionally, it doesn't work well when, for example, the particular area under the chevron is dark but the rest of the image is light.
I'd like to calculate only a small section of the image, since the chevron is quite small. I tried cropping the image with CGImage's cropping(to rect: CGRect), but the challenge is that the image is set to aspect fill, which means the top of the UIImageView's frame isn't the top of the UIImage (e.g. the image may be zoomed in and centered). Is there a way to isolate just the portion of the image that appears beneath the chevron's frame after the image has been adjusted for aspect fill?
Edit
Thanks to the first link in the accepted answer, I was able to get this working. I created a series of extensions that I think should work for cases beyond my own.
extension UIImage {
    var isDark: Bool {
        return cgImage?.isDark ?? false
    }
}

extension CGImage {
    var isDark: Bool {
        guard let imageData = dataProvider?.data else { return false }
        guard let ptr = CFDataGetBytePtr(imageData) else { return false }
        let length = CFDataGetLength(imageData)
        // Treat the image as "dark" once 45% of its pixels fall below the luminance threshold.
        let threshold = Int(Double(width * height) * 0.45)
        var darkPixels = 0
        // Walk the raw bytes, assuming 4 bytes per pixel (RGBA).
        for i in stride(from: 0, to: length, by: 4) {
            let r = ptr[i]
            let g = ptr[i + 1]
            let b = ptr[i + 2]
            let luminance = (0.299 * Double(r) + 0.587 * Double(g) + 0.114 * Double(b))
            if luminance < 150 {
                darkPixels += 1
                if darkPixels > threshold {
                    return true
                }
            }
        }
        return false
    }

    /// Crops using a rect expressed in points, scaled up into the image's pixel coordinates.
    func cropping(to rect: CGRect, scale: CGFloat) -> CGImage? {
        let scaledRect = CGRect(x: rect.minX * scale,
                                y: rect.minY * scale,
                                width: rect.width * scale,
                                height: rect.height * scale)
        return self.cropping(to: scaledRect)
    }
}

extension UIImageView {
    /// Returns true if the part of the (aspect-fill) image beneath `subsection` is mostly dark.
    func hasDarkImage(at subsection: CGRect) -> Bool {
        guard let image = image, let aspectSize = aspectFillSize() else { return false }
        let scale = image.size.width / frame.size.width
        // First crop away what aspect fill pushes outside the view,
        // then crop down to the requested subsection of the visible area.
        let cropRect = CGRect(x: (aspectSize.width - frame.width) / 2,
                              y: (aspectSize.height - frame.height) / 2,
                              width: aspectSize.width,
                              height: frame.height)
        let croppedImage = image.cgImage?
            .cropping(to: cropRect, scale: scale)?
            .cropping(to: subsection, scale: scale)
        return croppedImage?.isDark ?? false
    }

    /// The size the image occupies when scaled with aspect fill inside this view's frame.
    private func aspectFillSize() -> CGSize? {
        guard let image = image else { return nil }
        var aspectFillSize = CGSize(width: frame.width, height: frame.height)
        let widthScale = frame.width / image.size.width
        let heightScale = frame.height / image.size.height
        if heightScale > widthScale {
            aspectFillSize.width = heightScale * image.size.width
        } else if widthScale > heightScale {
            aspectFillSize.height = widthScale * image.size.height
        }
        return aspectFillSize
    }
}
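For illustration, here's a minimal usage sketch of these extensions. The chevronButton and imageView names, and the assumption that chevronRect is already expressed in the image view's coordinate space, are mine rather than part of the original code:

// Sketch: tint the chevron based on the brightness of the image beneath it.
// Assumes `chevronRect` is the chevron's frame in `imageView`'s coordinate space.
let needsLightChevron = imageView.hasDarkImage(at: chevronRect)
chevronButton.tintColor = needsLightChevron ? .white : .black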
There are a couple of options here for finding the size of your image once it's fitted into the view: How to know the image size after applying aspect fit for the image in an UIImageView
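As a sketch of one option commonly suggested for that question (not a quote of the linked answer, and it assumes aspect fit; aspect fill needs a calculation like the aspectFillSize() helper above), AVFoundation's AVMakeRect computes the rect an image occupies inside a bounding rect:

import AVFoundation
import UIKit

// Sketch: the rect an image occupies inside an image view under aspect-fit scaling.
func aspectFitRect(for imageView: UIImageView) -> CGRect? {
    guard let image = imageView.image else { return nil }
    return AVMakeRect(aspectRatio: image.size, insideRect: imageView.bounds)
}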
Once you've got that, you can figure out where the chevron lies (you may need to convert its frame first: https://developer.apple.com/documentation/uikit/uiview/1622498-convert).
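For example, assuming a chevron view and an imageView (both hypothetical names), that conversion might look like:

// Express the chevron's bounds in the image view's coordinate space.
let chevronRect = chevron.convert(chevron.bounds, to: imageView)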
If performance still isn't sufficient, I'd look into using CoreImage to perform the calculation: https://www.hackingwithswift.com/example-code/media/how-to-read-the-average-color-of-a-uiimage-using-ciareaaverage
There are a few ways to go about it with CoreImage, but getting the average is the simplest.
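For reference, here's a sketch of the CIAreaAverage approach described in that article; the averageColor(of:in:) function name and parameters are my own framing, adapted rather than copied:

import CoreImage
import UIKit

// Sketch: average color of a region of a UIImage using the CIAreaAverage filter.
// `region` is in the image's pixel coordinates (note: CIImage's origin is bottom-left).
func averageColor(of image: UIImage, in region: CGRect) -> UIColor? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIAreaAverage", parameters: [
              kCIInputImageKey: input,
              kCIInputExtentKey: CIVector(cgRect: region)
          ]),
          let output = filter.outputImage else { return nil }

    // CIAreaAverage produces a 1x1 image; render it into a 4-byte RGBA buffer.
    var bitmap = [UInt8](repeating: 0, count: 4)
    let context = CIContext(options: [.workingColorSpace: NSNull()])
    context.render(output,
                   toBitmap: &bitmap,
                   rowBytes: 4,
                   bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                   format: .RGBA8,
                   colorSpace: nil)

    return UIColor(red: CGFloat(bitmap[0]) / 255,
                   green: CGFloat(bitmap[1]) / 255,
                   blue: CGFloat(bitmap[2]) / 255,
                   alpha: CGFloat(bitmap[3]) / 255)
}

From that average you can compare a single luminance value against a threshold instead of counting dark pixels one by one on the CPU.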