Sou*_*jit 1 crop objective-c core-image face-detection
I need to crop one or more faces out of a given image and use the cropped face images elsewhere. I am using CIDetectorTypeFace from Core Image. The problem is that the new UIImage containing only the detected face needs to be larger, because the hair or the lower jaw gets cut off. How do I increase the size of the rect used in initWithFrame:faceFeature.bounds? Sample code I am using:
CIImage* image = [CIImage imageWithCGImage:staticBG.image.CGImage];
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                              forKey:CIDetectorAccuracy]];
NSArray* features = [detector featuresInImage:image];
for (CIFaceFeature* faceFeature in features)
{
    UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
    faceView.layer.borderWidth = 1;
    faceView.layer.borderColor = [[UIColor redColor] CGColor];
    [staticBG addSubview:faceView];

    // cropping the face
    CGImageRef imageRef = CGImageCreateWithImageInRect([staticBG.image CGImage], faceFeature.bounds);
    [resultView setImage:[UIImage imageWithCGImage:imageRef]];
    CGImageRelease(imageRef);
}
Note: the red box I use to show the detected face region does not match the cropped image at all. Maybe I am not displaying the frame correctly, but since I do not need to show the frame — I really just need the cropped face — I am not too worried about it.
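The mismatch between the red box and the crop is likely a coordinate-space issue: CIDetector reports bounds in Core Image's bottom-left-origin coordinate system, while UIView frames and CGImageCreateWithImageInRect use a top-left origin. A minimal sketch of the vertical flip, assuming the image is displayed at its natural size (no scaling between view points and image pixels):

```objective-c
// faceFeature.bounds has a bottom-left origin (Core Image convention).
// Flip it vertically into the top-left-origin space that UIView frames
// and CGImageCreateWithImageInRect expect.
CGRect faceRect = faceFeature.bounds;
CGFloat imageHeight = staticBG.image.size.height;
faceRect.origin.y = imageHeight - faceRect.origin.y - faceRect.size.height;
// faceRect can now be used both for the red-border view and as the crop rect.
```

If the image view scales its image (e.g. aspect-fit), the rect would additionally need to be scaled by the ratio between the view size and the image size.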
Not sure, but you could try:
CGRect biggerRectangle = CGRectInset(faceFeature.bounds, someNegativeCGFloatToIncreaseSizeForXAxis, someNegativeCGFloatToIncreaseSizeForYAxis);
CGImageRef imageRef = CGImageCreateWithImageInRect([staticBG.image CGImage], biggerRectangle);
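One caveat with negative insets: if the enlarged rectangle extends past the image edges, CGImageCreateWithImageInRect will silently clip it, which can shift the crop unexpectedly. A hedged sketch that grows the rect by a chosen fraction (the 0.3 factor is just an illustrative value) and then clamps it to the image bounds:

```objective-c
// Negative insets grow the rect; 30% per axis is an arbitrary starting point.
CGRect faceRect = faceFeature.bounds;
CGFloat dx = -faceRect.size.width  * 0.3f;
CGFloat dy = -faceRect.size.height * 0.3f;
CGRect biggerRectangle = CGRectInset(faceRect, dx, dy);

// Clamp to the image so the crop rect never leaves the image bounds.
CGRect imageRect = CGRectMake(0, 0,
                              staticBG.image.size.width,
                              staticBG.image.size.height);
biggerRectangle = CGRectIntersection(biggerRectangle, imageRect);

CGImageRef imageRef = CGImageCreateWithImageInRect(staticBG.image.CGImage, biggerRectangle);
```

CGRectIntersection returns the overlap of the two rects, so faces near the border simply get a smaller margin instead of an out-of-bounds crop.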