core-image ios cifilter
I'm doing some tests with the CIPixellate filter. I have it working, but the resulting images come out in different sizes. I suppose that makes sense since I'm changing the input scale, but it's not what I expected: I assumed it would scale within the rect of the input image.

Am I misunderstanding / using the wrong filter, or do I just need to crop the output image to the size I want?

Also, the inputCenter parameter is not clear to me from reading the headers / trial and error. Can anyone explain what that parameter is about?
NSMutableArray *tmpImages = [[NSMutableArray alloc] init];
for (int i = 0; i < 10; i++) {
    double scale = i * 4.0;
    UIImage *tmpImg = [self applyCIPixelateFilter:self.faceImage withScale:scale];
    printf("tmpImg width: %f height: %f\n", tmpImg.size.width, tmpImg.size.height);
    [tmpImages addObject:tmpImg];
}
tmpImg width: 480.000000 height: 640.000000
tmpImg width: 484.000000 height: 644.000000
tmpImg width: 488.000000 height: 648.000000
tmpImg width: 492.000000 height: 652.000000
tmpImg width: 496.000000 height: 656.000000
tmpImg width: 500.000000 height: 660.000000
tmpImg width: 504.000000 height: 664.000000
tmpImg width: 508.000000 height: 668.000000
tmpImg width: 512.000000 height: 672.000000
tmpImg width: 516.000000 height: 676.000000
- (UIImage *)applyCIPixelateFilter:(UIImage *)fromImage withScale:(double)scale
{
    /*
     Makes an image blocky by mapping the image to colored squares whose color
     is defined by the replaced pixels.

     Parameters:
       inputImage:  A CIImage object whose display name is Image.
       inputCenter: A CIVector object whose attribute type is
                    CIAttributeTypePosition and whose display name is Center.
                    Default value: [150 150]
       inputScale:  An NSNumber object whose attribute type is
                    CIAttributeTypeDistance and whose display name is Scale.
                    Default value: 8.00
     */
    CIContext *context = [CIContext contextWithOptions:nil];
    CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
    CIImage *inputImage = [[CIImage alloc] initWithImage:fromImage];
    CIVector *vector = [CIVector vectorWithX:fromImage.size.width / 2.0f
                                           Y:fromImage.size.height / 2.0f];
    [filter setDefaults];
    [filter setValue:vector forKey:@"inputCenter"];
    [filter setValue:[NSNumber numberWithDouble:scale] forKey:@"inputScale"];
    [filter setValue:inputImage forKey:@"inputImage"];

    CGImageRef cgImage = [context createCGImage:filter.outputImage
                                       fromRect:filter.outputImage.extent];
    UIImage *newImage = [UIImage imageWithCGImage:cgImage
                                            scale:1.0f
                                      orientation:fromImage.imageOrientation];
    CGImageRelease(cgImage);
    return newImage;
}
Sometimes inputScale doesn't evenly divide your image, which is when I found I got different-sized output images.

For example, with inputScale = 0 or 1 the output image size is exactly right.

I found that the way the extra space around the image is centered varies "opaquely" with inputCenter. That is, I haven't taken the time to figure out exactly how (I was setting it from the tap location in a view).

My solution for the different sizes was to re-render the image into the extent of the input image's size; in my case (on Apple Watch) I used a black background.
CIFilter *pixelateFilter = [CIFilter filterWithName:@"CIPixellate"];
[pixelateFilter setDefaults];
[pixelateFilter setValue:[CIImage imageWithCGImage:editImage.CGImage] forKey:kCIInputImageKey];
[pixelateFilter setValue:@(amount) forKey:@"inputScale"];
[pixelateFilter setValue:vector forKey:@"inputCenter"];
CIImage *result = [pixelateFilter valueForKey:kCIOutputImageKey];

CIContext *context = [CIContext contextWithOptions:nil];
CGRect extent = [result extent];
CGImageRef cgImage = [context createCGImage:result fromRect:extent];

// Re-render into a context the size of the original image, flipping the
// coordinate system for CGContextDrawImage and filling the background first.
UIGraphicsBeginImageContextWithOptions(editImage.size, YES, [editImage scale]);
CGContextRef ref = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ref, 0, editImage.size.height);
CGContextScaleCTM(ref, 1.0, -1.0);
CGContextSetFillColorWithColor(ref, backgroundFillColor.CGColor);
CGRect drawRect = (CGRect){{0, 0}, editImage.size};
CGContextFillRect(ref, drawRect);
CGContextDrawImage(ref, drawRect, cgImage);
UIImage *filledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
returnImage = filledImage;
CGImageRelease(cgImage);
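If you don't need the background fill, a simpler alternative (a sketch, untested; `editImage` and `pixelateFilter` as above) is to crop the filter's output back to the input image's extent before rendering, which discards the padding CIPixellate adds when inputScale doesn't divide the image evenly:

```objc
// Sketch: crop the pixellated output to the original extent.
CIImage *input = [CIImage imageWithCGImage:editImage.CGImage];
CIImage *cropresult = [pixelateFilter.outputImage imageByCroppingToRect:input.extent];

CIContext *cropContext = [CIContext contextWithOptions:nil];
CGImageRef croppedRef = [cropContext createCGImage:cropresult fromRect:input.extent];
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef
                                            scale:editImage.scale
                                      orientation:editImage.imageOrientation];
CGImageRelease(croppedRef);
```

Note that cropping just cuts the padding off, so blocks at the edges may be partial; the re-render approach above keeps the whole blocks and fills the difference with the background color instead.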
If you intend to stick with your implementation, I would suggest at least changing how you extract the UIImage so that it uses the original image's scale (not to be confused with the CIFilter scale):
UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:fromImage.scale orientation:fromImage.imageOrientation];