When the user makes changes (cropping, red-eye removal, etc.) to a photo in the built-in Photos.app on iOS, the changes are not applied to the fullResolutionImage returned by the corresponding ALAssetRepresentation.

However, the changes are applied to the thumbnail and the fullScreenImage returned by the ALAssetRepresentation. In addition, information about the applied changes can be found in the ALAssetRepresentation's metadata dictionary under the key @"AdjustmentXMP".

I would like to apply these changes to the fullResolutionImage myself for consistency. I found that on iOS 6+, CIFilter's +filterArrayFromSerializedXMP:inputImageExtent:error: can convert this XMP metadata into an array of CIFilters:
ALAssetRepresentation *rep;
NSString *xmpString = rep.metadata[@"AdjustmentXMP"];
NSData *xmpData = [xmpString dataUsingEncoding:NSUTF8StringEncoding];

CIImage *image = [CIImage imageWithCGImage:rep.fullResolutionImage];

NSError *error = nil;
NSArray *filterArray = [CIFilter filterArrayFromSerializedXMP:xmpData
                                             inputImageExtent:image.extent
                                                        error:&error];
if (error) {
    NSLog(@"Error during CIFilter creation: %@", [error localizedDescription]);
}

CIContext *context = [CIContext contextWithOptions:nil];

for (CIFilter *filter in filterArray) {
    // Chain the filters: feed each filter's output into the next
    // (the canonical pattern for a deserialized XMP filter array).
    [filter setValue:image forKey:kCIInputImageKey];
    image = [filter outputImage];
}
I have been searching for an answer to this question for a few hours now, and I just can't figure it out. I want to add a Gaussian blur effect to the image when I press the button "Button". The user is the one adding the image.

I have created an action for the "Button" based on sources from SO and other places on the web. It will not work. What am I doing wrong? Any code would be appreciated.
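A minimal sketch of what such an action might look like (the outlet name imageView and the radius are assumptions; the result is rendered through a CIContext rather than handing a CIImage-backed UIImage straight to the view):

@IBAction func buttonTapped(_ sender: UIButton) {
    guard let source = imageView.image, let input = CIImage(image: source) else { return }
    // Clamp, blur, then crop back: CIGaussianBlur otherwise grows (and fades) the edges.
    let blurred = input
        .clampedToExtent()
        .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 8])
        .cropped(to: input.extent)
    if let cgImage = CIContext().createCGImage(blurred, from: blurred.extent) {
        imageView.image = UIImage(cgImage: cgImage)
    }
}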
I need to apply a black-and-white filter to a UIImage. I have a view containing a photo taken by the user, but I have no idea how to convert the image's colors.
- (void)viewDidLoad {
    [super viewDidLoad];
    self.navigationItem.title = NSLocalizedString(@"#Paint!", nil);
    imageView.image = image;
}
How can I achieve this?
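A minimal sketch in Swift (the same steps map one-to-one to Objective-C) using Core Image's built-in mono photo effect; CIPhotoEffectMono could also be swapped for CIColorControls with inputSaturation set to 0:

import CoreImage
import UIKit

func blackAndWhite(_ source: UIImage) -> UIImage? {
    guard let input = CIImage(image: source),
          let filter = CIFilter(name: "CIPhotoEffectMono") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    // Render the filter output to a real bitmap before wrapping it in a UIImage.
    guard let output = filter.outputImage,
          let cgImage = CIContext().createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: source.scale, orientation: source.imageOrientation)
}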
I have a UIImage loaded from a CIImage:
tempImage = [UIImage imageWithCIImage:ciImage];
The problem is that I need to crop tempImage to a specific CGRect, and the only way I know how to do that is by using a CGImage. The problem is that the iOS 6.0 documentation says:
CGImage
If the UIImage object was initialized using a CIImage object, the value of the property is NULL.
A. How do I convert from CIImage to CGImage? I'm using this code, but I have a memory leak (and can't figure out where):
+ (UIImage *)UIImageFromCIImage:(CIImage *)ciImage {
    CGSize size = ciImage.extent.size;
    UIGraphicsBeginImageContext(size);
    CGRect rect;
    rect.origin = CGPointZero;
    rect.size = size;
    // Drawing a CIImage-backed UIImage forces Core Image to render it.
    UIImage *remImage = [UIImage imageWithCIImage:ciImage];
    [remImage drawInRect:rect];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Under ARC these assignments are no-ops; the locals are released anyway.
    remImage = nil;
    ciImage = nil;
    return result;
}
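For reference, a minimal sketch (in Swift; the calls map directly to Objective-C) of the more direct route, which skips the intermediate UIImage draw entirely: have a CIContext render the CIImage into a CGImage, then crop that. Assumes a throwaway context for brevity; in real code you would create it once and reuse it:

import CoreImage
import UIKit

func cgImage(from ciImage: CIImage, croppedTo rect: CGRect) -> CGImage? {
    let context = CIContext(options: nil)   // expensive; reuse in production
    // createCGImage does the actual rendering; cropping a CGImage is cheap.
    guard let rendered = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return rendered.cropping(to: rect)
}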
The setup is simple.
import UIKit

class ViewController: UIViewController {
    @IBOutlet weak var bg: UIImageView!

    @IBAction func blur(_ sender: Any) {
        let inputImage = CIImage(cgImage: (bg.image?.cgImage)!)
        let filter = CIFilter(name: "CIGaussianBlur")
        filter?.setValue(inputImage, forKey: "inputImage")
        filter?.setValue(10, forKey: "inputRadius")
        let blurred = filter?.outputImage
        bg.image = UIImage(ciImage: blurred!)
    }
}
When I tap the button, the screen turns white. I can't figure out what I'm doing wrong. Does anyone know?
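A sketch of the usual fix, keeping the same outlet and types as above: a UIImageView does not reliably display a UIImage that is only backed by a CIImage, so render the result into a CGImage first, and crop to the original extent because CIGaussianBlur enlarges it:

@IBAction func blur(_ sender: Any) {
    guard let cgInput = bg.image?.cgImage else { return }
    let inputImage = CIImage(cgImage: cgInput)
    let filter = CIFilter(name: "CIGaussianBlur")
    filter?.setValue(inputImage, forKey: kCIInputImageKey)
    filter?.setValue(10, forKey: kCIInputRadiusKey)
    // Rendering through a CIContext (cropped to the input extent) is the key step.
    guard let blurred = filter?.outputImage,
          let cgOutput = CIContext().createCGImage(blurred, from: inputImage.extent) else { return }
    bg.image = UIImage(cgImage: cgOutput)
}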
I am generating a QR code image from a string using the following code:
func createQRFromString(str: String) -> CIImage? {
    let stringData = str.dataUsingEncoding(NSUTF8StringEncoding)
    let filter = CIFilter(name: "CIQRCodeGenerator")
    filter?.setValue(stringData, forKey: "inputMessage")
    filter?.setValue("H", forKey: "inputCorrectionLevel")
    return filter?.outputImage
}
Then I add it to a UIImageView like this:
if let img = createQRFromString(strQRData) {
    let somImage = UIImage(CIImage: img, scale: 1.0, orientation: UIImageOrientation.Down)
    imgviewQRcode.image = somImage
}
Now I need to save it to a JPEG or PNG file. However, when I do, my app crashes:
@IBAction func btnSave(sender: AnyObject) {
    // Define the specific path, image name
    let documentsDirectoryURL = try! NSFileManager().URLForDirectory(.DocumentDirectory, inDomain: .UserDomainMask, appropriateForURL: nil, create: true)
    // …
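The crash is consistent with the UIImage being backed only by a CIImage, in which case the PNG/JPEG representation calls return nil and any force-unwrap dies. A minimal sketch of a safer path, written in current Swift syntax with a hypothetical helper name; the QR output is scaled up first, since CIQRCodeGenerator emits roughly one point per module:

func pngData(from qrImage: CIImage, scale: CGFloat) -> Data? {
    // Enlarge the tiny generator output, then render it to a real bitmap.
    let scaled = qrImage.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
    guard let cgImage = CIContext().createCGImage(scaled, from: scaled.extent) else { return nil }
    return UIImage(cgImage: cgImage).pngData()
}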
I'm editing photos through PhotoKit, but I've found that the edit does not preserve the original photo's metadata. This happens even in the SamplePhotosApp that Apple provides when it applies the Sepia or Chrome filter. My question is: how do you ensure that all of the original photo's metadata is preserved?

I've already figured out how to get the original image's metadata, and I'm able to save that metadata into the final CIImage I create, but it still gets stripped when the edit is committed. There must be a problem either in how I convert the CIImage to a CGImage to a UIImage to NSData, or in how I write it to disk.
asset.requestContentEditingInputWithOptions(options) { (input: PHContentEditingInput!, _) -> Void in
    // Get full image
    let url = input.fullSizeImageURL
    let orientation = self.input.fullSizeImageOrientation
    var inputImage = CIImage(contentsOfURL: url)
    inputImage = inputImage.imageByApplyingOrientation(orientation)

    // do some processing on original photo here and create a CGImage...

    // save the original photo's metadata to a new CIImage:
    let originalMetadata = inputImage.properties()
    let newImage = CIImage(CGImage: editedCGImage, options: [kCIImageProperties: originalMetadata])
    println(newImage.properties()) // correctly prints all metadata!

    // commit changes to disk …
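If the NSData is produced via a UIImage JPEG/PNG representation, that step alone will drop the properties. A sketch of writing through ImageIO instead, which lets the metadata dictionary be attached explicitly (assumes the editedCGImage and originalMetadata from the snippet above; current Swift syntax):

import ImageIO
import MobileCoreServices

func writeJPEG(_ image: CGImage, metadata: [String: Any], to url: URL) -> Bool {
    guard let destination = CGImageDestinationCreateWithURL(url as CFURL, kUTTypeJPEG, 1, nil) else {
        return false
    }
    // Attaching the properties here is what preserves the EXIF/TIFF/GPS data.
    CGImageDestinationAddImage(destination, image, metadata as CFDictionary)
    return CGImageDestinationFinalize(destination)
}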
#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>
@interface ViewController ()
@property (strong, nonatomic) CIContext *context;
@end

@implementation ViewController

AVCaptureSession *session;
AVCaptureStillImageOutput *stillImageOutput;

- (CIContext *)context
{
    if (!_context) {
        _context = [CIContext contextWithOptions:nil];
    }
    return _context;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
}

- (void)viewWillAppear:(BOOL)animated {
    session = [[AVCaptureSession alloc] init];
    [session setSessionPreset:AVCaptureSessionPresetPhoto];

    AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error;
    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
    if ([session canAddInput:deviceInput]) {
        [session addInput:deviceInput];
    } …
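The usual wiring is to add an AVCaptureVideoDataOutput, receive each frame in its sample buffer delegate, and run the frame through Core Image there. A minimal sketch of that per-frame step (in Swift; the filter and radius are illustrative, and a shared CIContext would then render the result into a preview view):

import AVFoundation
import CoreImage

// Called from the AVCaptureVideoDataOutput delegate for each camera frame.
func filteredFrame(from sampleBuffer: CMSampleBuffer) -> CIImage? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    // Wrap the pixel buffer and apply the effect; render with a reused CIContext.
    return CIImage(cvPixelBuffer: pixelBuffer)
        .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 6])
}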
Core Image lets us specify color spaces for a CIContext, like this:

let context = CIContext(options: [kCIContextOutputColorSpace: NSNull(),
                                  kCIContextWorkingColorSpace: NSNull()])
Or for a CIImage, like this:
let image = CIImage(cvImageBuffer: inputPixelBuffer,
                    options: [kCIImageColorSpace: NSNull()])
How do these three (kCIContextOutputColorSpace, kCIContextWorkingColorSpace, and kCIImageColorSpace) relate to each other, and what are the advantages and disadvantages of setting each of them?
I'm trying to recreate an imitation of the Portrait mode in Apple's native Camera app.

The problem is that applying the blur effect to a CIImage using the depth data is too slow for the live preview I want to show the user.

My code for this task is:
func blur(image: CIImage, mask: CIImage, orientation: UIImageOrientation = .up, blurRadius: CGFloat) -> UIImage? {
    let start = Date()
    let invertedMask = mask.applyingFilter("CIColorInvert")
    let output = image.applyingFilter("CIMaskedVariableBlur",
                                      withInputParameters: ["inputMask": invertedMask,
                                                            "inputRadius": blurRadius])
    guard let cgImage = context.createCGImage(output, from: image.extent) else {
        return nil
    }
    let end = Date()
    let elapsed = end.timeIntervalSince1970 - start.timeIntervalSince1970
    print("took \(elapsed) seconds to apply blur")
    return UIImage(cgImage: cgImage, scale: 1.0, orientation: orientation)
}
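As an aside, before hand-writing a Metal kernel, a common first step is to make sure the CIContext used here is created once, Metal-backed, and not color-managed for preview purposes. A sketch, with assumed option choices (cacheIntermediates requires iOS 12+):

import CoreImage
import Metal

// Create once and reuse; building a CIContext per frame is a common perf trap.
let device = MTLCreateSystemDefaultDevice()!   // force-unwrap for brevity only
let previewContext = CIContext(mtlDevice: device,
                               options: [.workingColorSpace: NSNull(),
                                         .cacheIntermediates: false])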
I want to apply the blur on the GPU for better performance. For this task, I found an implementation provided by Apple here.

So in Apple's implementation, we have this code:
/** Applies a Gaussian blur …