I'm using CIDetector repeatedly, as follows:
-(NSArray *)detect:(UIImage *)inimage
{
UIImage *inputimage = inimage;
UIImageOrientation exifOrientation = inimage.imageOrientation;
NSNumber *orientation = [NSNumber numberWithInt:exifOrientation];
NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];
CIImage* ciimage = [CIImage imageWithCGImage:inputimage.CGImage options:imageOptions];
NSDictionary *detectorOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];
NSArray* features = [self.detector featuresInImage:ciimage options:detectorOptions];
if (features.count == 0)
{
PXLog(@"no face found");
}
ciimage = nil;
NSMutableArray *returnArray = [NSMutableArray new];
for(CIFaceFeature *feature in features)
{
CGRect rect = feature.bounds;
CGRect r = CGRectMake(rect.origin.x, inputimage.size.height - rect.origin.y - rect.size.height, rect.size.width, rect.size.height);
FaceFeatures *ff = …

After updating to Xcode 7, I get this (warning?) message when presenting an image in an action:
CreateWrappedSurface() failed for a dataprovider-backed CGImageRef.
There was no such message under Xcode 6.4.
I tracked down which part of the code triggers the message:
if (!self.originalImage) // @property (nonatomic, strong) UIImage *originalImage;
return;
CGImageRef originalCGImage = self.originalImage.CGImage;
NSAssert(originalCGImage, @"Cannot get CGImage from original image");
CIImage *inputCoreImage = [CIImage imageWithCGImage:originalCGImage]; // this line produces the console message
I replaced my CIImage creation with getting it directly from the UIImage:
CIImage *originalCIImage = self.originalImage.CIImage;
NSAssert(originalCIImage, @"Cannot build CIImage from original image");
In this case I don't get any console message, but the assertion fires: originalCIImage is nil.
The UIImage class reference says:
@property(nonatomic, readonly) CIImage *CIImage
If the UIImage object was initialized using a CGImageRef, the value of the property is nil.
So I use the original code as a fallback:
CIImage *originalCIImage = self.originalImage.CIImage;
if (!originalCIImage) {
CGImageRef originalCGImageRef = self.originalImage.CGImage;
NSAssert(originalCGImageRef, @"Unable to get CGimageRef of originalImage");
originalCIImage = [CIImage imageWithCGImage:originalCGImageRef];
}
NSAssert(originalCIImage, @"Cannot …

I'm very new to SceneKit, so I'm just asking for help here:
I have an SCNSphere with a camera at its center.
I created an SCNMaterial, doubleSided, and assigned it to the sphere.
Since the camera is at the center, the image appears flipped when viewed from the inside, which makes any text in it completely unreadable.
So how can I flip the material or the image (although later it will be frames from a video)? Any other suggestions are welcome.
By the way, I failed with this approach: applying normalImage as the material works (although the image is flipped when seen from inside the sphere), while assigning flippedImage results in no material at all (a white screen):
let normalImage = UIImage(named: "text2.png")
let ciimage = CIImage(CGImage: normalImage!.CGImage!)
let flippeCIImage = ciimage.imageByApplyingTransform(CGAffineTransformMakeScale(-1, 1))
let flippedImage = UIImage(CIImage: flippeCIImage, scale: 1.0, orientation: .Left)
sceneMaterial.diffuse.contents = flippedImage
sceneMaterial.specular.contents = UIColor.whiteColor()
sceneMaterial.doubleSided = true
sceneMaterial.shininess = 0.5
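For what it's worth, an alternative that avoids creating a flipped image at all (a sketch in the same Swift 2-era syntax, assuming the `sceneMaterial` and `normalImage` from the snippet above): mirror the material's texture coordinates with `contentsTransform` instead. Note also that a `UIImage` created with `UIImage(CIImage:)` has no underlying `CGImage`, which is the likely reason `flippedImage` rendered as white.

```swift
sceneMaterial.diffuse.contents = normalImage
// Mirror the texture horizontally: scale x by -1, then translate by one
// texture-coordinate unit so the mirrored image stays in the 0...1 range.
sceneMaterial.diffuse.contentsTransform = SCNMatrix4Translate(
    SCNMatrix4MakeScale(-1, 1, 1), 1, 0, 0)
sceneMaterial.diffuse.wrapS = .Repeat
```

With `wrapS` set to repeat, the translation is mostly belt-and-braces; the scale alone does the mirroring.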
I'm building an app that needs to apply filters to an image in real time. Converting the UIImage to a CIImage and applying the filters are both very fast operations, but converting the resulting CIImage back to a CGImageRef and displaying the image takes a long time (1/5 of a second, which is actually a lot if the editing needs to be real-time).
The image is around 2500 × 2500 pixels, which is most likely part of the problem.
Currently, I'm using:
let image: CIImage //CIImage with applied filters
let eagl = EAGLContext(API: EAGLRenderingAPI.OpenGLES2)
let context = CIContext(EAGLContext: eagl, options: [kCIContextWorkingColorSpace : NSNull()])
//this line takes too long for real-time processing
let cg: CGImage = context.createCGImage(image, fromRect: image.extent)
I've considered using EAGLContext.drawImage():
context.drawImage(image, inRect: destinationRect, fromRect: image.extent)
However, I can't find any reliable documentation on how to do this, or on whether it would be faster at all.
Is there a faster way to display a CIImage on screen (in a UIImageView, or directly on a CALayer)? I'd like to avoid reducing the image quality too much, since that might be noticeable to the user.
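The `drawImage(_:inRect:fromRect:)` route could be sketched like this (a hypothetical sketch only, assuming the `eagl`, `context`, and `image` values from the snippet above and a `view` to host the GLKView): rendering the CIImage straight into a GLKView that shares the EAGLContext skips `createCGImage` entirely.

```swift
import GLKit

// Render the CIImage directly into a GLKView backed by the same EAGLContext
// as the CIContext, avoiding the expensive CGImage round-trip.
let glView = GLKView(frame: view.bounds, context: eagl)
view.addSubview(glView)

glView.bindDrawable()
// drawableWidth/drawableHeight are in pixels, not points.
let destinationRect = CGRect(x: 0, y: 0,
                             width: glView.drawableWidth,
                             height: glView.drawableHeight)
context.drawImage(image, inRect: destinationRect, fromRect: image.extent)
glView.display()
```

This keeps the filtered image on the GPU the whole way, which is usually where the 1/5 s of `createCGImage` time goes.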
I'm currently building a photo editing app.
When the user selects a photo, it is automatically converted to black and white using the following code:
func blackWhiteImage(image: UIImage) -> Data {
print("Starting black & white")
let orgImg = CIImage(image: image)
let bnwImg = orgImg?.applyingFilter("CIColorControls", withInputParameters: [kCIInputSaturationKey:0.0])
let outputImage = UIImage(ciImage: bnwImg!)
print("Black & white complete")
return UIImagePNGRepresentation(outputImage)!
}
My problem with this code is that I keep getting this error:
fatal error: unexpectedly found nil while unwrapping an Optional value
My code is slightly different, but it still breaks when it reaches the UIImagePNG/JPEGRepresentation(xx) part.
Is there any way to get PNG or JPEG data out of a CIImage, for use in an image view / UIImage?
None of the other approaches I've found go into detail about what code should actually be used.
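One way out (a sketch, not necessarily the only fix): render the CIImage into a CGImage through a CIContext before wrapping it in a UIImage. A UIImage created with `UIImage(ciImage:)` has no underlying bitmap data, which is why `UIImagePNGRepresentation` returns nil and the force-unwrap traps.

```swift
func pngData(from ciImage: CIImage) -> Data? {
    // In production code, create the CIContext once and reuse it;
    // it is expensive to construct.
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    // A CGImage-backed UIImage has real bitmap data to encode.
    return UIImagePNGRepresentation(UIImage(cgImage: cgImage))
}
```

In the `blackWhiteImage(image:)` function above, `bnwImg` would be passed through this helper instead of going straight into `UIImage(ciImage:)`.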
I'm facing a few issues related to cropping with the iOS 9 SDK.
I have the following code to resize an image (converting 4:3 to 16:9 by cropping in the middle). This used to work fine up to the iOS 8 SDK. On iOS 9, the bottom area is left blank.
- (CMSampleBufferRef)resizeImage:(CMSampleBufferRef)sampleBuffer
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
int target_width = CVPixelBufferGetWidth(imageBuffer);
int target_height = CVPixelBufferGetHeight(imageBuffer);
int height = CVPixelBufferGetHeight(imageBuffer);
int width = CVPixelBufferGetWidth(imageBuffer);
int x=0, y=0;
// Convert 4:3 to 16:9
if (((target_width*3)/target_height) == 4)
{
target_height = ((target_width*9)/16);
target_height = ((target_height + 15) / 16) * 16;
y = (height - target_height)/2;
}
else
if ((target_width == 352) && (target_height == 288))
{
target_height = ((target_width*9)/16);
target_height = ((target_height + 15) / …

I want to convert a CGImage into a CIImage, but it isn't working.
This line of code:
let personciImage = CIImage(CGImage: imageView.image!.CGImage!)
throws the following error:
Ambiguous use of 'init(CGImage)'
I'm really confused about what this error means.
I need to do this conversion because CIDetector.featuresInImage() from the built-in Core Image framework requires a CIImage.
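One workaround that sidesteps the ambiguity (a sketch in the same Swift 2-era syntax as the snippet above; `imageView` is assumed from the question): build the CIImage directly from the UIImage, which is what the detector needs anyway.

```swift
// CIImage(image:) is failable, so unwrap rather than force-casting
// through CGImage.
guard let personciImage = CIImage(image: imageView.image!) else {
    fatalError("could not create CIImage from the image view's image")
}
let detector = CIDetector(ofType: CIDetectorTypeFace,
                          context: nil,
                          options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
let faces = detector.featuresInImage(personciImage)
```

This also avoids the case where `UIImage.CGImage` is nil (e.g. for a CIImage-backed UIImage).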
This code sample used to run in a macOS playground:
import Cocoa
import XCPlayground
func getResImg(name: String, ext: String) -> CIImage {
guard let fileURL = Bundle.main.url(forResource: name, withExtension: ext) else {
fatalError("can't find image")
}
guard let img = CIImage(contentsOf: fileURL) else {
fatalError("can't load image")
}
return img
}
var img = getResImg(name: "noise", ext: "jpg")
After upgrading to Swift 4.1 it doesn't. Error: Can't get bitmap representation of this NSImage.
How can it be made to run in Swift 4.1?
I'm building a video app in Swift.
In my app, I need to crop and horizontally flip a CVPixelBuffer, returning a result that is also a CVPixelBuffer.
I tried several things.
First, I used CVPixelBufferCreateWithBytes:
func resizePixelBuffer(_ pixelBuffer: CVPixelBuffer, destSize: CGSize)
-> CVPixelBuffer?
{
CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)
let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
let pixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer)
let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
var destPixelBuffer: CVPixelBuffer?
let topMargin = (height - Int(destSize.height)) / 2
let leftMargin = (width - Int(destSize.width)) / 2 * 4 // bytesPerPixel
let offset = topMargin * bytesPerRow + leftMargin
CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
Int(destSize.width),
Int(destSize.height),
pixelFormat,
baseAddress!.advanced(by: offset),
bytesPerRow,
nil, nil, …

I'm trying to create an image by averaging several other images. To achieve this, I first darken each image by a factor equal to the number of images I'm averaging:
func darkenImage(by multiplier: CGFloat) -> CIImage? {
let divImage = CIImage(color: CIColor(red: multiplier, green: multiplier, blue: multiplier))
let divImageResized = divImage.cropped(to: self.extent) //Set multiplier image to same size as image to be darkened
if let divFilter = CIFilter(name: "CIMultiplyBlendMode", parameters: ["inputImage":self, "inputBackgroundImage":divImageResized]) {
return divFilter.outputImage
}
print("Failed to darken image")
return nil
}
After this, I add the darkened images together (adding images 1 and 2, then adding that result to image 3, and so on):
func blend(with image: CIImage, blendMode: BlendMode) -> CIImage? {
if let filter = CIFilter(name: blendMode.format) { //blendMode.format is CIAdditionCompositing …
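Putting the two helpers together, the overall averaging loop might look like this (a sketch under assumptions: both helpers live in a `CIImage` extension as shown above, and `BlendMode` has an `.add` case whose `format` is `"CIAdditionCompositing"`):

```swift
// Average n images: scale each by 1/n, then sum the scaled images.
func average(_ images: [CIImage]) -> CIImage? {
    guard !images.isEmpty else { return nil }
    let weight = 1.0 / CGFloat(images.count)
    var result: CIImage? = nil
    for image in images {
        guard let darkened = image.darkenImage(by: weight) else { return nil }
        if let current = result {
            // Add the next darkened image onto the running sum.
            result = current.blend(with: darkened, blendMode: .add)
        } else {
            result = darkened
        }
    }
    return result
}
```

One caveat with this approach: in an 8-bit working format each `CIMultiplyBlendMode` pass quantizes to 1/255 steps, so the rounding error grows with the number of images; a half-float working format reduces that.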