mah*_*udz 36 avfoundation uiimage ios
I'm having some trouble getting a UIImage from a CVPixelBuffer. This is what I'm trying:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate);
CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments];
if (attachments)
    CFRelease(attachments);

size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
if (width && height) { // test to make sure we have valid dimensions
    UIImage *image = [[UIImage alloc] initWithCIImage:ciImage];

    UIImageView *lv = [[UIImageView alloc] initWithFrame:self.view.frame];
    lv.contentMode = UIViewContentModeScaleAspectFill;
    self.lockedView = lv;
    [lv release];

    self.lockedView.image = image;
    [image release];
}
[ciImage release];
Both height and width are correctly set to the camera's resolution. image gets created, but it appears to be black (or perhaps transparent?). I can't quite work out where the problem is. Any ideas would be appreciated.
Tom*_*mmy 51
First, the obvious thing not directly related to your question: AVCaptureVideoPreviewLayer is the cheapest way to pipe video from either camera into a standalone view if that's where the data is coming from and you have no immediate plans to modify it. You don't have to do any pushing yourself; the preview layer is connected directly to the AVCaptureSession and updates itself.
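For illustration only, here is a minimal Swift sketch of that preview-layer route. It assumes session is an already-configured AVCaptureSession with a camera input; the helper function name is made up for this example.

import AVFoundation
import UIKit

// Minimal sketch, assuming `session` is an already-configured
// AVCaptureSession with a camera input attached.
func attachPreview(to view: UIView, session: AVCaptureSession) {
    let previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.frame = view.bounds
    previewLayer.videoGravity = .resizeAspectFill
    view.layer.addSublayer(previewLayer)
    // The layer is driven by the session itself; no per-frame pushing is needed.
}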
I have to admit to a lack of confidence about the core problem. There is a semantic difference between a CIImage and the other two types of image: a CIImage is a recipe for an image and is not necessarily backed by pixels. It can be something like "take the pixels from here, transform like this, apply this filter, transform like this, merge with this other image, apply this filter". The system doesn't know what a CIImage looks like until you choose to render it. It also doesn't inherently know the appropriate bounds in which to rasterize it.
UIImage purports merely to wrap a CIImage. It doesn't convert it to pixels. Presumably UIImageView should handle that, but if so, I can't seem to find where you would supply the appropriate output rectangle.
I've had success just dodging around the issue with:
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CGImageRef videoImage = [temporaryContext
                           createCGImage:ciImage
                           fromRect:CGRectMake(0, 0,
                                               CVPixelBufferGetWidth(pixelBuffer),
                                               CVPixelBufferGetHeight(pixelBuffer))];

UIImage *uiImage = [UIImage imageWithCGImage:videoImage];
CGImageRelease(videoImage);
That gives an obvious opportunity to specify an output rectangle. I'm sure there's a route through this without using a CGImage as an intermediary, so please don't assume this solution is best practice.
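One possible CGImage-free route (a sketch under my own assumptions, not something this answer vouches for) is to draw the CIImage-backed UIImage through a UIGraphicsImageRenderer, which forces Core Image to rasterize into an explicit rect:

import UIKit
import CoreImage
import CoreVideo

// Sketch only: the function name and the use of UIGraphicsImageRenderer
// are assumptions, not part of the original answer.
func renderedImage(from pixelBuffer: CVPixelBuffer) -> UIImage {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let size = ciImage.extent.size
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        // Drawing a CIImage-backed UIImage rasterizes it into the
        // renderer's bitmap at the rect specified here.
        UIImage(ciImage: ciImage).draw(in: CGRect(origin: .zero, size: size))
    }
}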
And*_* M. 21
Try this in Swift.
import UIKit
import VideoToolbox

extension UIImage {
    public convenience init?(pixelBuffer: CVPixelBuffer) {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)

        if let cgImage = cgImage {
            self.init(cgImage: cgImage)
        } else {
            return nil
        }
    }
}
Note: this only works for RGB pixel buffers, not grayscale.
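Usage of that initializer might look like this (pixelBuffer here stands for whatever buffer your capture callback hands you):

// Assuming the UIImage extension above is in scope.
if let image = UIImage(pixelBuffer: pixelBuffer) {
    imageView.image = image
}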
Jon*_*hon 13
Another way to get a UIImage. It runs about 10x faster, at least in my case:
int w = CVPixelBufferGetWidth(pixelBuffer);
int h = CVPixelBufferGetHeight(pixelBuffer);
int r = CVPixelBufferGetBytesPerRow(pixelBuffer);
int bytesPerPixel = r/w;

unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);

UIGraphicsBeginImageContext(CGSizeMake(w, h));
CGContextRef c = UIGraphicsGetCurrentContext();
unsigned char *data = CGBitmapContextGetData(c);
if (data != NULL) {
    int maxY = h;
    for (int y = 0; y < maxY; y++) {
        for (int x = 0; x < w; x++) {
            int offset = bytesPerPixel * ((w * y) + x);
            data[offset]   = buffer[offset];   // R
            data[offset+1] = buffer[offset+1]; // G
            data[offset+2] = buffer[offset+2]; // B
            data[offset+3] = buffer[offset+3]; // A
        }
    }
}
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
bit*_*yte 11
A modern solution is
let image = UIImage(ciImage: CIImage(cvPixelBuffer: YOUR_BUFFER))
小智 8
Unless your image data is in some different format that needs swizzling or conversion, I'd recommend not incrementing through anything pixel by pixel... just smash the data into your context's memory area with memcpy, as in:
//not here... unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);
UIGraphicsBeginImageContext(CGSizeMake(w, h));
CGContextRef c = UIGraphicsGetCurrentContext();
void *ctxData = CGBitmapContextGetData(c);
// MUST READ-WRITE LOCK THE PIXEL BUFFER!!!!
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *pxData = CVPixelBufferGetBaseAddress(pixelBuffer);
memcpy(ctxData, pxData, 4 * w * h);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
... and so on...
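For comparison, here is a hedged Swift sketch of the same memcpy idea. It assumes a 4-byte-per-pixel (BGRA-style) buffer whose bytes-per-row exactly matches the destination context; real buffers often have row padding, in which case a row-by-row copy would be needed. The function name is made up for this sketch.

import UIKit
import CoreVideo

// Sketch only; assumptions are mine, not the answer's.
func image(from pixelBuffer: CVPixelBuffer) -> UIImage? {
    let w = CVPixelBufferGetWidth(pixelBuffer)
    let h = CVPixelBufferGetHeight(pixelBuffer)

    UIGraphicsBeginImageContext(CGSize(width: w, height: h))
    defer { UIGraphicsEndImageContext() }

    guard let ctx = UIGraphicsGetCurrentContext(),
          let ctxData = ctx.data else { return nil }

    // Lock the buffer before touching its base address, as the answer stresses.
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }
    guard let pxData = CVPixelBufferGetBaseAddress(pixelBuffer) else { return nil }

    memcpy(ctxData, pxData, 4 * w * h)
    return UIGraphicsGetImageFromCurrentImageContext()
}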