Asked by sot*_*ips (score: 23) · Tags: iphone, opengl-es, image-processing, ios, opengl-es-2.0
At a high level, I created an app that lets a user point his or her iPhone camera around and see the video frames processed with visual effects. Additionally, the user can tap a button to take a freeze-frame of the current preview as a high-resolution photo saved to their iPhone library.

To do this, the app follows this procedure:

1) Create an AVCaptureSession
captureSession = [[AVCaptureSession alloc] init];
[captureSession setSessionPreset:AVCaptureSessionPreset640x480];
2) Connect an AVCaptureDeviceInput using the back-facing camera.
videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:backFacingCamera error:&error] autorelease];
[captureSession addInput:videoInput];
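(backFacingCamera isn't defined in the snippet above; as a sketch, one typical way to find it with the iOS 4-era API is to scan the available video devices:)

AVCaptureDevice *backFacingCamera = nil;
for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
    if ([device position] == AVCaptureDevicePositionBack) {
        backFacingCamera = device; // use the rear camera
    }
}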
3) Connect an AVCaptureStillImageOutput to the session, to be able to capture still frames at Photo resolution.
stillOutput = [[AVCaptureStillImageOutput alloc] init];
[stillOutput setOutputSettings:[NSDictionary
    dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
    forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[captureSession addOutput:stillOutput];
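(The still-capture call itself isn't shown in the question. For context, a rough sketch of how a still frame is typically requested from this output on iOS 4, with the video connection looked up by hand:)

AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in [stillOutput connections]) {
    for (AVCaptureInputPort *port in [connection inputPorts]) {
        if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
            videoConnection = connection;
        }
    }
}
[stillOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                         completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
    if (imageSampleBuffer != NULL) {
        CVImageBufferRef stillPixelBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
        // process/save the full-resolution frame here
    }
}];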
4) Connect an AVCaptureVideoDataOutput to the session, to be able to capture individual video frames (CVImageBuffers) at a lower resolution
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[captureSession addOutput:videoOutput];
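(One detail worth noting: since the delegate here runs on the main queue, it can help to drop frames that arrive while a previous frame is still being processed, instead of queueing them up. This is the default behavior, but it can be made explicit:)

[videoOutput setAlwaysDiscardsLateVideoFrames:YES]; // drop late frames rather than queueing them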
5) As video frames are captured, the delegate's method is called with each new frame as a CVImageBuffer:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    [self.delegate processNewCameraFrame:pixelBuffer];
}
6) The delegate then processes/draws them:
- (void)processNewCameraFrame:(CVImageBufferRef)cameraFrame {
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
    int bufferWidth = CVPixelBufferGetWidth(cameraFrame);

    glClear(GL_COLOR_BUFFER_BIT);

    // Upload the camera frame into a fresh texture.
    // Note: glTexImage2D assumes rows are tightly packed (bufferWidth * 4 bytes each).
    glGenTextures(1, &videoFrameTexture_);
    glBindTexture(GL_TEXTURE_2D, videoFrameTexture_);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));

    // Draw the textured quad.
    glBindBuffer(GL_ARRAY_BUFFER, [self vertexBuffer]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, [self indexBuffer]);
    glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

    [[self context] presentRenderbuffer:GL_RENDERBUFFER];

    glDeleteTextures(1, &videoFrameTexture_);
    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}
This all works and leads to the correct result: I can see the 640x480 video preview rendered through OpenGL. It looks like this:

[image: the correctly rendered 640x480 preview]
However, if I capture a still image from this session, its resolution will also be 640x480. I want it to be high resolution, so in step one I change the preset line to:
[captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
This correctly captures still images at the highest resolution for the iPhone 4 (2592x1936).
However, the video preview (as received by the delegate in steps 5 and 6) now looks like this:

[image: the distorted preview produced under the Photo preset]
I've confirmed that all the other presets (High, Medium, Low, 640x480, and 1280x720) preview as intended. The Photo preset, however, seems to send the buffer data in a different format.
I've also confirmed that the data in the buffer sent under the Photo preset is actually valid image data, by taking the buffer and creating a UIImage out of it instead of sending it to OpenGL:
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(cameraFrame), bufferWidth, bufferHeight, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *anImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
This shows an undistorted video frame.

I've done a bunch of searching and can't seem to fix it. My hunch is that it's a data-format issue: the buffer is being filled correctly, but in a format that this line doesn't understand:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));
My hunch was that changing the external format from GL_BGRA to something else would help, but it doesn't... and by all indications the buffer really is in GL_BGRA.
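(One quick check along those lines, as a debugging sketch: log the buffer's row stride against the tightly packed stride that the glTexImage2D call assumes; any difference is per-row padding that this call has no way to express:)

size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
size_t width = CVPixelBufferGetWidth(cameraFrame);
NSLog(@"bytesPerRow = %zu, width * 4 = %zu, padding = %zu",
      bytesPerRow, width * 4, bytesPerRow - width * 4);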
Does anyone know what's going on here? Or do you have any tips on how to debug why this is happening? (What's super weird is that this happens on an iPhone 4 but not on an iPhone 3GS... both running iOS 4.3.)
Answer by Dex*_*Dex (score: 13)
This one was a doozy.

As Lio Ben-Kereth pointed out, the padding is 48, as you can see from the debugger:
(gdb) po pixelBuffer
<CVPixelBuffer 0x2934d0 width=852 height=640 bytesPerRow=3456 pixelFormat=BGRA>
# => 3456 - 852 * 4 = 48
OpenGL can compensate for this, but OpenGL ES cannot (more info here: openGL SubTexturing).
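(For comparison, a sketch of the desktop OpenGL / OpenGL ES 3.0+ approach, which postdates this answer: the row stride can be declared to the pixel-unpack state instead of looping, reusing the same bytesPerRow, frameWidth, and frameHeight values computed in the code below:)

glPixelStorei(GL_UNPACK_ROW_LENGTH, (GLint)(bytesPerRow / 4)); // stride in pixels, not bytes
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, frameWidth, frameHeight, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(pixelBuffer));
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0); // restore the default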
So this is how I'm doing it in OpenGL ES:
(CVImageBufferRef)pixelBuffer   // pixelBuffer containing the raw image data is passed in

/* ... */

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture_);

int frameWidth = CVPixelBufferGetWidth(pixelBuffer);
int frameHeight = CVPixelBufferGetHeight(pixelBuffer);

size_t bytesPerRow, extraBytes;
bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
extraBytes = bytesPerRow - frameWidth * 4;   // per-row padding, in bytes

GLubyte *pixelBufferAddr = CVPixelBufferGetBaseAddress(pixelBuffer);

if ( [[captureSession sessionPreset] isEqualToString:@"AVCaptureSessionPresetPhoto"] )
{
    // Allocate the texture without data, then upload one tightly packed row
    // at a time, skipping the padding bytes at the end of each source row.
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, frameWidth, frameHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL );

    for( int h = 0; h < frameHeight; h++ )
    {
        GLubyte *row = pixelBufferAddr + h * (frameWidth * 4 + extraBytes); // == h * bytesPerRow
        glTexSubImage2D( GL_TEXTURE_2D, 0, 0, h, frameWidth, 1, GL_BGRA, GL_UNSIGNED_BYTE, row );
    }
}
else
{
    // Rows are tightly packed for the other presets; upload in one call.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, frameWidth, frameHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixelBufferAddr);
}
Before this I was using AVCaptureSessionPresetMedium and getting 30 fps. With AVCaptureSessionPresetPhoto I get 16 fps on an iPhone 4; the looping for the sub-texturing does not seem to affect the frame rate.

I am using an iPhone 4 on iOS 5.
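(A possible alternative on iOS 5, not something tested in this answer: the CVOpenGLESTextureCache API introduced there maps the pixel buffer into a texture with no glTexImage2D copy at all, and it handles the row stride itself. A sketch, assuming a non-ARC setup and an EAGLContext named eaglContext:)

#import <CoreVideo/CoreVideo.h>

// One-time setup: a texture cache bound to the EAGL context.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

// Per frame: wrap the CVPixelBuffer directly as a GL texture.
CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
    pixelBuffer, NULL, GL_TEXTURE_2D, GL_RGBA,
    (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
    (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
    GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
// ... draw ...
CFRelease(texture);
CVOpenGLESTextureCacheFlush(textureCache, 0);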
Answer by 小智 (score: 5)
Just draw it this way, passing the full row stride (bytesPerRow / 4) as the texture width:
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
int frameHeight = CVPixelBufferGetHeight(pixelBuffer);
GLubyte *pixelBufferAddr = CVPixelBufferGetBaseAddress(pixelBuffer);

// Use the stride (bytesPerRow / 4, in pixels) as the texture width,
// so the padded rows line up without any per-row copying.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)bytesPerRow / 4, (GLsizei)frameHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixelBufferAddr);
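(A caveat with this trick, as an added note: the texture is now bytesPerRow / 4 pixels wide, and only the leftmost CVPixelBufferGetWidth(pixelBuffer) columns hold real image data; the padding shows up as a strip on the right edge unless the texture coordinates are cropped. A sketch of the adjustment, with one possible vertex ordering for a two-triangle strip:)

int frameWidth = CVPixelBufferGetWidth(pixelBuffer);
GLfloat maxS = (GLfloat)frameWidth / (GLfloat)(bytesPerRow / 4); // fraction of each row that is real pixels
GLfloat textureCoordinates[] = {
    0.0f, 1.0f,
    maxS, 1.0f,
    0.0f, 0.0f,
    maxS, 0.0f,
};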