Question (Mat*_*gan), tagged: c, iphone, 3d, opengl-es
Ultimately, I'm looking to create a shader that converts video to black and white (and then applies some other effects that I'm not sure I should disclose), mostly because doing this on the CPU gets me a lovely 1 frame per second.
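For reference, the black-and-white part I eventually want should be almost free on the GPU; something like this fragment shader (a rough, untested sketch using the standard Rec. 601 luma weights):

varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;

void main()
{
    lowp vec4 color = texture2D(videoFrame, textureCoordinate);
    // weighted sum of the RGB channels (Rec. 601 luma weights)
    lowp float luminance = dot(color.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(vec3(luminance), color.a);
}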
Anyway, for now I just want to get the video frames drawing to the screen. I can draw triangles, so I know my OpenGL view is working, and I'm getting NSLogs from the
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
method. That method is where I'm trying to do all of the drawing. Unfortunately, I'm doing something wrong in there, because the camera frames never draw.
Here's my simple vertex shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;

varying vec2 textureCoordinate;

void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
}
(I know it compiles and works, again, because I've been able to render primitives with it.)
And here's my simple fragment shader:
varying highp vec2 textureCoordinate;

uniform sampler2D videoFrame;

void main()
{
    gl_FragColor = texture2D(videoFrame, textureCoordinate);
}
...and here's where I try to put it all together, haha:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    //NSLog(@"Frame...");

    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
    int bufferWidth = CVPixelBufferGetWidth(cameraFrame);

    //these er, have to be set
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // This is necessary for non-power-of-two textures
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    //set the image for the currently bound texture
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));

    static const GLfloat squareVertices[] = {
        -1.0f, -1.0f,
         1.0f, -1.0f,
        -1.0f,  1.0f,
         1.0f,  1.0f,
    };
    static const GLfloat textureVertices[] = {
        1.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        0.0f, 0.0f,
    };

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, videoFrameTexture);

    // Update uniform values
    glUniform1i(videoFrameUniform, 0);

    glVertexAttribPointer(0, 2, GL_FLOAT, 0, 0, squareVertices);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(1, 2, GL_FLOAT, 0, 0, textureVertices);
    glEnableVertexAttribArray(1);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer);
}
No luck.
Any help would be hugely appreciated; I'm out of ideas and at a loss. Thanks in advance!
EDIT: Here's the code I use to set up the view, load OpenGL, and start the capture session.
- (id)initWithFrame:(CGRect)frame {
    NSLog(@"Yo.");
    self = [super initWithFrame:frame];
    if (self) {
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)[super layer];
        [eaglLayer setOpaque:YES];
        [eaglLayer setFrame:[self bounds]];
        [eaglLayer setContentsScale:2.0];

        glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        if (!glContext || ![EAGLContext setCurrentContext:glContext]) {
            [self release];
            return nil;
        }

        //enable 2D textures
        glEnable(GL_TEXTURE_2D);

        //generate the frame and render buffers at the pointer locations of the frameBuffer and renderBuffer variables
        glGenFramebuffers(1, &frameBuffer);
        glGenRenderbuffers(1, &renderBuffer);
        //bind the frame and render buffers; they can now be modified or consumed by later OpenGL calls
        glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer);
        //generate storage for the renderbuffer (wouldn't be used for offscreen rendering; you'd call glRenderbufferStorage() instead)
        [glContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];
        //attach the renderbuffer to the framebuffer
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderBuffer);

        //set up the coordinate system
        glViewport(0, 0, frame.size.width, frame.size.height);

        //|||||||||||||||--Remove this stuff later--||||||||||||||//
        //create the vertex and fragment shaders
        GLint vertexShader, fragmentShader;
        vertexShader = glCreateShader(GL_VERTEX_SHADER);
        fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);

        //get their source paths, and the source, stored in char arrays
        NSString *vertexShaderPath = [[NSBundle mainBundle] pathForResource:@"testShader" ofType:@"vsh"];
        NSString *fragmentShaderPath = [[NSBundle mainBundle] pathForResource:@"testShader" ofType:@"fsh"];
        const GLchar *vertexSource = (GLchar *)[[NSString stringWithContentsOfFile:vertexShaderPath encoding:NSUTF8StringEncoding error:nil] UTF8String];
        const GLchar *fragmentSource = (GLchar *)[[NSString stringWithContentsOfFile:fragmentShaderPath encoding:NSUTF8StringEncoding error:nil] UTF8String];
        NSLog(@"\n--- Vertex Source ---\n%s\n--- Fragment Source ---\n%s", vertexSource, fragmentSource);

        //associate the source strings with each shader
        glShaderSource(vertexShader, 1, &vertexSource, NULL);
        glShaderSource(fragmentShader, 1, &fragmentSource, NULL);

        //compile the vertex shader, check for errors
        glCompileShader(vertexShader);
        GLint compiled;
        glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &compiled);
        if (!compiled) {
            GLint infoLen = 0;
            glGetShaderiv(vertexShader, GL_INFO_LOG_LENGTH, &infoLen);
            GLchar *infoLog = (GLchar *)malloc(sizeof(GLchar) * infoLen);
            glGetShaderInfoLog(vertexShader, infoLen, NULL, infoLog);
            NSLog(@"\n--- Vertex Shader Error ---\n%s", infoLog);
            free(infoLog);
        }

        //compile the fragment shader, check for errors
        glCompileShader(fragmentShader);
        glGetShaderiv(fragmentShader, GL_COMPILE_STATUS, &compiled);
        if (!compiled) {
            GLint infoLen = 0;
            glGetShaderiv(fragmentShader, GL_INFO_LOG_LENGTH, &infoLen);
            GLchar *infoLog = (GLchar *)malloc(sizeof(GLchar) * infoLen);
            glGetShaderInfoLog(fragmentShader, infoLen, NULL, infoLog);
            NSLog(@"\n--- Fragment Shader Error ---\n%s", infoLog);
            free(infoLog);
        }

        //create a program and attach both shaders
        testProgram = glCreateProgram();
        glAttachShader(testProgram, vertexShader);
        glAttachShader(testProgram, fragmentShader);

        //bind some attribute locations...
        glBindAttribLocation(testProgram, 0, "position");
        glBindAttribLocation(testProgram, 1, "inputTextureCoordinate");

        //link and use the program, make sure it worked :P
        glLinkProgram(testProgram);
        glUseProgram(testProgram);
        GLint linked;
        glGetProgramiv(testProgram, GL_LINK_STATUS, &linked);
        if (!linked) {
            GLint infoLen = 0;
            glGetProgramiv(testProgram, GL_INFO_LOG_LENGTH, &infoLen);
            GLchar *infoLog = (GLchar *)malloc(sizeof(GLchar) * infoLen);
            glGetProgramInfoLog(testProgram, infoLen, NULL, infoLog);
            NSLog(@"%s", infoLog);
            free(infoLog);
        }

        videoFrameUniform = glGetUniformLocation(testProgram, "videoFrame");
#if !TARGET_IPHONE_SIMULATOR
        //holds any error that occurs
        NSError *error = nil;

        //create a new capture session, set the preset, create + add the video camera input
        AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
        [captureSession setSessionPreset:AVCaptureSessionPreset640x480];
        AVCaptureDevice *videoCamera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCamera error:&error];
        [captureSession addInput:videoInput];

        //set up the data output object, tell it to discard late video frames for no lag
        AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
        dataOutput.alwaysDiscardsLateVideoFrames = YES;

        //create a new dispatch queue for all the sample buffers to be called into
        dispatch_queue_t queue;
        queue = dispatch_queue_create("cameraQueue", NULL);
        [dataOutput setSampleBufferDelegate:self queue:queue];
        dispatch_release(queue);

        //set some settings on the video data output
        NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
        NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
        NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
        [dataOutput setVideoSettings:videoSettings];

        //add the data output
        [captureSession addOutput:dataOutput];

        //start the capture session running
        [captureSession startRunning];
#endif
        //|||||||||||||||--Remove this stuff later--||||||||||||||//

        //draw the view
        [self drawView];
    }
    return self;
}
- (void)drawView {
    //set what color clear is, and then clear the buffer
    glClearColor(0.2f, 0.589f, 0.12f, 1);
    glClear(GL_COLOR_BUFFER_BIT); //HUH?
    [glContext presentRenderbuffer:GL_RENDERBUFFER];
}
Answer (Bra*_*son):
This sample application of mine has three shaders that perform various levels of processing and display of camera frames to the screen. I explain how the application works in my post here, as well as in the OpenGL ES 2.0 session of my class on iTunes U.
In fact, the shaders here look like direct copies of the ones I use for straight display of video frames, so you're probably already using that application as a template. I assume my sample application runs just fine on your device?
If so, then there has to be some difference between the starting sample and your application. You appear to have simplified my sample by pulling some of the code I had in the -drawFrame method into the end of your delegate method, which should work fine, so that's not the problem. I'll assume the rest of your OpenGL setup is identical to what I had in that sample, so the scene is configured correctly.
Looking through my code and comparing it to what you've posted, all that I can see that's different is a missing call to glUseProgram() in your code. If you've properly compiled and linked your shader program in code other than what you've shown here, you just need to call glUseProgram() somewhere before you update the uniform values.
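For example (a sketch against the question's own posted delegate code; the exact placement matters less than it coming before the glUniform1i() call):

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, videoFrameTexture);

    // Make the compiled and linked program current before touching its uniforms
    glUseProgram(testProgram);

    // Update uniform values
    glUniform1i(videoFrameUniform, 0);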
Also, you're binding your renderbuffer, but you may need to call
[context presentRenderbuffer:GL_RENDERBUFFER];
after that last line to make sure its contents get to the screen (where context is your EAGLContext instance).
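Putting the two suggestions together, the tail end of the delegate method might look something like this (a sketch only, using the glContext and renderBuffer ivars from the question's setup code; the CVPixelBufferUnlockBaseAddress() call is extra housekeeping I'd add, since the question locks the pixel buffer but never unlocks it):

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer);
    [glContext presentRenderbuffer:GL_RENDERBUFFER];

    // Balance the CVPixelBufferLockBaseAddress() call from the top of the method
    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);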