aru*_*rul 7 opengl-es objective-c ios avassetreader
I am trying to play a video (MP4/H.263) on iOS, but the result comes out badly distorted. This is the code that initializes the asset reader:
mTextureHandle = [self createTexture:CGSizeMake(400,400)];
NSURL * url = [NSURL fileURLWithPath:file];
mAsset = [[AVURLAsset alloc] initWithURL:url options:NULL];
NSArray * tracks = [mAsset tracksWithMediaType:AVMediaTypeVideo];
mTrack = [tracks objectAtIndex:0];
NSLog(@"Tracks: %i", [tracks count]);
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary * settings = [[NSDictionary alloc] initWithObjectsAndKeys:value, key, nil];
mOutput = [[AVAssetReaderTrackOutput alloc]
initWithTrack:mTrack outputSettings:settings];
mReader = [[AVAssetReader alloc] initWithAsset:mAsset error:nil];
[mReader addOutput:mOutput];
That is it for the reader init; now the actual texturing:
CMSampleBufferRef sampleBuffer = [mOutput copyNextSampleBuffer];
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
glBindTexture(GL_TEXTURE_2D, mTextureHandle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 600, 400, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress( pixelBuffer ));
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
CFRelease(sampleBuffer);
Everything works fine... except that the rendered image looks like this; sliced and skewed?

I even tried looking at the AVAssetTrack's preferred transform matrix, to no avail, since it always returns CGAffineTransformIdentity.
Side note: if I switch the source to the camera, the image renders fine. Am I missing some decompression step? Shouldn't that be handled by the asset reader?
Thanks!
I think the CMSampleBuffer pads its rows for performance reasons, so you need to choose the right width for the texture.
Try setting the width of the texture to CVPixelBufferGetBytesPerRow(pixelBuffer) / 4 (change the divisor if your video format uses something other than 4 bytes per pixel).
Viewed: 5715 times