vod*_*ang asked (tags: iphone, objective-c, avfoundation, ios, cmsamplebufferref):
I am using AVFoundation and grabbing sample buffers from an AVCaptureVideoDataOutput. I can write them straight to my videoWriter with:
- (void)writeBufferFrame:(CMSampleBufferRef)sampleBuffer {
    CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (self.videoWriter.status != AVAssetWriterStatusWriting) {
        [self.videoWriter startWriting];
        [self.videoWriter startSessionAtSourceTime:lastSampleTime];
    }
    [self.videoWriterInput appendSampleBuffer:sampleBuffer];
}
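For context, the snippet above assumes videoWriter and videoWriterInput are already configured. A minimal sketch of that setup, with illustrative dimensions and an outputURL that is assumed to exist elsewhere:

// Hedged sketch of the writer setup the snippet above assumes.
NSError *error = nil;
self.videoWriter = [[AVAssetWriter alloc] initWithURL:outputURL
                                             fileType:AVFileTypeQuickTimeMovie
                                                error:&error];
NSDictionary *settings = @{AVVideoCodecKey: AVVideoCodecH264,
                           AVVideoWidthKey: @640,
                           AVVideoHeightKey: @480};
self.videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                           outputSettings:settings];
self.videoWriterInput.expectsMediaDataInRealTime = YES;   // live camera capture
[self.videoWriter addInput:self.videoWriterInput];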
What I want to do now is crop and scale the image inside the CMSampleBufferRef without converting it to a UIImage or CGImageRef first, because that hurts performance.
Ste*_*ten answered:
If you use vImage, you can work on the buffer data directly without converting it to any image format.
outImg contains the cropped and scaled image data; the ratio of outWidth to cropWidth sets the scaling.

#import <Accelerate/Accelerate.h>

int cropX0, cropY0, cropHeight, cropWidth, outWidth, outHeight;

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

// Point a vImage buffer at the crop rectangle inside the source frame.
vImage_Buffer inBuff;
inBuff.height = cropHeight;
inBuff.width = cropWidth;
inBuff.rowBytes = bytesPerRow;   // row stride of the FULL source buffer
int startpos = cropY0 * (int)bytesPerRow + 4 * cropX0;
inBuff.data = (unsigned char *)baseAddress + startpos;

// Destination buffer; the outWidth/cropWidth ratio determines the scale.
// The caller is responsible for freeing outImg.
unsigned char *outImg = (unsigned char *)malloc(4 * outWidth * outHeight);
vImage_Buffer outBuff = {outImg, outHeight, outWidth, 4 * outWidth};

vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, 0);
if (err != kvImageNoError) NSLog(@"error %ld", err);

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
So setting cropX0 = 0 and cropY0 = 0 with cropWidth and cropHeight equal to the original size means no cropping (the whole source image is used), and setting outWidth = cropWidth and outHeight = cropHeight gives no scaling. Note that inBuff.rowBytes should always be the row length of the full source buffer, not the cropped length.
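If you then need the scaled pixels back in a CVPixelBuffer (for example to append them through a pixel buffer adaptor), one option is CVPixelBufferCreateWithBytes. A minimal sketch, assuming outImg holds BGRA data as produced above; the release callback frees the malloc'd bytes once the buffer is released:

// Hedged sketch: wrap the vImage output in a CVPixelBuffer.
void releaseBytesCallback(void *releaseRefCon, const void *baseAddress) {
    free((void *)baseAddress);   // free the malloc'd outImg
}

CVPixelBufferRef croppedBuffer = NULL;
CVReturn result = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                               outWidth, outHeight,
                                               kCVPixelFormatType_32BGRA,
                                               outImg,
                                               4 * outWidth,
                                               releaseBytesCallback, NULL,
                                               NULL, &croppedBuffer);
if (result != kCVReturnSuccess) NSLog(@"CVPixelBufferCreateWithBytes failed: %d", result);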
You could also consider using Core Image (iOS 5.0+).
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)
                                           options:[NSDictionary dictionaryWithObjectsAndKeys:[NSNull null], kCIImageColorSpace, nil]];
ciImage = [[ciImage imageByApplyingTransform:myScaleTransform] imageByCroppingToRect:myRect];
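To get a CVPixelBuffer back out of the resulting CIImage, you can render it with a CIContext. A minimal sketch, assuming an AVAssetWriterInputPixelBufferAdaptor named adaptor (whose pool exists once writing has started) and a presentation time lastSampleTime taken from the sample buffer; both names are hypothetical:

// Hedged sketch: render the cropped/scaled CIImage into a fresh pixel buffer.
CIContext *ciContext = [CIContext contextWithOptions:nil];
CVPixelBufferRef renderedBuffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                   adaptor.pixelBufferPool,
                                   &renderedBuffer);
if (renderedBuffer) {
    [ciContext render:ciImage toCVPixelBuffer:renderedBuffer];
    [adaptor appendPixelBuffer:renderedBuffer withPresentationTime:lastSampleTime];
    CVPixelBufferRelease(renderedBuffer);
}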
Note: I hadn't noticed that the original question also asks for scaling. Either way, for those who just need to crop a CMSampleBuffer, here is the solution.
The buffer is just an array of pixels, so you can actually process it directly without vImage. The code is written in Swift, but I think the Objective-C equivalent is easy to work out (a sketch of one appears at the end of this answer).
First, make sure your CMSampleBuffer is in BGRA format. If it isn't, the preset you are using is probably YUV, and that will break the bytes-per-row value used later.
dataOutput = AVCaptureVideoDataOutput()
dataOutput.videoSettings = [
    String(kCVPixelBufferPixelFormatTypeKey):
        NSNumber(value: kCVPixelFormatType_32BGRA)
]
Then, when you get the sample buffer:
let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
CVPixelBufferLockBaseAddress(imageBuffer, .readOnly)
let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
let cropWidth = 640
let cropHeight = 640
let colorSpace = CGColorSpaceCreateDeviceRGB()

// Giving the context the full buffer's bytesPerRow but a smaller width/height
// crops the top-left cropWidth x cropHeight region of the frame.
let context = CGContext(data: baseAddress, width: cropWidth, height: cropHeight,
                        bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace,
                        bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)

// The cropped image now lives in the context. Create the CGImage while the
// base address is still locked; you could instead convert back to a
// CVPixelBuffer using CVPixelBufferCreateWithBytes if you want.
let cgImage: CGImage = context!.makeImage()!
CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly)

// create image
let image = UIImage(cgImage: cgImage)
If you want to crop from a specific position, add the following code:
// calculate the crop origin's byte offset into the buffer
let bytesPerPixel = 4
let startPoint = [ "x": 10, "y": 10 ]
let startAddress = baseAddress! + startPoint["y"]! * bytesPerRow + startPoint["x"]! * bytesPerPixel
and pass startAddress instead of baseAddress to CGContext(). Make sure not to read beyond the original image's width and height.
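And since the answer notes the Objective-C translation is straightforward, here is a hedged sketch of the same top-left 640x640 crop in Objective-C:

// Hedged Objective-C equivalent of the Swift crop above.
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

size_t cropWidth = 640, cropHeight = 640;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress,
                                             cropWidth, cropHeight, 8, bytesPerRow,
                                             colorSpace,
                                             kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);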