I'm using OpenCV 2.2 on the iPhone to detect faces. I'm using iOS 4's AVCaptureSession to get access to the camera stream, as shown in the code below.

My challenge is that the video frames arrive as CVBufferRef (pointing to a CVImageBuffer) objects, and they come in oriented as landscape, 480px wide by 300px high. This is fine if you're holding the phone sideways, but when the phone is in the upright position I want to rotate these frames 90 degrees clockwise so that OpenCV can find the faces correctly.

I could convert the CVBufferRef to a CGImage, then to a UIImage, and then rotate, the way this person is doing it: Rotating a CGImage taken from a video frame

However, that burns a lot of CPU. I'm looking for a faster way to rotate the incoming images, ideally using the GPU to do the processing, if possible.

Any ideas?

Ian
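For reference, the index arithmetic behind a 90° clockwise rotation of a tightly packed 32-bit-per-pixel buffer can be sketched in plain C. This is only the CPU-side math, not the GPU path the question asks for, and `rotate90cw` is an illustrative helper, not an OpenCV or Core Video API:

```c
#include <stddef.h>
#include <stdint.h>

/* Rotate a tightly packed 32-bit-per-pixel (e.g. BGRA) image 90° clockwise.
   src is srcW x srcH pixels; dst must hold srcH x srcW pixels.
   Source pixel (x, y) lands at destination (srcH - 1 - y, x). */
static void rotate90cw(const uint32_t *src, uint32_t *dst,
                       size_t srcW, size_t srcH)
{
    for (size_t y = 0; y < srcH; y++) {
        for (size_t x = 0; x < srcW; x++) {
            size_t dstX = srcH - 1 - y;   /* new column index */
            size_t dstY = x;              /* new row index    */
            dst[dstY * srcH + dstX] = src[y * srcW + x];
        }
    }
}
```

A GPU approach would instead upload the frame as a texture and draw it with a rotated transform; the arithmetic above is what that transform effectively performs per pixel.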
Code sample:
- (void)startCameraCapture {
    // Start up the face detector
    faceDetector = [[FaceDetector alloc] initWithCascade:@"haarcascade_frontalface_alt2" withFileExtension:@"xml"];

    // Create the AVCaptureSession
    session = [[AVCaptureSession alloc] init];

    // Create a preview layer to show the output from the camera
    AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    previewLayer.frame = previewView.frame;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [previewView.layer addSublayer:previewLayer];

    // Get the default camera device
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    // Create an AVCaptureInput with the camera …

I'm recording live video in an iOS app. On another Stack Overflow page (link) I found that you can use a vImage_Buffer to work on the frames.
The problem is that I don't know how to get a CVPixelBufferRef back from the output vImage_Buffer.

Here is the code given in that other post:
NSInteger cropX0 = 100,
          cropY0 = 100,
          cropHeight = 100,
          cropWidth = 100,
          outWidth = 480,
          outHeight = 480;

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

vImage_Buffer inBuff;
inBuff.height = cropHeight;
inBuff.width = cropWidth;
inBuff.rowBytes = bytesPerRow;

int startpos = cropY0 * bytesPerRow + 4 * cropX0;
inBuff.data = baseAddress + startpos;

unsigned char *outImg = (unsigned char *)malloc(4 * outWidth * outHeight);
vImage_Buffer outBuff = {outImg, outHeight, outWidth, 4 * outWidth};

vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, 0);
if (err != kvImageNoError) NSLog(@" error …

I'm using glReadPixels to read data into a CVPixelBufferRef, which I then use as the input to an AVAssetWriter. Unfortunately, the pixel formats don't seem to match.
I believe glReadPixels returns pixel data in RGBA format, while AVAssetWriter expects pixel data in ARGB format. What's the best way to convert RGBA to ARGB?

Here's what I've tried so far:

Bit manipulation, which didn't work because a CVPixelBufferRef doesn't seem to support subscripting. Using a CGImageRef as an intermediate step does work… but I don't want 50 extra lines of code that could also hurt performance.
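As a point of comparison for the bit-manipulation attempt: the conversion itself is a fixed per-pixel byte rotation, and can be written in a few lines of plain C once you have the buffer's raw base address. The function name and the tightly-packed assumption are mine; on iOS the same permutation is what Accelerate's vImagePermuteChannels_ARGB8888 performs in a single call:

```c
#include <stddef.h>
#include <stdint.h>

/* Convert a tightly packed RGBA8888 buffer to ARGB8888 in place.
   Each pixel's bytes move from R,G,B,A to A,R,G,B. */
static void rgba_to_argb(uint8_t *pixels, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; i++) {
        uint8_t *p = pixels + 4 * i;
        uint8_t a = p[3];
        p[3] = p[2];   /* B moves to byte 3 */
        p[2] = p[1];   /* G moves to byte 2 */
        p[1] = p[0];   /* R moves to byte 1 */
        p[0] = a;      /* A moves to byte 0 */
    }
}
```

This has to operate on the buffer's raw bytes (e.g. after CVPixelBufferLockBaseAddress), not through subscripting on the CVPixelBufferRef itself.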
I'm trying to resize a CVPixelBuffer to 128x128. I'm working with 750x750 frames. Currently I create a new CGImage from the CVPixelBuffer, resize it, and then convert back to a CVPixelBuffer. Here is my code:
func getImageFromSampleBuffer(buffer: CMSampleBuffer) -> UIImage? {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(buffer) {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let context = CIContext()
        let imageRect = CGRect(x: 0, y: 0, width: 128, height: 128)
        if let image = context.createCGImage(ciImage, from: imageRect) {
            let t = CIImage(cgImage: image)
            let new = t.applying(transformation)
            context.render(new, to: pixelBuffer)
            return UIImage(cgImage: image, scale: UIScreen.main.scale, orientation: .right)
        }
    }
    return nil
}
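Setting Core Image aside, the resampling step itself reduces to simple index mapping. A minimal nearest-neighbor sketch in plain C, assuming a tightly packed 32-bit-per-pixel buffer (illustrative only; a production path would stay on Core Image or vImageScale_ARGB8888 rather than a hand-rolled loop):

```c
#include <stddef.h>
#include <stdint.h>

/* Nearest-neighbor resize of a tightly packed 32-bit-per-pixel image,
   e.g. 750x750 down to 128x128. */
static void resize_nearest(const uint32_t *src, size_t srcW, size_t srcH,
                           uint32_t *dst, size_t dstW, size_t dstH)
{
    for (size_t y = 0; y < dstH; y++) {
        size_t sy = y * srcH / dstH;       /* map destination row to source row */
        for (size_t x = 0; x < dstW; x++) {
            size_t sx = x * srcW / dstW;   /* map destination column to source column */
            dst[y * dstW + x] = src[sy * srcW + sx];
        }
    }
}
```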
I also tried scaling the CIImage and then converting it:
let t = …

I'm working on a video app in Swift.
In my app I need to crop and horizontally flip a CVPixelBuffer, and the returned result must also be a CVPixelBuffer.
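The pixel arithmetic for that crop-plus-horizontal-flip is straightforward once the buffer is treated as raw bytes with a row stride. A plain-C sketch, assuming 32 bits per pixel and byte strides in the style of CVPixelBufferGetBytesPerRow (the helper name and signature are mine):

```c
#include <stddef.h>
#include <stdint.h>

/* Copy a crop rectangle from src into dst while mirroring each row
   horizontally. Both buffers are 32 bits per pixel; strides are in bytes. */
static void crop_and_flip_h(const uint8_t *src, size_t srcRowBytes,
                            size_t cropX, size_t cropY,
                            uint8_t *dst, size_t dstRowBytes,
                            size_t cropW, size_t cropH)
{
    for (size_t y = 0; y < cropH; y++) {
        const uint32_t *srcRow =
            (const uint32_t *)(src + (cropY + y) * srcRowBytes) + cropX;
        uint32_t *dstRow = (uint32_t *)(dst + y * dstRowBytes);
        for (size_t x = 0; x < cropW; x++)
            dstRow[x] = srcRow[cropW - 1 - x];   /* mirror within the crop */
    }
}
```

The same loop can be pointed at CVPixelBuffer base addresses once both buffers are locked.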
I tried several things.
First, I used CVPixelBufferCreateWithBytes:
func resizePixelBuffer(_ pixelBuffer: CVPixelBuffer, destSize: CGSize)
    -> CVPixelBuffer?
{
    CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)!
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let pixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer)
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    var destPixelBuffer: CVPixelBuffer?

    let topMargin = (height - Int(destSize.height)) / 2
    let leftMargin = (width - Int(destSize.width)) / 2 * 4  // bytesPerPixel
    let offset = topMargin * bytesPerRow + leftMargin

    CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                 Int(destSize.width),
                                 Int(destSize.height),
                                 pixelFormat,
                                 baseAddress.advanced(by: offset),
                                 bytesPerRow,
nil, nil, …Run Code Online (Sandbox Code Playgroud) 看起来无论是什么AVVideoWidthKey,AVVideoHeightKey,AVVideoCleanApertureWidthKey,AVVideoCleanApertureHeightKey我选择,我的视频分辨率为320×240两种或480x360.
I'm trying to save the video at 480p. All my buffers are 640x480, my session uses AVCaptureSessionPreset640x480, everything is 640x480, yet my output video is still scaled down.

I'm using an AVAssetWriterInputPixelBufferAdaptor, and the CMSampleBufferRef I pass it is 640x480.

I've looked through Stack Overflow but haven't found this problem. :/