Swi*_*ier 5
Tags: avfoundation, face-detection, ios, avcapturesession, swift
I want to scan the front camera input for faces, detect them, and get them as UIImage objects. I'm using AVFoundation to scan for and detect faces.

Like this:
let input = try AVCaptureDeviceInput(device: captureDevice)

captureSession = AVCaptureSession()
captureSession!.addInput(input)

// Metadata output delivers detected faces as AVMetadataFaceObjects
output = AVCaptureMetadataOutput()
captureSession?.addOutput(output)
output.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
output.metadataObjectTypes = [AVMetadataObjectTypeFace]

// Preview layer shows the live camera feed
videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
videoPreviewLayer?.frame = view.layer.bounds
view.layer.addSublayer(videoPreviewLayer!)

captureSession?.startRunning()
In the delegate method didOutputMetadataObjects I get the face as an AVMetadataFaceObject and highlight it with a red frame, like this:
let metadataObj = metadataObjects[0] as! AVMetadataFaceObject
// Convert from metadata-output coordinates to preview-layer coordinates
let faceObject = videoPreviewLayer?.transformedMetadataObjectForMetadataObject(metadataObj)
faceFrame?.frame = faceObject!.bounds
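For context, that snippet lives inside the metadata delegate callback. A minimal sketch of the surrounding Swift 2 method, assuming the view controller itself is set as the AVCaptureMetadataOutputObjectsDelegate:

func captureOutput(captureOutput: AVCaptureOutput!,
                   didOutputMetadataObjects metadataObjects: [AnyObject]!,
                   fromConnection connection: AVCaptureConnection!) {
    // Bail out when no face is currently in view
    guard let metadataObj = metadataObjects.first as? AVMetadataFaceObject else { return }
    let faceObject = videoPreviewLayer?.transformedMetadataObjectForMetadataObject(metadataObj)
    faceFrame?.frame = faceObject!.bounds
}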
The question is: how can I get the faces as UIImages?

I tried to hook into 'didOutputSampleBuffer', but it is never called at all :c
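One likely reason didOutputSampleBuffer never fires: it belongs to AVCaptureVideoDataOutputSampleBufferDelegate, and it is only called once an AVCaptureVideoDataOutput has been added to the session; the AVCaptureMetadataOutput above does not deliver sample buffers. A minimal sketch of the extra wiring, assuming the same captureSession as above:

let videoOutput = AVCaptureVideoDataOutput()
// Sample buffers must be delivered on a serial queue
videoOutput.setSampleBufferDelegate(self, queue: dispatch_queue_create("videoQueue", DISPATCH_QUEUE_SERIAL))
if captureSession!.canAddOutput(videoOutput) {
    captureSession!.addOutput(videoOutput)
}

// Now called for every captured frame
func captureOutput(captureOutput: AVCaptureOutput!,
                   didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                   fromConnection connection: AVCaptureConnection!) {
    // Convert sampleBuffer to a UIImage here, then crop to the face bounds
}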
小智 1
- (UIImage *)screenshot {
    // Render the area under the red face frame into an image
    CGSize size = CGSizeMake(faceFrame.frame.size.width, faceFrame.frame.size.height);
    UIGraphicsBeginImageContextWithOptions(size, NO, [UIScreen mainScreen].scale);
    CGRect rec = CGRectMake(faceFrame.frame.origin.x, faceFrame.frame.origin.y, faceFrame.frame.size.width, faceFrame.frame.size.height);
    [_viewController.view drawViewHierarchyInRect:rec afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Taking some hints from the above:
// <<screenshot>> stands for the result of the screenshot method above
let contextImage: UIImage = <<screenshot>>
let cropRect: CGRect = CGRectMake(x, y, width, height) // the face bounds
let imageRef: CGImageRef = CGImageCreateWithImageInRect(contextImage.CGImage, cropRect)!
let image: UIImage = UIImage(CGImage: imageRef, scale: contextImage.scale, orientation: contextImage.imageOrientation)
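Putting the two snippets together, a minimal Swift 2 sketch of a helper that screenshots the view and crops it to the red highlight, assuming the hypothetical faceFrame view from the question. The scale factor matters because CGImageCreateWithImageInRect works in pixels, not points:

func faceImage() -> UIImage? {
    guard let cropFrame = faceFrame?.frame else { return nil }

    // Snapshot the whole view hierarchy, including the preview layer
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, UIScreen.mainScreen().scale)
    view.drawViewHierarchyInRect(view.bounds, afterScreenUpdates: true)
    let fullImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // Scale the crop rect from points to pixels
    let scale = fullImage.scale
    let cropRect = CGRectMake(cropFrame.origin.x * scale,
                              cropFrame.origin.y * scale,
                              cropFrame.size.width * scale,
                              cropFrame.size.height * scale)
    guard let imageRef = CGImageCreateWithImageInRect(fullImage.CGImage, cropRect) else { return nil }
    return UIImage(CGImage: imageRef, scale: scale, orientation: fullImage.imageOrientation)
}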