Question by use*_*509 (score 13). Tags: objective-c, avfoundation, calayer, ios, avcapturesession
I have a photo app that uses AVFoundation. I set up a preview layer using AVCaptureVideoPreviewLayer that takes up the top half of the screen, so when users frame their photo, they can only see what the top half of the screen shows.
This works great, but when the user actually takes the photo and I try to set that photo as the layer's contents, the image comes out distorted. I did some research and realized I need to crop the image.
All I want to do is crop the full captured image so that all that remains is exactly what the user originally saw in the top half of the screen.
I have been able to get close, but only by entering manual CGRect values, and it still does not look perfect. There has to be an easier way to do this.
Over the past two days I have gone through practically every post on Stack Overflow about cropping images, and nothing has worked.
There must be a way to programmatically crop the captured image so that the final image is exactly what was originally seen in the preview layer.
Here is my viewDidLoad implementation:
- (void)viewDidLoad
{
    [super viewDidLoad];

    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    [session setSessionPreset:AVCaptureSessionPresetPhoto];

    AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
    if ([session canAddInput:deviceInput]) {
        [session addInput:deviceInput];
    }

    // The preview layer fills the top half of the screen.
    CALayer *rootLayer = [[self view] layer];
    [rootLayer setMasksToBounds:YES];
    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    [_previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height / 2)];
    [_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    [rootLayer insertSublayer:_previewLayer atIndex:0];

    _stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    if ([session canAddOutput:_stillImageOutput]) {
        [session addOutput:_stillImageOutput];
    }

    [session startRunning];
}
Here is the code that runs when the user presses the button to capture a photo:
- (IBAction)stillImageCapture {
    // Find the video connection on the still image output.
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in _stillImageOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }

    NSLog(@"about to request a capture from: %@", _stillImageOutput);
    [_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (imageDataSampleBuffer) {
            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
            UIImage *image = [[UIImage alloc] initWithData:imageData];

            CALayer *subLayer = [CALayer layer];
            subLayer.frame = _previewLayer.frame;
            image = [self rotate:image andOrientation:image.imageOrientation]; // custom helper (not shown)

            // Below is the crop that is sort of working for me, but as you can
            // see I am manually entering in values and just guessing, and it
            // still does not look perfect.
            CGRect cropRect = CGRectMake(0, 650, 3000, 2000);
            CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
            subLayer.contents = (id)[UIImage imageWithCGImage:imageRef].CGImage;
            CGImageRelease(imageRef); // CGImageCreateWithImageInRect returns a +1 reference.
            [_previewLayer addSublayer:subLayer];
        }
    }];
}
Answer by Cab*_*bus (score 19):
Have a look at AVCaptureVideoPreviewLayer's

-(CGRect)metadataOutputRectOfInterestForRect:(CGRect)layerRect
This method lets you easily convert the layer's visible CGRect into coordinates of the actual camera output.
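For instance, here is a minimal sketch (reusing _previewLayer from the question; the variable name is just for illustration) showing that the returned rect is normalized, i.e. expressed as fractions of the output image rather than in pixels:

// Convert the preview layer's visible bounds into the normalized
// rect of interest in camera-output space. Each component of the
// result is a fraction in [0, 1]; the exact values depend on the
// layer's frame and its videoGravity.
CGRect fractionalRect = [_previewLayer metadataOutputRectOfInterestForRect:_previewLayer.bounds];
NSLog(@"rect of interest (fractional): %@", NSStringFromCGRect(fractionalRect));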
One word of caution: the physical camera is not mounted "top side up" but rotated 90 degrees clockwise. (So if you hold your iPhone with the Home Button on the right, the camera is actually top side up.)
Keeping this in mind, you have to convert the CGRect the above method gives you in order to crop the image to exactly what is on screen.
Example:
CGRect visibleLayerFrame = ...; // THE ACTUAL VISIBLE AREA IN THE LAYER FRAME
CGRect metaRect = [self.previewView.layer metadataOutputRectOfInterestForRect:visibleLayerFrame];

CGSize originalSize = [originalImage size];

if (UIInterfaceOrientationIsPortrait(_snapInterfaceOrientation)) {
    // For portrait images, swap the size of the image, because
    // here the output image is actually rotated relative to what you see on screen.
    CGFloat temp = originalSize.width;
    originalSize.width = originalSize.height;
    originalSize.height = temp;
}

// metaRect is fractional, that's why we multiply here.
CGRect cropRect;
cropRect.origin.x = metaRect.origin.x * originalSize.width;
cropRect.origin.y = metaRect.origin.y * originalSize.height;
cropRect.size.width = metaRect.size.width * originalSize.width;
cropRect.size.height = metaRect.size.height * originalSize.height;
cropRect = CGRectIntegral(cropRect);
This might be a bit confusing, but what made me really understand it is this:
Hold your device "Home Button right" -> you'll see that the x-axis actually runs along the "height" of your iPhone, while the y-axis runs along its "width". That's why you have to swap the sizes for portrait images ;)
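For completeness, here is a hedged sketch of actually applying the cropRect computed above (the Swift answer below does the equivalent); originalImage is assumed to be the full-resolution captured UIImage from the example:

// Sketch only: crop the captured image to the computed cropRect and
// re-apply the original orientation so the result displays upright.
CGImageRef croppedRef = CGImageCreateWithImageInRect(originalImage.CGImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef
                                            scale:originalImage.scale
                                      orientation:originalImage.imageOrientation];
CGImageRelease(croppedRef); // CGImageCreateWithImageInRect returns a +1 reference.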
@Cabus has a working solution and you should up-vote his answer. However, I wrote my own version in Swift with the following:
// The image returned in initialImageData will be larger than what
// is shown in the AVCaptureVideoPreviewLayer, so we need to crop it.
let image : UIImage = UIImage(data: initialImageData)!

let originalSize : CGSize
let visibleLayerFrame = self.previewView!.bounds // THE ACTUAL VISIBLE AREA IN THE LAYER FRAME

// Calculate the fractional size that is shown in the preview
let metaRect : CGRect = (self.videoPreviewLayer?.metadataOutputRectOfInterestForRect(visibleLayerFrame))!

if (image.imageOrientation == UIImageOrientation.Left || image.imageOrientation == UIImageOrientation.Right) {
    // For these images (which are portrait), swap the size of the
    // image, because here the output image is actually rotated
    // relative to what you see on screen.
    originalSize = CGSize(width: image.size.height, height: image.size.width)
}
else {
    originalSize = image.size
}

// metaRect is fractional, that's why we multiply here.
let cropRect : CGRect = CGRectIntegral(
    CGRect(x: metaRect.origin.x * originalSize.width,
           y: metaRect.origin.y * originalSize.height,
           width: metaRect.size.width * originalSize.width,
           height: metaRect.size.height * originalSize.height))

let finalImage : UIImage =
    UIImage(CGImage: CGImageCreateWithImageInRect(image.CGImage, cropRect)!,
            scale: 1,
            orientation: image.imageOrientation)