My app is landscape only. I'm presenting the AVCaptureVideoPreviewLayer like this:
self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[self.previewLayer setBackgroundColor:[[UIColor blackColor] CGColor]];
[self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
NSLog(@"previewView: %@", self.previewView);
CALayer *rootLayer = [self.previewView layer];
[rootLayer setMasksToBounds:YES];
[self.previewLayer setFrame:[rootLayer bounds]];
NSLog(@"previewlayer: %f, %f, %f, %f", self.previewLayer.frame.origin.x, self.previewLayer.frame.origin.y, self.previewLayer.frame.size.width, self.previewLayer.frame.size.height);
[rootLayer addSublayer:self.previewLayer];
[session startRunning];
self.previewView's frame is (0, 0, 568, 320), which is correct. self.previewLayer logs a frame of (0, 0, 568, 320), which in theory is correct. However, the camera preview shows up as a portrait rectangle in the middle of the landscape screen, and the preview image is oriented 90 degrees wrong. What am I doing wrong? I need the camera preview layer to fill the screen in landscape mode, with the image correctly oriented.
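For what it's worth, AVCaptureVideoPreviewLayer does not rotate with the interface on its own. A minimal Swift sketch of one common fix, assuming previewLayer is the layer created above (the Objective-C equivalent sets self.previewLayer.connection.videoOrientation the same way):

if let connection = previewLayer.connection, connection.isVideoOrientationSupported {
    // Match the preview to a landscape-only UI (.landscapeLeft for the other direction)
    connection.videoOrientation = .landscapeRight
}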
Using this tutorial: http://www.musicalgeometry.com/?p=1297 I created a custom overlay and image capture with AVCaptureSession.

I'm trying to let the user switch between the front and back cameras. Here is my camera-switching code in CaptureSessionManager:
- (void)addVideoInputFrontCamera:(BOOL)front {
    NSArray *devices = [AVCaptureDevice devices];
    AVCaptureDevice *frontCamera;
    AVCaptureDevice *backCamera;

    for (AVCaptureDevice *device in devices) {
        //NSLog(@"Device name: %@", [device localizedName]);
        if ([device hasMediaType:AVMediaTypeVideo]) {
            if ([device position] == AVCaptureDevicePositionBack) {
                //NSLog(@"Device position : back");
                backCamera = device;
            } else {
                //NSLog(@"Device position : front");
                frontCamera = device;
            }
        }
    }

    NSError *error = nil;
    if (front) {
        AVCaptureDeviceInput *frontFacingCameraDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:frontCamera error:&error];
        if (!error) {
            if ([[self captureSession] …
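The snippet above is cut off, but for reference, here is a minimal Swift sketch of the switching step itself; the function name is illustrative, and a real app would leave any audio inputs in place rather than removing everything:

import AVFoundation

// Swap the session's camera input inside a single configuration transaction.
func switchCamera(on session: AVCaptureSession, to device: AVCaptureDevice) {
    guard let newInput = try? AVCaptureDeviceInput(device: device) else { return }
    session.beginConfiguration()
    for oldInput in session.inputs {
        session.removeInput(oldInput)  // drop the previous camera input
    }
    if session.canAddInput(newInput) {
        session.addInput(newInput)
    }
    session.commitConfiguration()
}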
So I followed Apple's instructions for capturing a video session with AVCaptureSession: http://developer.apple.com/iphone/library/qa/qa2010/qa1702.html. One problem I'm facing is that even though the camera/iPhone orientation is vertical (and the AVCaptureVideoPreviewLayer shows a vertical camera stream), the output image appears to be in landscape mode. I checked the width and height of imageBuffer inside imageFromSampleBuffer: from the sample code, and I got 640px and 480px respectively. Does anyone know why this happens?

Thanks!
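The camera sensor delivers sample buffers in its native landscape orientation, which is why the buffer reports 640x480 even while the preview looks portrait. One option, sketched below in Swift under the assumption that videoOutput is the AVCaptureVideoDataOutput from that setup, is to ask the output's connection for rotated buffers (this costs some performance):

import AVFoundation

func deliverPortraitBuffers(from videoOutput: AVCaptureVideoDataOutput) {
    if let connection = videoOutput.connection(with: .video),
       connection.isVideoOrientationSupported {
        // Buffers now arrive rotated to portrait instead of the sensor's landscape
        connection.videoOrientation = .portrait
    }
}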
The only way I know of to turn on the flash on the iPhone 4 and keep it on is to turn on the video camera. I'm not quite sure about the code, though. Here is what I'm trying:
-(IBAction)turnTorchOn {
    AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
    AVCaptureDevice *videoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&error];
    if (videoInput) {
        [captureSession addInput:videoInput];
        AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
        [videoOutput setSampleBufferDelegate:self queue:dispatch_get_current_queue()];
        [captureSession addOutput:videoOutput];
        [captureSession startRunning];
        videoCaptureDevice.torchMode = AVCaptureTorchModeOn;
    }
}
Does anyone know whether this works, or am I missing something? (I don't have an iPhone 4 to test with yet; I'm just trying out some of the new APIs.)
Thanks
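For reference, on current iOS versions the torch does not need a running capture session at all, but it does need a configuration lock on the device. A minimal Swift sketch (the function name is illustrative):

import AVFoundation

func turnTorchOn() {
    guard let device = AVCaptureDevice.default(for: .video), device.hasTorch else { return }
    do {
        try device.lockForConfiguration()  // required before touching torchMode
        device.torchMode = .on
        device.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}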
I'm trying to upgrade my app to Swift 4, but the barcode reader isn't working.

I've isolated the barcode reader code and it still doesn't work. The camera works, but it doesn't detect barcodes.

The code ran perfectly in Swift 3 on iOS 10.

Here is the complete code:
import AVFoundation
import UIKit

class ViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {
    var captureSession: AVCaptureSession!
    var previewLayer: AVCaptureVideoPreviewLayer!

    override func viewDidLoad() {
        super.viewDidLoad()

        view.backgroundColor = UIColor.black
        captureSession = AVCaptureSession()

        let videoCaptureDevice = AVCaptureDevice.default(for: AVMediaType.video)
        let videoInput: AVCaptureDeviceInput

        do {
            videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice!)
        } catch {
            return
        }

        if (captureSession.canAddInput(videoInput)) {
            captureSession.addInput(videoInput)
        } else {
            failed()
            return
        }

        let metadataOutput = AVCaptureMetadataOutput()

        if (captureSession.canAddOutput(metadataOutput)) {
            captureSession.addOutput(metadataOutput)
            metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
            metadataOutput.metadataObjectTypes = [AVMetadataObject.ObjectType.ean8, AVMetadataObject.ObjectType.ean13, …
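A likely culprit for this exact migration: Swift 4 renamed the AVCaptureMetadataOutputObjectsDelegate callback, so a delegate that still implements the Swift 3 captureOutput(_:didOutputMetadataObjects:from:) is simply never called, and nothing is detected even though the camera runs. The Swift 4 method looks like this (the body is a minimal sketch):

func metadataOutput(_ output: AVCaptureMetadataOutput,
                    didOutput metadataObjects: [AVMetadataObject],
                    from connection: AVCaptureConnection) {
    // Under the old Swift 3 name, this method silently never fires in Swift 4
    guard let readable = metadataObjects.first as? AVMetadataMachineReadableCodeObject,
          let value = readable.stringValue else { return }
    print("Scanned: \(value)")
}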
This is driving me crazy because I can't get it to work. I have the following scenario:

I'm creating my own camera interface with AVCaptureSession and AVCaptureVideoPreviewLayer. The interface displays a rect; beneath it, the AVCaptureVideoPreviewLayer fills the entire screen.

I want the captured image to be cropped so that the resulting image shows exactly what was visible in the rect on the display.

My setup looks like this:
_session = [[AVCaptureSession alloc] init];
AVCaptureSession *session = _session;
session.sessionPreset = AVCaptureSessionPresetPhoto;

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (camera == nil) {
    [self showImagePicker];
    _isSetup = YES;
    return;
}

AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureVideoPreviewLayer.frame = self.liveCapturePlaceholderView.bounds;
[self.liveCapturePlaceholderView.layer addSublayer:captureVideoPreviewLayer];

NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (error) {
    HGAlertViewWrapper *av = [[HGAlertViewWrapper alloc] initWithTitle:kFailedConnectingToCameraAlertViewTitle message:kFailedConnectingToCameraAlertViewMessage cancelButtonTitle:kFailedConnectingToCameraAlertViewCancelButtonTitle otherButtonTitles:@[kFailedConnectingToCameraAlertViewRetryButtonTitle]];
    [av showWithBlock:^(NSString *buttonTitle){
        if ([buttonTitle isEqualToString:kFailedConnectingToCameraAlertViewCancelButtonTitle]) { …
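The snippet is truncated, but for the cropping question itself, a common approach is to let the preview layer translate the on-screen rect into normalized capture coordinates and then crop the captured image. A Swift sketch, assuming the resizeAspectFill preview from the setup above and glossing over UIImage orientation handling, which a production version must deal with:

import AVFoundation
import UIKit

func crop(_ image: UIImage, toLayerRect layerRect: CGRect,
          using previewLayer: AVCaptureVideoPreviewLayer) -> UIImage? {
    // Normalized (0...1) rect in capture-output coordinates
    let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: layerRect)
    guard let cgImage = image.cgImage else { return nil }
    let pixelRect = CGRect(x: outputRect.origin.x * CGFloat(cgImage.width),
                           y: outputRect.origin.y * CGFloat(cgImage.height),
                           width: outputRect.size.width * CGFloat(cgImage.width),
                           height: outputRect.size.height * CGFloat(cgImage.height))
    guard let croppedImage = cgImage.cropping(to: pixelRect) else { return nil }
    return UIImage(cgImage: croppedImage, scale: image.scale, orientation: image.imageOrientation)
}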
I have a UIViewController in which I use AVCaptureSession to show the camera, and it works great and fast. I placed a UIButton object on top of this camera view and added an IBAction for the button.

This is what it looks like now:

Now, when the user taps the button, I want to take a picture of the current camera view:
@IBAction func takePicture(sender: AnyObject) {
    // omg, what to do?!
}
I have no idea how to do this. I imagine there might be something like:
let captureSession = AVCaptureSession()
var myDearPicture = captureSession.takePicture() as UIImage // something like it?
The full controller code is here: https://gist.github.com/rodrigoalvesvieira/392d683435ee29305059, in case it helps.
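One era-appropriate approach is AVCaptureStillImageOutput (superseded by AVCapturePhotoOutput in iOS 10). A hedged sketch in modern Swift syntax, assuming captureSession is the session driving the preview:

import AVFoundation
import UIKit

// During session setup (once):
let stillImageOutput = AVCaptureStillImageOutput()
stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
if captureSession.canAddOutput(stillImageOutput) {
    captureSession.addOutput(stillImageOutput)
}

// Then inside takePicture(sender:):
if let connection = stillImageOutput.connection(with: .video) {
    stillImageOutput.captureStillImageAsynchronously(from: connection) { buffer, error in
        guard error == nil, let buffer = buffer,
              let data = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(buffer),
              let image = UIImage(data: data) else { return }
        print("Captured image: \(image)")  // display or save the UIImage here
    }
}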
I'm building a QR code scanner in Swift, and in that respect everything works. The problem I have is that I'm trying to make only a small area of the entire visible AVCaptureVideoPreviewLayer able to scan QR codes. I found out that in order to specify which area of the screen can read/capture QR codes, I have to use a property of AVCaptureMetadataOutput called rectOfInterest. The trouble is that when I assigned a CGRect to it, I couldn't scan anything. After doing more research online, I found people suggesting that I need a method called metadataOutputRectOfInterestForRect to convert a CGRect into the format that the rectOfInterest property can actually use.

However, the big problem I've now run into is that when I use this method, metadataOutputRectOfInterestForRect, I get the error CGAffineTransformInvert: singular matrix. Can anyone tell me why I'm getting this error? I believe I'm using the method correctly according to the Apple developer documentation, and I believe I need it to achieve my goal based on everything I've found online. I'll include the links to the documentation I've found so far, along with the code sample of the function I use to scan QR codes.
Code sample:
func startScan() {
    // Get an instance of the AVCaptureDevice class to initialize a device object and provide the video
    // as the media type parameter.
    let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

    // Get an instance of the AVCaptureDeviceInput class using the previous device object.
    var error: NSError?
    let input: AnyObject! = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &error)

    if (error != nil) {
        // If any error occurs, …
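A frequent cause of CGAffineTransformInvert: singular matrix here is calling the conversion too early, before the session is running and the preview layer has a non-zero frame, so the layer cannot yet map layer coordinates to capture coordinates. A sketch of the usual workaround, in Swift with modern API names (in the Swift 2-era API the conversion method is metadataOutputRectOfInterestForRect(_:)); it assumes captureSession, previewLayer, and metadataOutput from a setup like the one above, and the 200x200 rect is illustrative:

captureSession.startRunning()

// Convert the on-screen scan rect only once the layer can actually do the mapping
let scanRect = CGRect(x: 60, y: 200, width: 200, height: 200)
metadataOutput.rectOfInterest =
    previewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)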
My application currently uses AVFoundation to take raw camera data from the iPhone's rear camera and display it on an AVCaptureVideoPreviewLayer in real time.

My goal is to conditionally apply simple image filters to the preview layer. The images are not saved, so I don't need to capture the output. For example, I'd like to toggle a setting that converts the video in the preview layer to black & white.
I found a question here that seems to achieve something similar by capturing individual video frames in a buffer, applying the desired transformations, and then displaying each frame as a UIImage. For several reasons this seems like overkill for my project, and I'd like to avoid any performance issues it could cause.
Is this the only way to accomplish my goal?
As I mentioned, I don't intend to capture any of the AVCaptureSession's video, just preview it.
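Since AVCaptureVideoPreviewLayer cannot apply filters on iOS, the usual pattern is unfortunately close to the question linked above: feed frames from an AVCaptureVideoDataOutput through Core Image and display the result yourself. A Swift sketch, with the session setup omitted and a plain UIImageView standing in for a faster Metal- or GL-backed view:

import AVFoundation
import CoreImage
import UIKit

class FilteredPreviewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    let imageView = UIImageView()  // stands in for the preview layer
    var monochromeEnabled = false  // the toggled setting
    private let ciContext = CIContext()

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        var image = CIImage(cvPixelBuffer: pixelBuffer)
        if monochromeEnabled, let filter = CIFilter(name: "CIPhotoEffectMono") {
            filter.setValue(image, forKey: kCIInputImageKey)
            image = filter.outputImage ?? image  // black & white when enabled
        }
        guard let cgImage = ciContext.createCGImage(image, from: image.extent) else { return }
        DispatchQueue.main.async { self.imageView.image = UIImage(cgImage: cgImage) }
    }
}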