I got my AVCaptureSession working, and it replicates the Camera.app UI almost perfectly; however, after a few seconds the application crashes, and I just can't find what I'm doing wrong. I really hope someone knows how to optimize this!
I AM using ARC; and again, the whole session runs fine, but crashes after a little while. The AVCaptureSession delegate method gets called what feels like every split second. Is there a way to call that method only when the user presses the "take picture" button, and if so, how do I do that while keeping the "live" preview layer?
Thanks in advance!
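One common pattern (a sketch of my own, not from the post; the class and selector names are hypothetical): leave the session and its delegate running so the preview layer stays live, and have the per-frame callback return immediately unless a flag set by the "take picture" button is on. The same idea ports directly to Objective-C.

import AVFoundation

class CameraController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // Flipped to true by the "take picture" button; the next frame is then processed.
    private var wantsPhoto = false

    @objc func takePictureTapped() {
        wantsPhoto = true
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // The delegate fires for every frame; bail out cheaply unless asked.
        guard wantsPhoto else { return }
        wantsPhoto = false
        // ... convert sampleBuffer to an image and hand it to the UI here ...
    }
}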
Setting up the session
NSError *error = nil;
session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
[session addInput:input];
output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
// `version` (the running iOS version) is assumed to be determined elsewhere.
if(version >= 4.0 && version < 5.0) {
    output.minFrameDuration = CMTimeMake(1, 15);
}
output.alwaysDiscardsLateVideoFrames = YES;
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];
[self.view addSubview:camera_overlay];
[session …

I'm trying to limit the video capture frame rate for my app, because I've found that it's affecting VoiceOver performance.
Currently, it captures frames from the camera and then processes them with OpenGL routines as quickly as possible. I would like to set a specific frame rate during capture.
I was hoping to do this by using videoMinFrameDuration or minFrameDuration, but that seems to have no effect on performance. Any ideas?
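For what it's worth: on iOS 7 and later the frame rate is configured on the capture device rather than on the output or connection (minFrameDuration and videoMinFrameDuration are deprecated there, which could explain why they do nothing). A Swift sketch, assuming `device` is the back camera found below and that its activeFormat supports the rate:

import AVFoundation

func setFrameRate(_ fps: Int32, on device: AVCaptureDevice) {
    do {
        try device.lockForConfiguration()
        // Pin both bounds to the same duration to get a fixed frame rate.
        device.activeVideoMinFrameDuration = CMTimeMake(value: 1, timescale: fps)
        device.activeVideoMaxFrameDuration = CMTimeMake(value: 1, timescale: fps)
        device.unlockForConfiguration()
    } catch {
        print("Could not lock the device for configuration: \(error)")
    }
}

The capture setup from the question: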
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices)
{
    if ([device position] == AVCaptureDevicePositionBack)
    {
        backFacingCamera = device;
        // SET SOME OTHER PROPERTIES
    }
}
// Create the capture session
captureSession = [[AVCaptureSession alloc] init];
// Add the video input
NSError *error = nil;
videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:backFacingCamera error:&error] autorelease];
// Add the video frame output
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
// Start capturing …

I've created an AVCaptureSession and attached the front-facing camera to it:
do {
    try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
} catch { print("err") }
Now I want to start and stop recording on touch events. How would I go about doing this?
override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    print("touch")
    // Start Recording
}

override func touchesEnded(touches: Set<UITouch>, withEvent event: UIEvent?) {
    print("release")
    // End Recording and Save
}
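If "recording" here means writing a movie file, a sketch of one way to do it (my assumption, in current Swift syntax, which the question's snippet predates): add an AVCaptureMovieFileOutput to the session during setup, then start and stop it from the touch handlers.

import AVFoundation
import UIKit

class RecordingViewController: UIViewController, AVCaptureFileOutputRecordingDelegate {
    let movieOutput = AVCaptureMovieFileOutput() // added to the session during setup

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("capture.mov")
        movieOutput.startRecording(to: url, recordingDelegate: self)
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        movieOutput.stopRecording()
    }

    // Called once the movie file is fully written and ready to be saved.
    func fileOutput(_ output: AVCaptureFileOutput,
                    didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection],
                    error: Error?) {
        print("Recording finished at \(outputFileURL)")
    }
}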

I'm trying to understand the correct way to manipulate video output (CMPixelBuffer) using Metal.
As far as I understand it, there is an MTKView, and each CMPixelBuffer coming from the video output gets assigned to a Metal texture. So the final preview comes from the MTKView?
When I see the final result on screen, is it:
1) CMSampleBuffer -> Metal -> CMSampleBuffer

or

2) CMSampleBuffer -> Metal -> MTKView
I'm quite confused. Can someone put things into perspective?
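As far as I can tell (offered as an explanation, not from the post), the usual shape is option 2: each CMSampleBuffer's pixel buffer is wrapped in an MTLTexture via a CVMetalTextureCache (no copy), your shaders process it, and the MTKView draws the result; nothing is converted back into a CMSampleBuffer unless you also record the output. A sketch of the wrap step, assuming a BGRA buffer and a cache made earlier with CVMetalTextureCacheCreate:

import AVFoundation
import CoreVideo
import Metal

func makeTexture(from sampleBuffer: CMSampleBuffer,
                 cache: CVMetalTextureCache) -> MTLTexture? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)

    var cvTexture: CVMetalTexture?
    // Wraps the pixel buffer's backing store in a Metal texture without copying.
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, pixelBuffer,
                                              nil, .bgra8Unorm, width, height, 0, &cvTexture)
    return cvTexture.flatMap(CVMetalTextureGetTexture)
}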

I have an AVCaptureVideoPreviewLayer in my app that works well and shows the same preview video as the Camera app. I'd like to implement the Camera app's 2x zoom functionality. How do I do this?
Basically, when you tap the 1x icon to change it to 2x, I want my preview layer to change the video feed to the same scale you see in the Camera app (one possible approach is sketched below).
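For what it's worth (my suggestion, not part of the original post): zoom is set on the capture device itself via videoZoomFactor, and the preview layer picks it up automatically because it displays the device's output. A sketch, assuming `backCamera` is the device configured below:

import AVFoundation
import UIKit

func setZoom(_ factor: CGFloat, on device: AVCaptureDevice) {
    do {
        try device.lockForConfiguration()
        // 2.0 mimics the Camera app's 2x; clamp to what the hardware allows.
        device.videoZoomFactor = min(factor, device.activeFormat.videoMaxZoomFactor)
        device.unlockForConfiguration()
    } catch {
        print("Could not lock \(device.localizedName) for configuration: \(error)")
    }
}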
Setting up the preview layer
func startSession(){
    captureSession = AVCaptureSession()
    captureSession?.sessionPreset = AVCaptureSessionPresetPhoto
    let backCamera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
    // Catch error using the do catch block
    do {
        let input = try AVCaptureDeviceInput(device: backCamera)
        if (captureSession?.canAddInput(input) != nil){
            captureSession?.addInput(input)
            // Setup the preview layer
            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
            previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
            tempImageView.layer.addSublayer(previewLayer!)
            captureSession?.startRunning()
            // Set up AVCaptureVideoDataOutput
            let dataOutput = AVCaptureVideoDataOutput()
            dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString) : NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]
            dataOutput.alwaysDiscardsLateVideoFrames …

I'm currently capturing images with AVFoundation. Since I want to use the captured image in a Vision framework workflow, I need its orientation when converting it to a UIImage. How can I do that? From the documentation I found that AVCapturePhoto has a .metadata dictionary for accessing that information, but if I use the corresponding key, the result is nil.
Here is the delegate method of my capture routine:
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    // capture image finished
    print("Image captured.")
    print(photo)
    print(photo.metadata["kCGImagePropertyOrientation"]) // CGImageProperties for metadata keys for value retrieval.
}
I found the key under "CGImageProperties > Individual Image Properties". print(photo) does show me that an image was in fact captured, returning:
AVCapturePhoto: 0x1c1013940 pts:98386.095931 1/1 settings:uid:3 photo:{4032x3024 SIS:ON}
Please answer in Swift. :)
Thanks!
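One likely culprit (my reading; the post doesn't confirm it): the metadata dictionary is keyed by the constants' string values, and the value of kCGImagePropertyOrientation is "Orientation", so the literal string "kCGImagePropertyOrientation" never matches anything. A sketch using the constant itself:

import AVFoundation
import ImageIO

func orientation(of photo: AVCapturePhoto) -> CGImagePropertyOrientation? {
    // Key by the constant's value ("Orientation"), not by its Swift name.
    guard let raw = photo.metadata[kCGImagePropertyOrientation as String] as? UInt32 else {
        return nil
    }
    return CGImagePropertyOrientation(rawValue: raw)
}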

I have a view controller that lets the user take pictures. I set the avcapture bounds to the bounds of a view on the screen.
Above this view I have a collection view, so the user can capture multiple pictures, which then get added to the collection view above.
I'm having trouble getting the correct orientation to show up in the preview above.
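For reference, a common mapping from device orientation to connection orientation, usually applied to the video connection before capturing (a sketch of my own, not from the post; the landscape cases are deliberately swapped because the sensor is rotated relative to the device). The question's code follows after it.

import AVFoundation
import UIKit

// Hypothetical helper: returns the capture orientation matching the device.
func currentVideoOrientation() -> AVCaptureVideoOrientation {
    switch UIDevice.current.orientation {
    case .landscapeLeft:      return .landscapeRight
    case .landscapeRight:     return .landscapeLeft
    case .portraitUpsideDown: return .portraitUpsideDown
    default:                  return .portrait
    }
}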
The code is below:
@IBOutlet weak var imagePreviews: UICollectionView!
@IBOutlet weak var imgPreview: UIView!

var session: AVCaptureSession?
var stillImageOutput: AVCaptureStillImageOutput?
var videoPreviewLayer: AVCaptureVideoPreviewLayer?

var images: [UIImage] = [UIImage]()

var isLandscapeLeft: Bool = false
var isLandscapeRight: Bool = false
var isPortrait: Bool = false
var isPortraitUpsideDown: Bool = false

@IBAction func capture(_ sender: UIButton)
{
    if let videoConnection = stillImageOutput?.connection(withMediaType: AVMediaTypeVideo)
    {
        stillImageOutput?.captureStillImageAsynchronously(from: videoConnection, completionHandler: { (sampleBuffer, error) in
            if sampleBuffer != nil {
                if let …

It seems the AVSystemController_SystemVolumeDidChangeNotification event fires on the iPhone 5 every time an AVCaptureSession is started.
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(volumeChanged:) name:@"AVSystemController_SystemVolumeDidChangeNotification" object:nil];
Does anyone know how to work around this? I'm using this observer to take a picture with the volume buttons (I know it's a private API, but it has the same functionality as the default Camera app, and Apple usually turns a blind eye to that...), but on the iPhone 5 only, a picture gets taken every time the camera starts.
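Since the notification is a private, undocumented one, any workaround is a guess; one I can sketch is to note when the session starts and swallow volume notifications that arrive within a short grace period:

import Foundation

class VolumeButtonHandler: NSObject {
    private var sessionStartDate = Date.distantPast

    // Call this right after the capture session starts running.
    func sessionDidStart() {
        sessionStartDate = Date()
    }

    @objc func volumeChanged(_ notification: Notification) {
        // Ignore the spurious notification fired while the session spins up.
        guard Date().timeIntervalSince(sessionStartDate) > 1.0 else { return }
        takePicture()
    }

    private func takePicture() { /* capture here */ }
}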

Is there a way to check whether a specific AVCaptureSessionPreset is supported on an iOS device? I want to set the resolution of an AVCaptureSession, but I don't know how to check whether the device is able to capture camera frames at the selected resolution.
AVCaptureSession * _session;
NSString * _sessionPreset;
_sessionPreset = AVCaptureSessionPreset1920x1080;
// Here I would like to perform a check.
[_session setSessionPreset:_sessionPreset];
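There is an API for exactly this check: AVCaptureSession's canSetSessionPreset: (and, per device, AVCaptureDevice's supportsAVCaptureSessionPreset:). A Swift sketch; note the answer is only meaningful once the inputs have been added:

import AVFoundation

let session = AVCaptureSession()
// ... add the camera input first; the check depends on it ...

let preset = AVCaptureSession.Preset.hd1920x1080
if session.canSetSessionPreset(preset) {
    session.sessionPreset = preset
} else {
    session.sessionPreset = .high // fall back to the best available quality
}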

I'm trying to take a photo using a custom view (not UIImagePickerController), but whenever I try to take a picture the app crashes, throwing this error:
Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[AVCaptureStillImageOutput captureStillImageAsynchronouslyFromConnection:completionHandler:] - inactive/invalid connection passed.'
Here is my takePhoto() function that causes the error:
func takePhoto(sender: UIButton!){
    var still: AVCaptureStillImageOutput = AVCaptureStillImageOutput()
    var connection: AVCaptureConnection = AVCaptureConnection(inputPorts: self.input.ports, output: still)
    if(connection.enabled){
        still.captureStillImageAsynchronouslyFromConnection(connection, completionHandler: {(buffer: CMSampleBuffer!, error: NSError!) -> Void in
            println("picture taken") // this never gets executed
        })
    }
}
I've also tried setting the connection variable in takePhoto() to:

self.output.connections[0] as AVCaptureConnection

as well as

(self.session.outputs[0] as AVCaptureOutput).connections[0] as AVCaptureConnection

and I get the same result.
On the same view as the take-photo button there is also a live preview from the camera (which works fine):
func setupCamera(){
    self.session = AVCaptureSession()
    self.session.sessionPreset = AVCaptureSessionPreset640x480
    self.session.beginConfiguration()
    self.session.commitConfiguration()
    var error: NSError?
    var devices: [AVCaptureDevice] …
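The usual cause of that exception, as far as I can tell from the snippet: the AVCaptureStillImageOutput created inside takePhoto() is never added to the session, so no connection attached to it ever becomes active. A sketch of the fix (my reconstruction, in current Swift syntax): add the output once during setup and ask it for its connection instead of building an AVCaptureConnection by hand.

import AVFoundation

let session = AVCaptureSession()
// ... add the camera input, exactly as in setupCamera() ...

let still = AVCaptureStillImageOutput()
if session.canAddOutput(still) {
    session.addOutput(still) // adding the output is what activates its connections
}
session.startRunning()

// In the button handler: ask the output for its video connection.
if let connection = still.connection(with: .video) {
    still.captureStillImageAsynchronously(from: connection) { buffer, error in
        print("picture taken")
    }
}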
ios ×8
swift ×6
avcapture ×3
objective-c ×3
avfoundation ×2
iphone ×2
cgimage ×1
iphone-5 ×1
metal ×1
uiimage ×1
video ×1