Capturing video with a custom resolution using AVFoundation

ain*_*oss 9 macos camera avfoundation swift

I'm writing an app on OS X that captures frames from the camera. Is it possible to set the capture settings using the AVCaptureDevice.activeFormat property? I've tried this, but it doesn't work (the session preset overrides it).

I found that on iOS it is possible to set the AVCaptureSession's sessionPreset to AVCaptureSessionPresetInputPriority.

The main goal is to select a more fine-grained video resolution than the presets provide.

ARG*_*Geo 10

In macOS (unlike iOS), the capture session can automatically change the capture format after you make changes. To prevent automatic capture-format changes, use the lockForConfiguration() method. Then call beginConfiguration(), set the properties (for example, pick one of the dozen presets, such as AVCaptureSessionPresetiFrame960x540), and call commitConfiguration(). Finally, you need to call unlockForConfiguration() after changing the device's properties.

Or follow these steps:

  1. Call lockForConfiguration() to acquire exclusive access to the device's configuration properties.

  2. Change the device's activeFormat property (as described above and below).

  3. Begin capture with the session's startRunning() method.

  4. Unlock the device with unlockForConfiguration().

The startRunning() and stopRunning() methods must be called to start and stop, respectively, the flow of data from the inputs to the outputs.

You must also call lockForConfiguration() before calling the AVCaptureSession's startRunning() method; otherwise, the session's preset will override the active format you selected on the capture device.
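The ordering described above can be sketched as follows (a hypothetical helper, untested here; it assumes you already have a session, a device, and a format picked from device.formats):

```swift
import AVFoundation

// Sketch only: apply a chosen format and start the session while the
// configuration lock is still held, so the session preset cannot
// override the device's activeFormat.
func applyFormat(_ format: AVCaptureDevice.Format,
                 to device: AVCaptureDevice,
                 in session: AVCaptureSession) throws {
    try device.lockForConfiguration()   // exclusive access to the device
    session.beginConfiguration()
    device.activeFormat = format        // must come from device.formats
    session.commitConfiguration()
    session.startRunning()              // start while still locked...
    device.unlockForConfiguration()     // ...then release the lock
}
```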

However, if you need the device properties to remain unchanged, you can hold on to the lock without releasing it.

Here are the details from the developer documentation for lockForConfiguration():

If you attempt to set the active format to one that is not present in the device's available formats, an invalidArgumentException is thrown.


There is also an explanation of how to change the properties: macOS AVFoundation Video Capture

AVCaptureDevice has two relevant properties: formats and activeFormat. formats returns an NSArray of AVCaptureDeviceFormat containing all the formats the camera exposes. You select any one format from this list and set it as activeFormat. Make sure you set the format only after you have acquired exclusive access to the device by calling AVCaptureDevice's lockForConfiguration(). After setting the format, release the lock with unlockForConfiguration(). Then start the AVCaptureSession, and it will give you video frames in the format you set.
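As a sketch of that selection step (hypothetical code, untested here; it assumes a macOS app that already has camera permission), you might pick the format with the most pixels like this:

```swift
import AVFoundation

// Sketch: enumerate the camera's formats and activate the one with the
// most pixels. Assigning a format that is not in device.formats throws
// an invalid-argument exception, so we only pick from that list.
guard let device = AVCaptureDevice.default(for: .video) else {
    fatalError("no video device")
}

let largest = device.formats.max { a, b in
    let da = CMVideoFormatDescriptionGetDimensions(a.formatDescription)
    let db = CMVideoFormatDescriptionGetDimensions(b.formatDescription)
    return Int(da.width) * Int(da.height) < Int(db.width) * Int(db.height)
}

if let format = largest {
    do {
        try device.lockForConfiguration()
        device.activeFormat = format
        device.unlockForConfiguration()
    } catch {
        print("could not lock device: \(error)")
    }
}
```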

AVCaptureDeviceFormat is a wrapper around CMFormatDescription. CMVideoFormatDescription is a concrete subclass of CMFormatDescription. Use CMVideoFormatDescriptionGetDimensions() to get the width and height of a given format. Use CMFormatDescriptionGetMediaSubType() to get the video codec. For raw formats, the video codec is mostly yuvs or 2vuy. For compressed formats, it's h264, dmb1 (mjpeg), and so on.
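The media subtype comes back as a FourCharCode (a packed UInt32), so a small helper is handy for printing codes like yuvs or 2vuy. This helper is not part of AVFoundation; it's just plain Swift:

```swift
import Foundation

// Decode a FourCharCode, such as the value returned by
// CMFormatDescriptionGetMediaSubType(), into a readable 4-character string.
func fourCCString(_ code: UInt32) -> String {
    let bytes: [UInt8] = [
        UInt8((code >> 24) & 0xFF),
        UInt8((code >> 16) & 0xFF),
        UInt8((code >> 8)  & 0xFF),
        UInt8(code         & 0xFF),
    ]
    return String(bytes: bytes, encoding: .ascii) ?? "????"
}

print(fourCCString(0x79757673)) // "yuvs"
print(fourCCString(0x32767579)) // "2vuy"
```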

Here is a macOS example written in Swift 4.1:

import Cocoa
import AVFoundation

class ViewController: NSViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    override func viewDidLoad() {
        super.viewDidLoad()
        setupCameraSession()
    }   
    override func viewDidAppear() {
        super.viewDidAppear()
        view.layer?.addSublayer(previewLayer)
        cameraSession.startRunning()
    }   
    lazy var cameraSession: AVCaptureSession = {
        let session = AVCaptureSession()
        session.sessionPreset = AVCaptureSession.Preset.iFrame960x540
        return session
    }()    
    lazy var previewLayer: AVCaptureVideoPreviewLayer = {
        let preview =  AVCaptureVideoPreviewLayer(session: self.cameraSession)
        preview.bounds = CGRect(x: 0, y: 0, width: self.view.bounds.width, height: self.view.bounds.height)
        preview.position = CGPoint(x: self.view.bounds.midX, y: self.view.bounds.midY)
        preview.videoGravity = AVLayerVideoGravity.resize
        return preview
    }()

    func setupCameraSession() {
        let captureDevice = AVCaptureDevice.default(for: AVMediaType.video)

        do {
            let deviceInput = try AVCaptureDeviceInput(device: captureDevice!)

            guard let camera = AVCaptureDevice.default(for: .video) else {
                return
            }
            // acquire exclusive access to the device’s properties
            try camera.lockForConfiguration()            
            cameraSession.beginConfiguration()

            camera.focusMode = .continuousAutoFocus
            camera.flashMode = .on
            camera.whiteBalanceMode = .continuousAutoWhiteBalance

            if (cameraSession.canAddInput(deviceInput) == true) {
                cameraSession.addInput(deviceInput)
            }

            let dataOutput = AVCaptureVideoDataOutput()
            dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString) : NSNumber(value: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange as UInt32)] as [String : Any]
            dataOutput.alwaysDiscardsLateVideoFrames = true

            if (cameraSession.canAddOutput(dataOutput) == true) {
                cameraSession.addOutput(dataOutput)
            }

            cameraSession.commitConfiguration()
            camera.unlockForConfiguration()

            let queue = DispatchQueue(label: "com.blah-blah.yourLabel")
            dataOutput.setSampleBufferDelegate(self, queue: queue)

        } catch let error as NSError {
            NSLog("\(error), \(error.localizedDescription)")
        }
    }

    func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // Here you collect each frame and process it
    }    
    func captureOutput(_ captureOutput: AVCaptureOutput, didDrop sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // Here you can count how many frames are dropped
    }
}

  • OK, I found a way to make it work. If I explicitly do *not* call unlockForConfiguration(), but do call lockForConfiguration(), it works. This is the opposite of what the documentation says. (2 upvotes)
  • I did some more experimenting... It seems the key is to call lockForConfiguration() after AVCaptureSession.startRunning(). Then you can call unlockForConfiguration() without the configuration/format you chose being discarded. This completely defies logical expectation; I would think most (maybe all?) device-based APIs pick a configuration before starting an IO session. (2 upvotes)