iOS: Brightening the camera stream for ARKit and Vision

Tags: ios, swift, brightness, apple-vision, arkit

I am currently working on a feature of my app that recognizes faces in the camera stream. I am reading landmark features such as the mouth. Everything works fine when the lighting is adequate, but in the dark both ARKit and Vision run into trouble. Is there a way to automatically adapt the brightness of the stream to the ambient lighting so that the feature keeps working?

My research suggests that exposure time is central to image brightness, so I experimented with a function that adapts the exposure duration: whenever no face is recognized and the image is dark, the exposure duration is increased by 0.01, similar to the function in this article. But either it had no effect, or the image became so bright that the face could no longer be recognized. Because of that I tried the automatic version, `captureDevice.focusMode = .continuousAutoFocus`, but I did not notice any significant improvement.
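For context, the manual adaptation step looked roughly like the sketch below (`increaseExposure` is my own helper name, and the clamping against the format's supported maximum is an assumption; without it, out-of-range durations raise an exception):

```swift
import AVFoundation

/// Sketch: bump the exposure duration by 0.01 s when no face was found
/// in a dark frame. The clamp against maxExposureDuration is an assumption.
func increaseExposure(on device: AVCaptureDevice) {
    guard device.isExposureModeSupported(.custom) else { return }
    do {
        try device.lockForConfiguration()
        let current = device.exposureDuration.seconds
        let proposed = CMTime(seconds: current + 0.01,
                              preferredTimescale: 1_000_000)
        // Never exceed what the active format supports.
        let clamped = min(proposed, device.activeFormat.maxExposureDuration)
        // Keep the current ISO; only the duration changes.
        device.setExposureModeCustom(duration: clamped,
                                     iso: AVCaptureDevice.currentISO,
                                     completionHandler: nil)
        device.unlockForConfiguration()
    } catch {
        print(error.localizedDescription)
    }
}
```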

Here is my code:

Vision API

        let devices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .front).devices

        // 2: Select a capture device
        do {
            if let captureDevice = devices.first {
                let captureDeviceInput = try AVCaptureDeviceInput(device: captureDevice)

                // Configure all device settings under a single lock/unlock pair,
                // so the lock is always released even if later checks fail.
                try captureDevice.lockForConfiguration()

                if captureDevice.isLowLightBoostSupported {
                    captureDevice.automaticallyEnablesLowLightBoostWhenAvailable = true
                }

                if captureDevice.isExposureModeSupported(.continuousAutoExposure) {
                    captureDevice.exposureMode = .continuousAutoExposure
                }

                if captureDevice.isFocusModeSupported(.continuousAutoFocus) {
                    captureDevice.focusMode = .continuousAutoFocus
                }

                captureDevice.automaticallyAdjustsVideoHDREnabled = true

                captureDevice.unlockForConfiguration()

                avSession.addInput(captureDeviceInput)
            }
        } catch {
            print(error.localizedDescription)
        }

        let captureOutput = AVCaptureVideoDataOutput()
        captureOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        avSession.addOutput(captureOutput)
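One way to decide whether the scene is actually dark (rather than the face simply being turned away) would be to measure the frame's luminance in the sample-buffer delegate. This is a sketch, not part of my current code; it assumes the output delivers a bi-planar YpCbCr pixel format (the capture default), and the sampling stride of 32 is an arbitrary speed/accuracy trade-off:

```swift
import AVFoundation
import CoreVideo

/// Sketch: estimate average brightness (0...255) from the Y (luma) plane
/// of a captured frame, sampling every 32nd pixel for speed.
func averageLuma(of sampleBuffer: CMSampleBuffer) -> Double? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    // Plane 0 is the luma plane in 420YpCbCr formats.
    guard let base = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0) else { return nil }
    let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)
    let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
    let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
    let pixels = base.assumingMemoryBound(to: UInt8.self)

    var sum = 0, count = 0
    for y in stride(from: 0, to: height, by: 32) {
        for x in stride(from: 0, to: width, by: 32) {
            sum += Int(pixels[y * bytesPerRow + x])
            count += 1
        }
    }
    return count > 0 ? Double(sum) / Double(count) : nil
}
```

A low average (the exact threshold would have to be tuned) could then gate any manual exposure adjustment, so the stream is only brightened when the frame really is dark.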

ARKit

func renderer(_ renderer: SCNSceneRenderer,
              didUpdate node: SCNNode,
              for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor,
          let faceGeometry = node.geometry as? ARSCNFaceGeometry else {
        return
    }

    node.camera?.bloomThreshold = 1

    node.camera?.wantsHDR = true
    node.camera?.wantsExposureAdaptation = true
    node.camera?.exposureAdaptationBrighteningSpeedFactor = 0.2

    node.focusBehavior = .focusable
    faceGeometry.update(from: faceAnchor.geometry)
    expression(anchor: faceAnchor)

...
}
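On the ARKit side, one possible signal for "too dark" is the session's per-frame light estimate. This is only a sketch; `ambientIntensity` is reported in lumens with roughly 1000 corresponding to a neutrally lit scene, so the 200 threshold below is an assumed cut-off that would need tuning:

```swift
import ARKit

/// Sketch: use ARKit's per-frame light estimate to detect a dark scene.
/// The 200-lumen threshold is an assumption (~1000 is a neutrally lit scene).
func sceneIsDark(in session: ARSession) -> Bool {
    guard let estimate = session.currentFrame?.lightEstimate else { return false }
    return estimate.ambientIntensity < 200
}
```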