Saving AVCaptureVideoDataOutput to a movie file with AVAssetWriter in Swift

Har*_*143 6 xcode avfoundation avassetwriter swift swift4

I've searched all over the web but can't seem to find the tutorial or help I need.

Using AVFoundation and the Dlib library, I created an app that detects faces in live video from the phone's front camera, using the Shape Predictor 68 Face Landmarks model. To do this, I'm fairly sure I have to use AVCaptureVideoDataOutput rather than AVMovieFileOutput, so that each frame can be analyzed.

Now I'd like to be able to save the video to a file, and from what I've gathered I should use AVAssetWriter to do this. I just can't find much information anywhere on how to get started with it. I'm completely new to Swift and iOS programming, and I can't really make much sense of the Apple documentation.

Any help would be greatly appreciated!

Har*_*143 12

I was able to figure out how to use AVAssetWriter. In case anyone else needs help, the code I used is below:

func setUpWriter() {

    do {
        outputFileLocation = videoFileLocation()
        videoWriter = try AVAssetWriter(outputURL: outputFileLocation!, fileType: AVFileType.mov)

        // add video input
        videoWriterInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: [
            AVVideoCodecKey : AVVideoCodecType.h264,
            AVVideoWidthKey : 720,
            AVVideoHeightKey : 1280,
            AVVideoCompressionPropertiesKey : [
                AVVideoAverageBitRateKey : 2300000,
                ],
            ])

        videoWriterInput.expectsMediaDataInRealTime = true

        if videoWriter.canAdd(videoWriterInput) {
            videoWriter.add(videoWriterInput)
            print("video input added")
        } else {
            print("no input added")
        }

        // add audio input
        audioWriterInput = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings: nil)

        audioWriterInput.expectsMediaDataInRealTime = true

        if videoWriter.canAdd(audioWriterInput!) {
            videoWriter.add(audioWriterInput!)
            print("audio input added")
        }


        videoWriter.startWriting()
    } catch let error {
        debugPrint(error.localizedDescription)
    }


}

func canWrite() -> Bool {
    return isRecording && videoWriter != nil && videoWriter?.status == .writing
}


// video file location method
func videoFileLocation() -> URL {
    let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0] as NSString
    let videoOutputUrl = URL(fileURLWithPath: documentsPath.appendingPathComponent("videoFile")).appendingPathExtension("mov")
    do {
        if FileManager.default.fileExists(atPath: videoOutputUrl.path) {
            try FileManager.default.removeItem(at: videoOutputUrl)
            print("file removed")
        }
    } catch {
        print(error)
    }

    return videoOutputUrl
}
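For context, the code above assumes the capture session already has video and audio data outputs wired to the same delegate. A minimal setup might look like the sketch below; the property names `videoDataOutput` and `audioDataOutput` match those used in the answer, but the exact session configuration is my assumption, not part of the original answer:

```swift
import AVFoundation

// Hypothetical setup sketch: attach video and audio data outputs
// to an existing AVCaptureSession so the delegate above receives
// both kinds of sample buffers on one serial queue.
func setUpDataOutputs(session: AVCaptureSession,
                      delegate: AVCaptureVideoDataOutputSampleBufferDelegate & AVCaptureAudioDataOutputSampleBufferDelegate) {
    let queue = DispatchQueue(label: "sample.buffer.queue")

    let videoDataOutput = AVCaptureVideoDataOutput()
    videoDataOutput.setSampleBufferDelegate(delegate, queue: queue)
    if session.canAddOutput(videoDataOutput) {
        session.addOutput(videoDataOutput)
    }

    let audioDataOutput = AVCaptureAudioDataOutput()
    audioDataOutput.setSampleBufferDelegate(delegate, queue: queue)
    if session.canAddOutput(audioDataOutput) {
        session.addOutput(audioDataOutput)
    }
}
```

Using a single serial queue for both outputs keeps the sample buffers arriving in order, which matters when both feed the same AVAssetWriter.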

// MARK: AVCaptureVideoDataOutputSampleBufferDelegate
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

    let writable = canWrite()

    if writable,
        sessionAtSourceTime == nil {
        // start writing
        sessionAtSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        videoWriter.startSession(atSourceTime: sessionAtSourceTime!)
        //print("Writing")
    }

    if output == videoDataOutput {
        connection.videoOrientation = .portrait

        if connection.isVideoMirroringSupported {
            connection.isVideoMirrored = true
        }
    }

    if writable,
        output == videoDataOutput,
        (videoWriterInput.isReadyForMoreMediaData) {
        // write video buffer
        videoWriterInput.append(sampleBuffer)
        //print("video buffering")
    } else if writable,
        output == audioDataOutput,
        (audioWriterInput.isReadyForMoreMediaData) {
        // write audio buffer
        audioWriterInput?.append(sampleBuffer)
        //print("audio buffering")
    }

}

// MARK: Start recording
func start() {
    guard !isRecording else { return }
    isRecording = true
    sessionAtSourceTime = nil
    setUpWriter()
    print(isRecording)
    print(videoWriter)
    if videoWriter.status == .writing {
        print("status writing")
    } else if videoWriter.status == .failed {
        print("status failed")
    } else if videoWriter.status == .cancelled {
        print("status cancelled")
    } else if videoWriter.status == .unknown {
        print("status unknown")
    } else {
        print("status completed")
    }

}

// MARK: Stop recording
func stop() {
    guard isRecording else { return }
    isRecording = false
    videoWriterInput.markAsFinished()
    print("marked as finished")
    videoWriter.finishWriting { [weak self] in
        self?.sessionAtSourceTime = nil
    }
    //print("finished writing \(self.outputFileLocation)")
    captureSession.stopRunning()
    performSegue(withIdentifier: "videoPreview", sender: nil)
}
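One caveat with stop() as written: finishWriting is asynchronous, so the session is stopped and the segue fires before the file is actually finalized. A sketch of a safer variant (same property names as above, assuming the preview screen reads the file at outputFileLocation):

```swift
func stop() {
    guard isRecording else { return }
    isRecording = false
    videoWriterInput.markAsFinished()
    videoWriter.finishWriting { [weak self] in
        guard let self = self else { return }
        self.sessionAtSourceTime = nil
        DispatchQueue.main.async {
            // Only at this point is the movie file at outputFileLocation complete.
            self.captureSession.stopRunning()
            self.performSegue(withIdentifier: "videoPreview", sender: nil)
        }
    }
}
```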

I've now run into another problem: this solution doesn't work when I use AVCaptureMetadataOutput, AVCaptureVideoDataOutput and AVCaptureAudioDataOutput together. The app crashes when I add AVCaptureAudioDataOutput.

  • I've since solved this. In the captureOutput function I just had to declare that face detection should only run when the capture output is the video data output, e.g. if output == videoDataOutput { doFaceDetection() }. Before I added this if clause, audioDataOutput interfered with the face detection. (2 upvotes)
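Based on that comment, the guard might look like the fragment below inside captureOutput; doFaceDetection(on:) is a placeholder name for the commenter's Dlib-based detection call, not an actual function from the answer:

```swift
func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    // Only run face detection on video frames; otherwise audio sample
    // buffers would also be fed to the detector and interfere with it.
    if output == videoDataOutput {
        doFaceDetection(on: sampleBuffer) // placeholder for the Dlib call
    }

    // ... writing logic as shown in the answer above ...
}
```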