I am recording a movie with AVCaptureSession and AVCaptureMovieFileOutput. I am also recording acceleration data and trying to align it with the video.
I am trying to find a way to get the time at which the video file recording started. Currently I am doing the following:
currentDate = [NSDate date];
[output startRecordingToOutputFileURL:fileUrl recordingDelegate:self];
However, according to my tests, the video recording starts about 0.12 seconds before startRecordingToOutputFileURL is called. I assume this is because the various video buffers are already filled with data that then gets added to the file.
Is there any way to get the actual NSDate of the first frame of the video?
I had the same problem, and I finally found the answer. I will write out all the code below, but the missing piece I was looking for was:
self.captureSession.masterClock!.time
The masterClock of the captureSession is the clock that every buffer's relative time (its presentationTimeStamp) is based on.
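The answer below uses this clock to line buffers up with the moment recording was requested. If, like the asker, you also need wall-clock Dates for aligning sensor samples, a minimal sketch (my own extrapolation, not part of the original answer) is to capture a Date and a masterClock time at the same instant and convert timestamps through that anchor:

import AVFoundation
import CoreMedia

// Sketch: build a converter that maps a buffer's presentationTimeStamp to a
// wall-clock Date by anchoring Date() and the masterClock at the same instant.
func makeWallClockConverter(for session: AVCaptureSession) -> (CMTime) -> Date {
    let anchorDate = Date()
    let anchorClockTime = session.masterClock!.time
    return { presentationTimeStamp in
        let elapsed = CMTimeGetSeconds(CMTimeSubtract(presentationTimeStamp, anchorClockTime))
        return anchorDate.addingTimeInterval(elapsed)
    }
}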
Full code and explanation
The first thing you want to do is convert the AVCaptureMovieFileOutput into an AVCaptureVideoDataOutput and an AVCaptureAudioDataOutput. So make sure your class implements AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate. They share the same callback function, so add it to your class (I will get to its implementation later):
let videoDataOutput = AVCaptureVideoDataOutput()
let audioDataOutput = AVCaptureAudioDataOutput()

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // I will get to this
}
Add the outputs to the capture session. My code looks like this (you can change videoOrientation and other details if you need to):
if captureSession.canAddInput(cameraInput)
    && captureSession.canAddInput(micInput)
    // && captureSession.canAddOutput(self.movieFileOutput)
    && captureSession.canAddOutput(self.videoDataOutput)
    && captureSession.canAddOutput(self.audioDataOutput)
{
    captureSession.beginConfiguration()
    captureSession.addInput(cameraInput)
    captureSession.addInput(micInput)
    // self.captureSession.addOutput(self.movieFileOutput)
    let videoAudioDataOutputQueue = DispatchQueue(label: "com.myapp.queue.video-audio-data-output") // Choose any label you want
    self.videoDataOutput.alwaysDiscardsLateVideoFrames = false
    self.videoDataOutput.setSampleBufferDelegate(self, queue: videoAudioDataOutputQueue)
    self.captureSession.addOutput(self.videoDataOutput)
    self.audioDataOutput.setSampleBufferDelegate(self, queue: videoAudioDataOutputQueue)
    self.captureSession.addOutput(self.audioDataOutput)
    if let connection = self.videoDataOutput.connection(with: .video) {
        if connection.isVideoStabilizationSupported {
            connection.preferredVideoStabilizationMode = .auto
        }
        if connection.isVideoOrientationSupported {
            connection.videoOrientation = .portrait
        }
    }
    self.captureSession.commitConfiguration()
    DispatchQueue.global(qos: .userInitiated).async {
        self.captureSession.startRunning()
    }
}
To write the video the way AVCaptureMovieFileOutput does, you can use an AVAssetWriter. So add the following to your class:
var videoWriter: AVAssetWriter?
var videoWriterInput: AVAssetWriterInput?
var audioWriterInput: AVAssetWriterInput?

private func setupWriter(url: URL) {
    self.videoWriter = try! AVAssetWriter(outputURL: url, fileType: AVFileType.mov)
    self.videoWriterInput = AVAssetWriterInput(mediaType: .video, outputSettings: self.videoDataOutput.recommendedVideoSettingsForAssetWriter(writingTo: AVFileType.mov))
    self.videoWriterInput!.expectsMediaDataInRealTime = true
    self.videoWriter!.add(self.videoWriterInput!)
    self.audioWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: self.audioDataOutput.recommendedAudioSettingsForAssetWriter(writingTo: AVFileType.mov))
    self.audioWriterInput!.expectsMediaDataInRealTime = true
    self.videoWriter!.add(self.audioWriterInput!)
    self.videoWriter!.startWriting()
}
Every time you want to record, you first need to set up the writer. The startWriting function does not actually start writing to the file; it prepares the writer so that writing can begin shortly.
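For orientation, here is the writer lifecycle condensed from the code above (a sketch only; url and firstPTS stand in for the recording URL and the first buffer's presentationTimeStamp):

import AVFoundation

func writerLifecycle(url: URL, firstPTS: CMTime) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mov)
    // ... add the AVAssetWriterInputs here, as in setupWriter(url:) ...
    writer.startWriting()                       // status becomes .writing, but nothing is written to disk yet
    writer.startSession(atSourceTime: firstPTS) // anchors the file's timeline to the first buffer
    // ... append sample buffers while each input's isReadyForMoreMediaData is true ...
    writer.finishWriting { /* the file is complete once this handler fires */ }
}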
Next we add the code for starting and stopping a recording. Note that I still need to fix stopRecording: it actually finishes the recording slightly too early, because the buffers always lag behind. But maybe that does not matter for your use case.
var isRecording = false
var recordFromTime: CMTime?
var sessionAtSourceTime: CMTime?

func startRecording(url: URL) {
    guard !self.isRecording else { return }
    self.isRecording = true
    self.sessionAtSourceTime = nil
    self.recordFromTime = self.captureSession.masterClock!.time // This is very important, because based on this time we will start recording appropriately
    self.setupWriter(url: url)
    // You can let a delegate or something know recording has started now
}

func stopRecording() {
    guard self.isRecording else { return }
    self.isRecording = false
    self.videoWriter?.finishWriting { [weak self] in
        self?.sessionAtSourceTime = nil
        guard let url = self?.videoWriter?.outputURL else { return }
        // Notify finished recording and pass url if needed
    }
}
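A hypothetical usage example of the two functions above; the temporary-file URL is my choice, not something the answer prescribes:

// Record into a throwaway file in the temporary directory (hypothetical URL).
let url = FileManager.default.temporaryDirectory
    .appendingPathComponent(UUID().uuidString)
    .appendingPathExtension("mov")
startRecording(url: url)
// ... capture for as long as needed, then:
stopRecording()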
Finally, the implementation of the callback we mentioned at the beginning of this post:
private func canWrite() -> Bool {
    return self.isRecording && self.videoWriter != nil && self.videoWriter!.status == .writing
}

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard CMSampleBufferDataIsReady(sampleBuffer), self.canWrite() else { return }

    // sessionAtSourceTime is the first buffer we will write to the file
    if self.sessionAtSourceTime == nil {
        // Make sure we start by capturing the videoDataOutput (if we start with the audio the file gets corrupted)
        guard output == self.videoDataOutput else { return }
        // Make sure we don't start recording until the buffer reaches the correct time (buffer is always behind, this will fix the difference in time)
        guard sampleBuffer.presentationTimeStamp >= self.recordFromTime! else { return }
        self.sessionAtSourceTime = sampleBuffer.presentationTimeStamp
        self.videoWriter!.startSession(atSourceTime: sampleBuffer.presentationTimeStamp)
    }

    if output == self.videoDataOutput {
        if self.videoWriterInput!.isReadyForMoreMediaData {
            self.videoWriterInput!.append(sampleBuffer)
        }
    } else if output == self.audioDataOutput {
        if self.audioWriterInput!.isReadyForMoreMediaData {
            self.audioWriterInput!.append(sampleBuffer)
        }
    }
}
So the key to fixing the time difference between the start of the recording and your own code is self.captureSession.masterClock!.time. We watch each buffer's relative time until it reaches the time at which you started recording. If you also want to fix the end time, just add a recordUntilTime variable and check against it in the didOutput sampleBuffer method, as sketched below.
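Here is a rough sketch of that end-time fix; recordUntilTime and requestStopRecording are hypothetical names layered on top of the code above, not code from the original answer:

var recordUntilTime: CMTime? // hypothetical, see above

func requestStopRecording() {
    // Remember when the stop was requested; buffers lag behind, so frames
    // whose presentationTimeStamp is still before this time should be written.
    recordUntilTime = captureSession.masterClock!.time
}

// Then, at the top of captureOutput(_:didOutput:from:), before appending:
//
//     if let until = recordUntilTime,
//        sampleBuffer.presentationTimeStamp >= until {
//         stopRecording() // the finishWriting version shown above
//         return
//     }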