Sac*_*cha · 5 · Tags: objective-c, core-audio, ios, avassetwriter, avasset
I'm trying to reverse audio in iOS with AVAsset and AVAssetWriter. The following code works, but the output file is shorter than the input. For example, the input file has a duration of 1:59, but the output is 1:50 with the same audio content.
- (void)reverse:(AVAsset *)asset
{
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:nil];
    AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

    NSMutableDictionary *audioReadSettings = [NSMutableDictionary dictionary];
    [audioReadSettings setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM]
                         forKey:AVFormatIDKey];

    AVAssetReaderTrackOutput *readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:audioReadSettings];
    [reader addOutput:readerOutput];
    [reader startReading];

    NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                    [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
                                    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                                    [NSNumber numberWithInt:128000], AVEncoderBitRateKey,
                                    [NSData data], AVChannelLayoutKey,
                                    nil];

    AVAssetWriterInput *writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio
                                                                     outputSettings:outputSettings];

    NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"out.m4a"];
    NSURL *exportURL = [NSURL fileURLWithPath:exportPath];

    NSError *writerError = nil;
    AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:exportURL
                                                      fileType:AVFileTypeAppleM4A
                                                         error:&writerError];
    [writerInput setExpectsMediaDataInRealTime:NO];
    [writer addInput:writerInput];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];

    CMSampleBufferRef sample = [readerOutput copyNextSampleBuffer];
    NSMutableArray *samples = [[NSMutableArray alloc] init];
    while (sample != NULL) {
        sample = [readerOutput copyNextSampleBuffer];
        if (sample == NULL)
            continue;
        [samples addObject:(__bridge id)(sample)];
        CFRelease(sample);
    }

    NSArray *reversedSamples = [[samples reverseObjectEnumerator] allObjects];
    for (id reversedSample in reversedSamples) {
        if (writerInput.readyForMoreMediaData) {
            [writerInput appendSampleBuffer:(__bridge CMSampleBufferRef)(reversedSample)];
        }
        else {
            [NSThread sleepForTimeInterval:0.05];
        }
    }

    [writerInput markAsFinished];

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
    dispatch_async(queue, ^{
        [writer finishWriting];
    });
}
Update:
If I write the samples directly in the first while loop, everything works fine (even with the writerInput.readyForMoreMediaData check). In that case the resulting file has exactly the same duration as the original. But if I write the same samples from the reversed NSArray, the result is shorter.
It is not enough to write the audio samples out in reverse order. The sample data itself needs to be reversed, and its timing information needs to be set correctly.
In Swift, we created an extension on AVAsset for this.
The samples have to be processed as uncompressed samples. To that end, create audio reader settings using kAudioFormatLinearPCM:
let kAudioReaderSettings = [
AVFormatIDKey: Int(kAudioFormatLinearPCM) as AnyObject,
AVLinearPCMBitDepthKey: 16 as AnyObject,
AVLinearPCMIsBigEndianKey: false as AnyObject,
AVLinearPCMIsFloatKey: false as AnyObject,
AVLinearPCMIsNonInterleaved: false as AnyObject]
Use our AVAsset extension method audioReader:
func audioReader(outputSettings: [String : Any]?) -> (audioTrack:AVAssetTrack?, audioReader:AVAssetReader?, audioReaderOutput:AVAssetReaderTrackOutput?) {
if let audioTrack = self.tracks(withMediaType: .audio).first {
if let audioReader = try? AVAssetReader(asset: self) {
let audioReaderOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: outputSettings)
return (audioTrack, audioReader, audioReaderOutput)
}
}
return (nil, nil, nil)
}
let (_, audioReader, audioReaderOutput) = self.audioReader(outputSettings: kAudioReaderSettings)
This creates the audioReader (AVAssetReader) and audioReaderOutput (AVAssetReaderTrackOutput) for reading the audio samples.
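Note that the audioReader method creates the track output but never attaches it to the reader; before reading can start, the output still has to be added. A minimal sketch of that missing step (my addition, using the standard AVAssetReader API and unwrapping the optionals the method returns):
guard let audioReader = audioReader,
      let audioReaderOutput = audioReaderOutput,
      audioReader.canAdd(audioReaderOutput) else {
    return // no audio track, or the output is incompatible with this reader
}
audioReader.add(audioReaderOutput)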
We need to keep track of the audio samples and of the new timing information:
var audioSamples:[CMSampleBuffer] = []
var timingInfos:[CMSampleTimingInfo] = []
Now start reading the samples. For each audio sample, get its timing info in order to generate new timing information that is relative to the end of the track (since we will write everything back in reverse order).
In other words, we are going to adjust the presentation times of the samples.
if audioReader.startReading() {
while audioReader.status == .reading {
if let sampleBuffer = audioReaderOutput.copyNextSampleBuffer(){
// process sample
}
}
}
So, to "process the sample", we use CMSampleBufferGetSampleTimingInfoArray to get its timingInfo (a CMSampleTimingInfo):
var timingInfo = CMSampleTimingInfo()
var timingInfoCount = CMItemCount(1)
CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: timingInfoCount, arrayToFill: &timingInfo, entriesNeededOut: &timingInfoCount)
Get the presentation time and the duration:
let presentationTime = timingInfo.presentationTimeStamp
let duration = CMSampleBufferGetDuration(sampleBuffer)
Compute the sample's end time:
let endTime = CMTimeAdd(presentationTime, duration)
Now compute the new presentation time, relative to the end of the track (for example, in a 60-second track, a buffer that originally ends at the 60-second mark and lasts 0.2 seconds gets a new presentation time of 0, making it the first buffer of the reversed track):
let newPresentationTime = CMTimeSubtract(self.duration, endTime)
And use it to update the timingInfo:
timingInfo.presentationTimeStamp = newPresentationTime
Finally, save the audio sample buffer together with its timing info; we will need both later when creating the reversed samples:
timingInfos.append(timingInfo)
audioSamples.append(sampleBuffer)
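Putting these pieces together, the "// process sample" placeholder in the reading loop expands to roughly the following (a sketch assembled from the snippets above; it runs inside the AVAsset extension, so self.duration is the total track duration):
if audioReader.startReading() {
    while audioReader.status == .reading {
        if let sampleBuffer = audioReaderOutput.copyNextSampleBuffer() {
            // Fetch the buffer's current timing info
            var timingInfo = CMSampleTimingInfo()
            var timingInfoCount = CMItemCount(1)
            CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: timingInfoCount, arrayToFill: &timingInfo, entriesNeededOut: &timingInfoCount)

            // Retime the buffer relative to the end of the track
            let presentationTime = timingInfo.presentationTimeStamp
            let duration = CMSampleBufferGetDuration(sampleBuffer)
            let endTime = CMTimeAdd(presentationTime, duration)
            timingInfo.presentationTimeStamp = CMTimeSubtract(self.duration, endTime)

            // Keep the buffer and its new timing for the writing pass
            timingInfos.append(timingInfo)
            audioSamples.append(sampleBuffer)
        }
    }
}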
We need an AVAssetWriter:
guard let assetWriter = try? AVAssetWriter(outputURL: destinationURL, fileType: AVFileType.wav) else {
// error handling
return
}
The file type is "wav" because the reversed samples will be written as uncompressed Linear PCM audio, as shown below.
For the assetWriter we specify audio compression settings and a "source format hint"; the latter can be obtained from one of the uncompressed sample buffers:
let sampleBuffer = audioSamples[0]
let sourceFormat = CMSampleBufferGetFormatDescription(sampleBuffer)
let audioCompressionSettings = [AVFormatIDKey: kAudioFormatLinearPCM] as [String : Any]
Now we can create the AVAssetWriterInput, add it to the writer, and start writing:
let assetWriterInput = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings:audioCompressionSettings, sourceFormatHint: sourceFormat)
assetWriter.add(assetWriterInput)
assetWriter.startWriting()
assetWriter.startSession(atSourceTime: CMTime.zero)
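The snippet above adds the input and starts writing unconditionally. A slightly more defensive variant (my addition, not part of the original answer) checks each step:
guard assetWriter.canAdd(assetWriterInput) else {
    return // the input's settings are incompatible with the writer or file type
}
assetWriter.add(assetWriterInput)
guard assetWriter.startWriting() else {
    return // inspect assetWriter.error for the reason
}
assetWriter.startSession(atSourceTime: CMTime.zero)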
Now iterate over the samples in reverse order, and also reverse each sample itself.
We have an extension on CMSampleBuffer that does exactly that, called "reverse".
Using requestMediaDataWhenReady we proceed as follows:
let nbrSamples = audioSamples.count
var index = 0
let serialQueue: DispatchQueue = DispatchQueue(label: "com.limit-point.reverse-audio-queue")
assetWriterInput.requestMediaDataWhenReady(on: serialQueue) {
while assetWriterInput.isReadyForMoreMediaData, index < nbrSamples {
let sampleBuffer = audioSamples[nbrSamples - 1 - index]
let timingInfo = timingInfos[index]
if let reversedBuffer = sampleBuffer.reverse(timingInfo: [timingInfo]), assetWriterInput.append(reversedBuffer) == true {
index += 1
}
else {
index = nbrSamples
}
if index == nbrSamples {
assetWriterInput.markAsFinished()
finishWriting() // call assetWriter.finishWriting, check assetWriter status, etc.
}
}
}
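The finishWriting() call above is a helper whose body the answer leaves out; a minimal sketch of what it might look like (the helper name and the surrounding stored properties are assumptions):
func finishWriting() {
    assetWriter.finishWriting {
        if assetWriter.status == .completed {
            // Success: the reversed audio is now at destinationURL
        } else {
            // Failure: inspect assetWriter.error
        }
    }
}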
So the last thing to explain is how the audio samples are reversed in the "reverse" method.
We created an extension on CMSampleBuffer that takes a sample buffer and returns the reversed sample buffer with the correct timing:
func reverse(timingInfo:[CMSampleTimingInfo]) -> CMSampleBuffer?
The data to be reversed has to be obtained using:
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer
The CMSampleBuffer header describes this method as follows:
"Creates an AudioBufferList containing the data from the CMSampleBuffer, and a CMBlockBuffer which references (and manages the lifespan of) the data in that AudioBufferList."
Call it as follows, where "self" refers to the CMSampleBuffer we are reversing, since this is an extension:
var blockBuffer: CMBlockBuffer? = nil
let audioBufferList: UnsafeMutableAudioBufferListPointer = AudioBufferList.allocate(maximumBuffers: 1)

// Bail out if the buffer list cannot be obtained (the call returns an OSStatus)
guard CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
    self,
    bufferListSizeNeededOut: nil,
    bufferListOut: audioBufferList.unsafeMutablePointer,
    bufferListSize: AudioBufferList.sizeInBytes(maximumBuffers: 1),
    blockBufferAllocator: nil,
    blockBufferMemoryAllocator: nil,
    flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
    blockBufferOut: &blockBuffer
) == noErr else {
    return nil
}
Now you can access the raw data as follows:
guard let data: UnsafeMutableRawPointer = audioBufferList.unsafePointer.pointee.mBuffers.mData else {
    return nil
}
To reverse the data, we need to access it as an array of "samples", here called sampleArray. In Swift that is done as follows:
let samples = data.assumingMemoryBound(to: Int16.self)
let sizeofInt16 = MemoryLayout<Int16>.size
let dataSize = audioBufferList.unsafePointer.pointee.mBuffers.mDataByteSize
let dataCount = Int(dataSize) / sizeofInt16
var sampleArray = Array(UnsafeBufferPointer(start: samples, count: dataCount)) as [Int16]
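One caveat (my note, not part of the original answer): AudioBufferList.allocate allocates memory that these snippets never release. Once the samples have been copied into sampleArray, the list can be freed; the underlying audio data stays alive because the CMBlockBuffer manages it:
// Safe once the data has been copied out into sampleArray
free(audioBufferList.unsafeMutablePointer)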
Now reverse the sampleArray:
sampleArray.reverse()
Using the reversed samples, we need to create a new CMSampleBuffer that contains them along with the new timing information we generated earlier, while reading the audio samples from the source file.
Now we replace the data in the CMBlockBuffer we obtained earlier via CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer.
First, reassign the "samples" using the reversed array:
var status:OSStatus = noErr
sampleArray.withUnsafeBytes { sampleArrayPtr in
if let baseAddress = sampleArrayPtr.baseAddress {
let bufferPointer: UnsafePointer<Int16> = baseAddress.assumingMemoryBound(to: Int16.self)
let rawPtr = UnsafeRawPointer(bufferPointer)
status = CMBlockBufferReplaceDataBytes(with: rawPtr, blockBuffer: blockBuffer!, offsetIntoDestination: 0, dataLength: Int(dataSize))
}
}
if status != noErr {
return nil
}
Finally, create the new sample buffer using CMSampleBufferCreate. This function needs two arguments we can obtain from the original sample buffer, namely the formatDescription and the numberOfSamples:
let formatDescription = CMSampleBufferGetFormatDescription(self)
let numberOfSamples = CMSampleBufferGetNumSamples(self)
var newBuffer:CMSampleBuffer?
Now create the new sample buffer from the reversed block buffer and, most notably, the new timing info that was passed as the argument to the "reverse" function we are defining:
guard CMSampleBufferCreate(allocator: kCFAllocatorDefault, dataBuffer: blockBuffer, dataReady: true, makeDataReadyCallback: nil, refcon: nil, formatDescription: formatDescription, sampleCount: numberOfSamples, sampleTimingEntryCount: timingInfo.count, sampleTimingArray: timingInfo, sampleSizeEntryCount: 0, sampleSizeArray: nil, sampleBufferOut: &newBuffer) == noErr else {
return self
}
return newBuffer
And that's all!
As a final note, the Core Audio and AVFoundation headers provide a lot of useful information, e.g. CoreAudioTypes.h, CMSampleBuffer.h, and so on.
Print out the size of each buffer, in number of samples, in your "reading" readerOutput while loop, and again in your "writing" writerInput for loop. That way you can see all the buffer sizes and check whether they add up.
For example, you may be dropping or skipping buffers: when if (writerInput.readyForMoreMediaData) is false you "sleep", but then move on to the next reversedSample in reversedSamples, so that buffer is effectively dropped from the writerInput.
UPDATE (based on the comments): I found two problems in the code:
1) The output settings don't match the source: the channel count should be [NSNumber numberWithInt:1], AVNumberOfChannelsKey, since the input file is mono while your settings specify two channels. Compare the info of the output and input files:
(Screenshot: input and output file information.)
2) Print each buffer's size in samples with size_t sampleSize = CMSampleBufferGetNumSamples(sample); and the output looks like:
2015-03-19 22:26:28.171 audioReverse[25012:4901250] Reading [0]: 8192
2015-03-19 22:26:28.172 audioReverse[25012:4901250] Reading [1]: 8192
...
2015-03-19 22:26:28.651 audioReverse[25012:4901250] Reading [640]: 8192
2015-03-19 22:26:28.651 audioReverse[25012:4901250] Reading [641]: 8192
2015-03-19 22:26:28.651 audioReverse[25012:4901250] Reading [642]: 5056
2015-03-19 22:26:28.651 audioReverse[25012:4901250] Writing [0]: 5056
2015-03-19 22:26:28.652 audioReverse[25012:4901250] Writing [1]: 8192
...
2015-03-19 22:26:29.134 audioReverse[25012:4901250] Writing [640]: 8192
2015-03-19 22:26:29.135 audioReverse[25012:4901250] Writing [641]: 8192
2015-03-19 22:26:29.135 audioReverse[25012:4901250] Writing [642]: 8192
This shows that you are reversing the order of the 8192-sample buffers, but inside each buffer the audio is still "facing forward". We can see this in a screenshot I took of correctly reversed (sample-by-sample) audio versus buffer-order reversal:
(Screenshot: correct sample-by-sample reversal vs. buffer-order-only reversal.)
I think your current scheme could work if you also reverse the samples within each 8192-sample buffer. Personally, I would not recommend using NSArray enumerators for signal processing, but it can work if you operate at the sample level.