Ath*_*dya 8 c++ audio macos objective-c core-audio
I have an AudioInputIOProc from which I get an AudioBufferList. I need to convert this AudioBufferList to a CMSampleBufferRef.
Here is the code I have written so far:
- (void)handleAudioSamples:(const AudioBufferList*)samples numSamples:(UInt32)numSamples hostTime:(UInt64)hostTime {
    // Create a CMSampleBufferRef from the list of samples, which we'll own
    AudioStreamBasicDescription monoStreamFormat;
    memset(&monoStreamFormat, 0, sizeof(monoStreamFormat));
    monoStreamFormat.mSampleRate = 44100;
    monoStreamFormat.mFormatID = kAudioFormatMPEG4AAC;
    monoStreamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
    monoStreamFormat.mBytesPerPacket = 4;
    monoStreamFormat.mFramesPerPacket = 1;
    monoStreamFormat.mBytesPerFrame = 4;
    monoStreamFormat.mChannelsPerFrame = 2;
    monoStreamFormat.mBitsPerChannel = 16;

    CMFormatDescriptionRef format = NULL;
    OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &monoStreamFormat, 0, NULL, 0, NULL, NULL, &format);
    if (status != noErr) {
        // really shouldn't happen
        return;
    }

    mach_timebase_info_data_t tinfo;
    mach_timebase_info(&tinfo);

    double _hostTimeToNSFactor = (double)tinfo.numer / tinfo.denom;
    uint64_t timeNS = (uint64_t)(hostTime * _hostTimeToNSFactor);
    CMTime presentationTime = CMTimeMake(timeNS, 1000000000);
    CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };

    CMSampleBufferRef sampleBuffer = NULL;
    status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numSamples, 1, &timing, 0, NULL, &sampleBuffer);
    if (status != noErr) {
        // couldn't create the sample buffer
        NSLog(@"Failed to create sample buffer");
        CFRelease(format);
        return;
    }

    // add the samples to the buffer
    status = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer,
                                                            kCFAllocatorDefault,
                                                            kCFAllocatorDefault,
                                                            0,
                                                            samples);
    if (status != noErr) {
        NSLog(@"Failed to add samples to sample buffer");
        CFRelease(sampleBuffer);
        CFRelease(format);
        NSLog(@"Error status code: %d", status);
        return;
    }

    [self addAudioFrame:sampleBuffer];

    NSLog(@"Original sample buf size: %zu for %u samples from %u buffers, first buffer has size %u", CMSampleBufferGetTotalSampleSize(sampleBuffer), numSamples, samples->mNumberBuffers, samples->mBuffers[0].mDataByteSize);
    NSLog(@"Original sample buf has %ld samples", CMSampleBufferGetNumSamples(sampleBuffer));
}
Now, I'm not sure how to calculate numSamples given this AudioInputIOProc function definition:
OSStatus AudioTee::InputIOProc(AudioDeviceID inDevice,
                               const AudioTimeStamp *inNow,
                               const AudioBufferList *inInputData,
                               const AudioTimeStamp *inInputTime,
                               AudioBufferList *outOutputData,
                               const AudioTimeStamp *inOutputTime,
                               void *inClientData)
This definition lives in WavTap's AudioTee.cpp file.
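For reference, frame counts for LPCM data are usually derived from a buffer's byte size. A minimal sketch of what that might look like inside the IOProc, assuming packed LPCM and a hypothetical asbd variable holding the device's AudioStreamBasicDescription (neither is confirmed by WavTap's code):

// Sketch: derive the frame count from the first buffer's byte size.
// asbd (hypothetical) is the device's stream description; for the 32-bit
// stereo interleaved data described in the update, mBytesPerFrame would be 8.
UInt32 numSamples = inInputData->mBuffers[0].mDataByteSize / asbd.mBytesPerFrame;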
When I try to call CMSampleBufferSetDataBufferFromAudioBufferList, I get a CMSampleBufferError_RequiredParameterMissing error (error code -12731).
Update:
To clarify the problem, here is the format of the audio data I get from the AudioDeviceIOProc:
Channels: 2, Sample Rate: 44100, Precision: 32-bit, Sample Encoding: 32-bit Signed Integer PCM, Endian Type: little, Reverse Nibbles: no, Reverse Bits: no
The AudioBufferList* I get contains all the audio data (30 seconds' worth) that I need to convert to CMSampleBufferRefs, and I then add those sample buffers to a 30-second video being written to disk via an AVAssetWriterInput.
Three things look wrong:

1. You declare the format ID as kAudioFormatMPEG4AAC, but configure the description as LPCM. So try
monoStreamFormat.mFormatID = kAudioFormatLinearPCM;
You're also calling it "mono" when it's configured as stereo.
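Combined with the 32-bit signed integer format reported in the update, the description might end up looking like this (a sketch; whether the device stream is interleaved, and the exact flags, are assumptions to verify against the actual hardware format):

AudioStreamBasicDescription stereoStreamFormat = {0};
stereoStreamFormat.mSampleRate       = 44100;
stereoStreamFormat.mFormatID         = kAudioFormatLinearPCM;    // LPCM, not AAC
stereoStreamFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger |
                                       kAudioFormatFlagsNativeEndian |
                                       kAudioFormatFlagIsPacked; // interleaved (no NonInterleaved flag)
stereoStreamFormat.mBitsPerChannel   = 32;                       // 32-bit samples, per the update
stereoStreamFormat.mChannelsPerFrame = 2;                        // stereo
stereoStreamFormat.mBytesPerFrame    = 2 * sizeof(SInt32);       // channels * bytes per sample
stereoStreamFormat.mFramesPerPacket  = 1;                        // always 1 for LPCM
stereoStreamFormat.mBytesPerPacket   = stereoStreamFormat.mBytesPerFrame;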
2. Why use mach_timebase_info, which can leave gaps in your audio presentation timestamps? Use a sample count instead:
CMTime presentationTime = CMTimeMake(numSamplesProcessed, 44100);
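Here numSamplesProcessed is a running count of frames already handed off. One way to keep it (a sketch, with _numSamplesProcessed as a hypothetical instance variable zeroed when capture starts):

// Hypothetical running frame counter; timestamps stay contiguous
// as long as no input buffers are dropped.
CMTime presentationTime = CMTimeMake(_numSamplesProcessed, 44100);
_numSamplesProcessed += numSamples;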
3. Your CMSampleTimingInfo looks wrong, and you're not using presentationTime. You set the buffer's duration to 1 sample when it should be numSamples, and its presentation time to zero, which isn't right. Something like this would make more sense:
CMSampleTimingInfo timing = { CMTimeMake(numSamples, 44100), presentationTime, kCMTimeInvalid };
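Plugged into the question's own CMSampleBufferCreate call (all other arguments unchanged), that would look like:

// Sketch: duration covers all numSamples frames, and presentationTime
// (from the sample count above) is actually used.
CMSampleTimingInfo timing = { CMTimeMake(numSamples, 44100), presentationTime, kCMTimeInvalid };
CMSampleBufferRef sampleBuffer = NULL;
status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL,
                              format, numSamples, 1, &timing, 0, NULL, &sampleBuffer);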
And some questions:
Does your AudioBufferList have the expected two AudioBuffers? Do you have a runnable version of this?
p.s. I'm guilty of it myself, but allocating memory on the audio thread is considered harmful in audio development.