coc*_*her 8

Tags: iphone, audio, volume, audioqueueservices, audiotoolbox
I want to read a sound file from the application bundle, copy it, adjust its maximum volume level (the gain value or peak power, I'm not sure of the technical name), and then write it back to the bundle as another file.
I've done the copy-and-write part. The resulting file is identical to the input file. I used the AudioFile Services functions AudioFileReadBytes() and AudioFileWriteBytes() from the AudioToolbox framework to do this.
So I have the input file's bytes as well as its audio data format (obtained via AudioFileGetProperty() with kAudioFilePropertyDataFormat), but I can't find a variable in either of these that would let me adjust the original file's maximum volume level.
To clarify my purpose: I'm trying to generate another sound file whose volume level is increased or decreased relative to the original, so I don't care about the system volume level, which is set by the user or by iOS.
Is this possible with the frameworks I mentioned? If not, are there any other suggestions?
Thanks.
Edit: In light of Sam's answer covering some audio basics, I decided to extend the question with another approach.
Can I use Audio Queue Services to record an existing sound file (in the bundle) into another file, and adjust the volume level (with the help of the framework) during the recording phase?
Update: Here is how I read the input file and write the output. The code below reduces the sound level for "some" amplitude values, but with a lot of noise. Interestingly, if I choose 0.5 as the amplitude value it increases the sound level instead of reducing it, but when I use 0.1 it does reduce it. Both cases involve disturbing noise. I think this is why Art was talking about normalization, but I don't know anything about normalization.
AudioFileID inFileID;
CFURLRef inURL = [self inSoundURL];
AudioFileOpenURL(inURL, kAudioFileReadPermission, kAudioFileWAVEType, &inFileID);
UInt32 fileSize = [self audioFileSize:inFileID];
Float32 *inData = malloc(fileSize * sizeof(Float32)); //I used Float32 type with jv42's suggestion
AudioFileReadBytes(inFileID, false, 0, &fileSize, inData);
Float32 *outData = malloc(fileSize * sizeof(Float32));
//Art's suggestion, if I've correctly understood him
float ampScale = 0.5f; //this will reduce the 'volume' by -6db
for (int i = 0; i < fileSize; i++) {
outData[i] = (Float32)(inData[i] * ampScale);
}
AudioStreamBasicDescription outDataFormat = [self audioDataFormat:inFileID];
AudioFileID outFileID;
CFURLRef outURL = [self outSoundURL];
AudioFileCreateWithURL(outURL, kAudioFileWAVEType, &outDataFormat, kAudioFileFlags_EraseFile, &outFileID);
AudioFileWriteBytes(outFileID, false, 0, &fileSize, outData);
AudioFileClose(outFileID);
AudioFileClose(inFileID);
Art*_*pie 14
You won't find an amplitude-scaling operation in (Ext)AudioFile, because it's about the simplest DSP you can do yourself.
Assuming you use ExtAudioFile to convert whatever you read into 32-bit floats, then to change the amplitude you simply multiply:
float ampScale = 0.5f; //this will reduce the 'volume' by -6db
for (int ii=0; ii<numSamples; ++ii) {
*sampOut = *sampIn * ampScale;
sampOut++; sampIn++;
}
To increase the gain, just use a scale > 1.f. For example, an ampScale of 2.f would give you +6 dB of gain.
If you want to normalize, you have to make two passes over the audio: one to determine the sample with the largest amplitude, then another to actually apply your computed gain.
Using Audio Queue Services just to get access to a volume property is serious, serious overkill.
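The two-pass normalization described here can be sketched in plain C on a bare float buffer (a sketch under the assumption of deinterleaved 32-bit float samples; not tied to the Core Audio code below):

```c
#include <math.h>
#include <stddef.h>

/* Two-pass peak normalization of a float sample buffer:
 * pass 1 finds the largest absolute sample value (the peak),
 * pass 2 scales every sample so the peak lands at targetPeak
 * (e.g. 1.0f for full scale). */
static void normalize_buffer(float *samples, size_t count, float targetPeak) {
    float peak = 0.0f;
    for (size_t i = 0; i < count; ++i) {
        float a = fabsf(samples[i]);
        if (a > peak) peak = a;
    }
    if (peak == 0.0f) return; /* silent buffer: nothing to scale */
    float scale = targetPeak / peak;
    for (size_t i = 0; i < count; ++i)
        samples[i] *= scale;
}
```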
更新:
In your updated code, you're multiplying each byte by 0.5 instead of each sample. Here's a quick fix for your code, but see my notes below. I wouldn't do what you're doing.
...
// create short pointers to our byte data
int16_t *inDataShort = (int16_t *)inData;
int16_t *outDataShort = (int16_t *)outData;
int16_t ampScale = 2;
// fileSize is in bytes, and there are 2 bytes per 16-bit sample
for (int i = 0; i < fileSize / 2; i++) {
    outDataShort[i] = inDataShort[i] / ampScale;
}
...
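Note that the quick fix above still works in int16_t, and with a gain above 1 an integer multiply can overflow and wrap around, which is yet another source of audible noise. A hedged sketch (my addition, not part of the original answer) of scaling 16-bit PCM through a wider intermediate type with clipping:

```c
#include <stdint.h>
#include <stddef.h>

/* Scale 16-bit signed PCM samples by a float factor, computing in
 * 32-bit and clipping to the int16_t range to avoid wraparound. */
static void scale_pcm16(const int16_t *in, int16_t *out,
                        size_t count, float ampScale) {
    for (size_t i = 0; i < count; ++i) {
        int32_t v = (int32_t)((float)in[i] * ampScale);
        if (v > INT16_MAX) v = INT16_MAX;
        if (v < INT16_MIN) v = INT16_MIN;
        out[i] = (int16_t)v;
    }
}
```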
Of course, this isn't the best way to do it: it assumes your file is little-endian 16-bit signed linear PCM. (Most WAV files are, but not AIFF, m4a, mp3, etc.) I would use the ExtAudioFile API instead of the AudioFile API, since it converts whatever format you're reading into whatever format you want to work with in code. Usually the simplest thing is to read the samples in as 32-bit floats. Here's a code example using ExtAudioFile that handles any input file format, stereo or mono:
void ScaleAudioFileAmplitude(NSURL *theURL, float ampScale) {
OSStatus err = noErr;
ExtAudioFileRef audiofile;
ExtAudioFileOpenURL((CFURLRef)theURL, &audiofile);
assert(audiofile);
// get some info about the file's format.
AudioStreamBasicDescription fileFormat;
UInt32 size = sizeof(fileFormat);
err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileDataFormat, &size, &fileFormat);
// we'll need to know what type of file it is later when we write
AudioFileID aFile;
size = sizeof(aFile);
err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_AudioFile, &size, &aFile);
AudioFileTypeID fileType;
size = sizeof(fileType);
err = AudioFileGetProperty(aFile, kAudioFilePropertyFileFormat, &size, &fileType);
// tell the ExtAudioFile API what format we want samples back in
AudioStreamBasicDescription clientFormat;
bzero(&clientFormat, sizeof(clientFormat));
clientFormat.mChannelsPerFrame = fileFormat.mChannelsPerFrame;
clientFormat.mBytesPerFrame = 4;
clientFormat.mBytesPerPacket = clientFormat.mBytesPerFrame;
clientFormat.mFramesPerPacket = 1;
clientFormat.mBitsPerChannel = 32;
clientFormat.mFormatID = kAudioFormatLinearPCM;
clientFormat.mSampleRate = fileFormat.mSampleRate;
clientFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved;
err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);
// find out how many frames we need to read
SInt64 numFrames = 0;
size = sizeof(numFrames);
err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileLengthFrames, &size, &numFrames);
// create the buffers for reading in data
AudioBufferList *bufferList = malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (clientFormat.mChannelsPerFrame - 1));
bufferList->mNumberBuffers = clientFormat.mChannelsPerFrame;
for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
bufferList->mBuffers[ii].mDataByteSize = sizeof(float) * numFrames;
bufferList->mBuffers[ii].mNumberChannels = 1;
bufferList->mBuffers[ii].mData = malloc(bufferList->mBuffers[ii].mDataByteSize);
}
// read in the data
UInt32 rFrames = (UInt32)numFrames;
err = ExtAudioFileRead(audiofile, &rFrames, bufferList);
// close the file
err = ExtAudioFileDispose(audiofile);
// process the audio
for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
float *fBuf = (float *)bufferList->mBuffers[ii].mData;
for (int jj=0; jj < rFrames; ++jj) {
*fBuf = *fBuf * ampScale;
fBuf++;
}
}
// open the file for writing
err = ExtAudioFileCreateWithURL((CFURLRef)theURL, fileType, &fileFormat, NULL, kAudioFileFlags_EraseFile, &audiofile);
// tell the ExtAudioFile API what format we'll be sending samples in
err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);
// write the data
err = ExtAudioFileWrite(audiofile, rFrames, bufferList);
// close the file
ExtAudioFileDispose(audiofile);
// destroy the buffers
for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
free(bufferList->mBuffers[ii].mData);
}
free(bufferList);
bufferList = NULL;
}
Viewed: 6228 times