Objective-C - How do I serialize an audio file into small packets that can be played?

vfn*_*vfn · score 0 · tags: audio, macos, serialization, objective-c, audioqueueservices

So, I want to take a sound file and convert it into packets that I can send to another computer. I want the other computer to be able to play the packets as they arrive.

I am using AVAudioPlayer to try to play these packets, but I can't find a proper way to serialize the data on peer1 so that peer2 can play it.

The scenario is: peer1 has an audio file, splits it into many small packets, puts them into NSData objects, and sends them to peer2. Peer2 receives the packets and plays them one by one, as they arrive.

Does anyone know how to do this? Or whether it is even possible?

Edit:

Here is a piece of code illustrating what I would like to achieve.


// This code is part of the peer1, the one who sends the data
- (void)sendData
{
    int packetId = 0;
    NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"myAudioFile" ofType:@"wav"];

    NSData *soundData = [[NSData alloc] initWithContentsOfFile:soundFilePath];
    NSMutableArray *arraySoundData = [[NSMutableArray alloc] init];

    // Splitting the audio into 2 pieces
    // This is only an illustration;
    // the idea is to split the data into multiple pieces
    // depending on the size of the file to be sent
    NSRange soundRange;
    soundRange.length = [soundData length]/2;
    soundRange.location = 0;
    [arraySoundData addObject:[soundData subdataWithRange:soundRange]];
    // the second half takes the remainder, so odd-sized files lose no bytes
    soundRange.location = [soundData length]/2;
    soundRange.length = [soundData length] - soundRange.location;
    [arraySoundData addObject:[soundData subdataWithRange:soundRange]];

    for (int i=0; i<[arraySoundData count]; i++)
    {
        NSData *soundPacket = [arraySoundData objectAtIndex:i];

        if(soundPacket == nil)
        {
            NSLog(@"soundPacket is nil");
            return;
        }       

        NSMutableData* message = [[NSMutableData alloc] init];
        NSKeyedArchiver* archiver = [[NSKeyedArchiver alloc] initForWritingWithMutableData:message];
        [archiver encodeInt:packetId++ forKey:PACKET_ID];
        [archiver encodeObject:soundPacket forKey:PACKET_SOUND_DATA];
        [archiver finishEncoding];      

        NSError* error = nil;
        [connectionManager sendMessage:message error:&error];
        if (error) NSLog (@"send greeting failed: %@" , [error localizedDescription]);

        [message release];
        [archiver release];
    }

    [soundData release];
    [arraySoundData release];
}

// This is the code on peer2 that would receive and play the piece of audio on each packet

- (void) receiveData:(NSData *)data
{

    NSKeyedUnarchiver* unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:data];

    if ([unarchiver containsValueForKey:PACKET_ID])
        NSLog(@"DECODED PACKET_ID: %i", [unarchiver decodeIntForKey:PACKET_ID]);

    if ([unarchiver containsValueForKey:PACKET_SOUND_DATA])
    {
        NSLog(@"DECODED sound");
        NSData *sound = (NSData *)[unarchiver decodeObjectForKey:PACKET_SOUND_DATA];

        if (sound == nil)
        {
            NSLog(@"sound is nil!");

        }
        else
        {
            NSLog(@"sound is not nil!");

            // -initWithData:error: may return a different object than +alloc,
            // so always use its return value
            AVAudioPlayer *audioPlayer = [[AVAudioPlayer alloc] initWithData:sound error:nil];

            if (audioPlayer)
            {
                [audioPlayer prepareToPlay];
                [audioPlayer play];
                // NOTE: still leaks; release the player in the
                // audioPlayerDidFinishPlaying:successfully: delegate callback
            } else {
                NSLog(@"Player couldn't load data");
            }
        }
    }

    [unarchiver release];
}

So, that's what I'd like to achieve... What I really need to know is how to create the packets so that peer2 can play the audio.

It would be a kind of streaming. And no, right now I'm not worried about the order in which the packets are received or played... I only need the sound sliced up, and for peer2 to be able to play each piece, one after another, without waiting for the whole file to arrive.
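Whatever serialization you pick, the core of this kind of streaming is just framing: each network message carries a small header (a packet id and a payload length) followed by raw audio bytes. Here is a minimal sketch in plain C of such a hypothetical wire format; the `PacketHeader` layout and the `make_packet`/`parse_packet` helpers are illustrative inventions, not part of any Apple API:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical wire format: a fixed 8-byte header (packet id + payload
   length) followed by the raw audio bytes for that chunk. */
typedef struct {
    uint32_t packet_id;
    uint32_t payload_len;
} PacketHeader;

/* Serialize one chunk into a freshly allocated buffer: header + payload.
   The caller owns the returned memory. */
unsigned char *make_packet(uint32_t id, const unsigned char *payload,
                           uint32_t len, size_t *out_size)
{
    *out_size = sizeof(PacketHeader) + len;
    unsigned char *buf = malloc(*out_size);
    if (!buf) return NULL;
    PacketHeader h = { id, len };
    memcpy(buf, &h, sizeof h);
    memcpy(buf + sizeof h, payload, len);
    return buf;
}

/* Parse a received packet back into its header and a payload pointer.
   Returns 0 on success, -1 if the buffer is too small or inconsistent. */
int parse_packet(const unsigned char *buf, size_t size,
                 PacketHeader *hdr, const unsigned char **payload)
{
    if (size < sizeof(PacketHeader)) return -1;
    memcpy(hdr, buf, sizeof *hdr);
    if (size - sizeof *hdr < hdr->payload_len) return -1;
    *payload = buf + sizeof *hdr;
    return 0;
}
```

The receiver can play or reorder chunks by looking at `packet_id`; this is essentially what the NSKeyedArchiver keys PACKET_ID and PACKET_SOUND_DATA encode in the snippets above, with less overhead.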

Thanks!

Vla*_*mir · score 8

Here is the simplest class to play a file with AQ (Audio Queue Services). Note that you can play it from any point (just set currentPacketNumber).

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

@interface AudioFile : NSObject {
    AudioFileID                     fileID;     // the identifier for the audio file to play
    AudioStreamBasicDescription     format;
    UInt64                          packetsCount;           
    UInt32                          maxPacketSize;  
}

@property (readwrite)           AudioFileID                 fileID;
@property (readwrite)           AudioStreamBasicDescription format; // needed by @synthesize format in the .m
@property (readwrite)           UInt64                      packetsCount;
@property (readwrite)           UInt32                      maxPacketSize;

- (id) initWithURL: (CFURLRef) url;
- (AudioStreamBasicDescription *)audioFormatRef;

@end


//  AudioFile.m

#import "AudioFile.h"


@implementation AudioFile

@synthesize fileID;
@synthesize format;
@synthesize maxPacketSize;
@synthesize packetsCount;

- (id)initWithURL:(CFURLRef)url{
    if (self = [super init]){       
        AudioFileOpenURL(
                         url,
                         0x01, //fsRdPerm, read only
                         0, //no hint
                         &fileID
                         );

        UInt32 sizeOfPlaybackFormatASBDStruct = sizeof format;
        AudioFileGetProperty (
                              fileID, 
                              kAudioFilePropertyDataFormat,
                              &sizeOfPlaybackFormatASBDStruct,
                              &format
                              );

        UInt32 propertySize = sizeof (maxPacketSize);

        AudioFileGetProperty (
                              fileID, 
                              kAudioFilePropertyMaximumPacketSize,
                              &propertySize,
                              &maxPacketSize
                              );

        propertySize = sizeof(packetsCount);
        AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataPacketCount, &propertySize, &packetsCount);
    }
    return self;
} 

-(AudioStreamBasicDescription *)audioFormatRef{
    return &format;
}

- (void) dealloc {
    AudioFileClose(fileID);
    [super dealloc];
}

@end



//  AQPlayer.h

#import <Foundation/Foundation.h>
#import "AudioFile.h"

#define AUDIOBUFFERS_NUMBER     3
#define MAX_PACKET_COUNT    4096

@interface AQPlayer : NSObject {
@public
    AudioQueueRef                   queue;
    AudioQueueBufferRef             buffers[AUDIOBUFFERS_NUMBER];
    NSInteger                       bufferByteSize;
    AudioStreamPacketDescription    packetDescriptions[MAX_PACKET_COUNT];

    AudioFile * audioFile;
    SInt64  currentPacketNumber;
    UInt32  numPacketsToRead;
}

@property (nonatomic)               SInt64          currentPacketNumber;
@property (nonatomic, retain)       AudioFile       * audioFile;

-(id)initWithFile:(NSString *)file;
-(NSInteger)fillBuffer:(AudioQueueBufferRef)buffer;
-(void)play;

@end 

//  AQPlayer.m

#import "AQPlayer.h"

static void AQOutputCallback(void * inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    AQPlayer * aqp = (AQPlayer *)inUserData;
    [aqp fillBuffer:(AudioQueueBufferRef)inBuffer];
}

@implementation AQPlayer

@synthesize currentPacketNumber;
@synthesize audioFile;

-(id)initWithFile:(NSString *)file{
    if (self = [super init]){
        audioFile = [[AudioFile alloc] initWithURL:[NSURL fileURLWithPath:file]];
        currentPacketNumber = 0;
        AudioQueueNewOutput ([audioFile audioFormatRef], AQOutputCallback, self, CFRunLoopGetCurrent (), kCFRunLoopCommonModes, 0, &queue);
        bufferByteSize = 4096;
        if (bufferByteSize < audioFile.maxPacketSize) bufferByteSize = audioFile.maxPacketSize; 
        numPacketsToRead = bufferByteSize/audioFile.maxPacketSize;
        for(int i=0; i<AUDIOBUFFERS_NUMBER; i++){
            AudioQueueAllocateBuffer (queue, bufferByteSize, &buffers[i]);
        }
    }
    return self;
}

-(void) dealloc{
    [audioFile release];
    if (queue){
        AudioQueueDispose(queue, YES);
        queue = nil;
    }
    [super dealloc];
}

- (void)play{
    for (int bufferIndex = 0; bufferIndex < AUDIOBUFFERS_NUMBER; ++bufferIndex){
        [self fillBuffer:buffers[bufferIndex]];
    }
    AudioQueueStart (queue, NULL);

}

-(NSInteger)fillBuffer:(AudioQueueBufferRef)buffer{
    UInt32 numBytes;
    UInt32 numPackets = numPacketsToRead;
    BOOL isVBR = [audioFile audioFormatRef]->mBytesPerPacket == 0 ? YES : NO;
    AudioFileReadPackets(
                         audioFile.fileID,
                         NO,
                         &numBytes,
                         isVBR ? packetDescriptions : 0,
                         currentPacketNumber,
                         &numPackets, 
                         buffer->mAudioData
                         );

    if (numPackets > 0) {
        buffer->mAudioDataByteSize = numBytes;      
        AudioQueueEnqueueBuffer (
                                 queue,
                                 buffer,
                                 isVBR ? numPackets : 0,
                                 isVBR ? packetDescriptions : 0
                                 );


    } 
    else{
        // end of present data, check if all packets are played
        // if yes, stop play and dispose queue
        // if no, pause queue till new data arrive then start it again
    }
    return  numPackets;
}

@end

  • To play data when the beginning of the file is not available (for peerC in vfn's earlier comments), you should initially send the `AudioStreamBasicDescription format`, `UInt64 packetsCount`, and `UInt32 maxPacketSize`; create the audio queue, then feed any segment of the file to an AudioFileStream with AudioFileStreamParseBytes, and receive the parsed data in the AudioFileStream_PacketsProc callback to fill the AQ buffers. (2 upvotes)

Vla*_*mir · score 5

It seems you are solving the wrong task, because AVAudioPlayer can only play a whole audio file. You should use Audio Queue Services from the AudioToolbox framework instead, to play audio on a packet-by-packet basis. In fact you don't need to split the audio file into real sound packets; you can use any chunk of data, as in your own example above, but then you should read the received chunks using Audio File Services or Audio File Stream Services functions (from AudioToolbox) and feed them to the audio queue buffers.

If you still want to split the audio file into sound packets, you can do it easily with Audio File Services functions. An audio file consists of a header, which holds properties such as the number of packets, the sample rate, and the number of channels, followed by the raw sound data.
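As a concrete illustration of that header-plus-data layout, here is a standalone sketch in plain C (deliberately not using AudioToolbox) that walks the chunks of a RIFF/WAVE file and pulls the channel count and sample rate out of the `fmt ` chunk; a real reader would need much more validation:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Read little-endian integers from a byte buffer. */
static uint32_t rd32(const unsigned char *p) {
    return p[0] | (p[1] << 8) | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}
static uint16_t rd16(const unsigned char *p) { return p[0] | (p[1] << 8); }

/* Walk the RIFF chunks looking for "fmt ". Returns 0 on success and
   fills channels and sample_rate; returns -1 on malformed input. */
int wav_info(const unsigned char *buf, size_t len,
             uint16_t *channels, uint32_t *sample_rate)
{
    if (len < 12 || memcmp(buf, "RIFF", 4) || memcmp(buf + 8, "WAVE", 4))
        return -1;
    size_t off = 12;
    while (off + 8 <= len) {
        uint32_t csize = rd32(buf + off + 4);
        if (!memcmp(buf + off, "fmt ", 4) && off + 8 + 16 <= len) {
            *channels    = rd16(buf + off + 10); /* after the 2-byte format tag */
            *sample_rate = rd32(buf + off + 12);
            return 0;
        }
        off += 8 + csize + (csize & 1); /* chunks are word-aligned */
    }
    return -1;
}
```

This is exactly the kind of bookkeeping that AudioFileOpenURL and AudioFileGetProperty (shown below) do for you, for many container formats at once.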

Open the audio file with AudioFileOpenURL and get all its properties with the AudioFileGetProperty function. Basically you only need the kAudioFilePropertyDataFormat and kAudioFilePropertyAudioDataPacketCount properties:

AudioFileID  fileID;        // the identifier for the audio file
CFURLRef     fileURL = ...; // file URL
AudioStreamBasicDescription format; // structure containing audio header info
UInt64       packetsCount;

AudioFileOpenURL(fileURL, 
    0x01, //fsRdPerm, read only
    0, //no hint
    &fileID
);

UInt32 sizeOfPlaybackFormatASBDStruct = sizeof format;
AudioFileGetProperty (
    fileID, 
    kAudioFilePropertyDataFormat,
    &sizeOfPlaybackFormatASBDStruct,
    &format
);

UInt32 propertySize = sizeof(packetsCount);
AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataPacketCount, &propertySize, &packetsCount);

Then you can read any range of audio packet data using:

OSStatus AudioFileReadPackets (
    AudioFileID                  inAudioFile,
    Boolean                      inUseCache,
    UInt32                       *outNumBytes,
    AudioStreamPacketDescription *outPacketDescriptions,
    SInt64                       inStartingPacket,
    UInt32                       *ioNumPackets,
    void                         *outBuffer
);
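One note on AudioFileReadPackets for constant-bitrate formats: when `mBytesPerPacket` in the AudioStreamBasicDescription is non-zero (as with linear PCM), every packet has the same size, so mapping any packet range to a byte range in the file's audio-data region is simple multiplication. A small hypothetical helper in plain C showing just the arithmetic (`ByteRange` and `cbr_packet_range` are illustrative names, not AudioToolbox API):

```c
#include <stdint.h>

/* A byte range within the file's audio-data region. */
typedef struct {
    uint64_t location;  /* byte offset from the start of the audio data */
    uint64_t length;    /* byte count covering the requested packets */
} ByteRange;

/* For CBR audio, packet n starts at n * bytes_per_packet, so a window of
   packet_count packets starting at start_packet maps to this byte range. */
ByteRange cbr_packet_range(uint64_t start_packet, uint32_t packet_count,
                           uint32_t bytes_per_packet)
{
    ByteRange r = { start_packet * (uint64_t)bytes_per_packet,
                    (uint64_t)packet_count * bytes_per_packet };
    return r;
}
```

For VBR formats this shortcut does not apply, which is why AudioFileReadPackets returns the AudioStreamPacketDescription array that the queue needs when enqueuing buffers.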