Tags: iphone, objective-c, avfoundation, ios, avassetwriter
I'm producing video from a Unity app on iOS. I'm using iVidCap, which uses AVFoundation to do this, and that side is all working fine. Essentially the video is rendered by using a texture render target and passing the frames to an Obj-C plugin.
Now I need to add audio to the video. The audio will be sound effects that occur at specific times, plus possibly some background sound. The files being used are actually assets internal to the Unity app. I could write them out to phone storage and then build an AVComposition, but my plan is to avoid that and composite the audio in floating-point buffers (audio obtained from audio clips comes in float format). Later I might do some runtime audio effects.
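By "compositing in float buffers" I just mean summing clip samples into a master mix buffer at the right sample offset, roughly like this (MixClipIntoBuffer and its arguments are purely illustrative, not real code from the project):

#include <math.h>

// Rough sketch only: sum an effect clip's interleaved float samples into a mix
// buffer starting at a given frame, with a simple clamp to avoid clipping.
static void MixClipIntoBuffer(float *mixBuf, size_t mixFrames,
                              const float *clip, size_t clipFrames,
                              size_t startFrame, size_t nchans, float gain) {
    for (size_t f = 0; f < clipFrames && (startFrame + f) < mixFrames; ++f) {
        for (size_t c = 0; c < nchans; ++c) {
            size_t i = (startFrame + f) * nchans + c;
            float s = mixBuf[i] + gain * clip[f * nchans + c];
            mixBuf[i] = fmaxf(-1.0f, fminf(1.0f, s));
        }
    }
}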
After several hours I managed to get audio recorded and playing back alongside the video... but it stutters.
At the moment I'm just generating a square wave for the duration of each video frame and writing it to an AVAssetWriterInput. Later I'll generate the audio I actually want.
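Roughly, that test-tone path looks like this (a sketch only: writeTestToneForOneVideoFrame and phase_ are illustrative names; writeAudioBuffer is the method shown further down):

#include <math.h>
#include <stdlib.h>

// Sketch: fill one video frame's worth of interleaved stereo float samples
// with a square wave and hand them to writeAudioBuffer.
- (void)writeTestToneForOneVideoFrame {
    size_t samplesPerFrame = 44100 / frameRate;
    float* buf = (float*)malloc(samplesPerFrame * 2 * sizeof(float));
    for (size_t i = 0; i < samplesPerFrame; ++i) {
        // ~440 Hz square wave: positive for the first half of each period.
        float v = (fmod(phase_, 1.0) < 0.5) ? 0.5f : -0.5f;
        buf[2 * i]     = v;  // left
        buf[2 * i + 1] = v;  // right
        phase_ += 440.0 / 44100.0;
    }
    [self writeAudioBuffer:buf sampleCount:samplesPerFrame channelCount:2];
    // NB: buf must stay valid until the writer is done with it (see the notes
    // further down about copying), so it is not freed immediately here.
}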
If I generate one big batch of samples, I don't get the stuttering. If I write it in blocks (which I'd much prefer over allocating a huge array), the blocks of audio seem to clip each other:

I can't seem to figure out why this is. I'm fairly sure the timestamps on the audio buffers are correct, but maybe I'm doing this whole part incorrectly. Or do I need some flag to get the video to sync to the audio? I can't see that being the problem, because after extracting the audio data to a wav I can see the problem in a waveform editor.
The relevant code for writing the audio:
- (id)init {
    self = [super init];
    if (self) {
        // [snip]

        rateDenominator = 44100;
        rateMultiplier = rateDenominator / frameRate;

        sample_position_ = 0;
        audio_fmt_desc_ = nil;

        int nchannels = 2;
        AudioStreamBasicDescription audioFormat;
        bzero(&audioFormat, sizeof(audioFormat));
        audioFormat.mSampleRate = 44100;
        audioFormat.mFormatID = kAudioFormatLinearPCM;
        audioFormat.mFramesPerPacket = 1;
        audioFormat.mChannelsPerFrame = nchannels;
        int bytes_per_sample = sizeof(float);
        audioFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsAlignedHigh;
        audioFormat.mBitsPerChannel = bytes_per_sample * 8;
        audioFormat.mBytesPerPacket = bytes_per_sample * nchannels;
        audioFormat.mBytesPerFrame = bytes_per_sample * nchannels;

        CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                       &audioFormat,
                                       0,
                                       NULL,
                                       0,
                                       NULL,
                                       NULL,
                                       &audio_fmt_desc_);
    }
    return self;
}
- (BOOL)beginRecordingSession {
    NSError* error = nil;

    isAborted = false;
    abortCode = No_Abort;

    // Allocate the video writer object.
    videoWriter = [[AVAssetWriter alloc] initWithURL:[self getVideoFileURLAndRemoveExisting:recordingPath]
                                            fileType:AVFileTypeMPEG4
                                               error:&error];
    if (error) {
        NSLog(@"Start recording error: %@", error);
    }

    // Configure video compression settings.
    NSDictionary* videoCompressionProps = [NSDictionary dictionaryWithObjectsAndKeys:
                                           [NSNumber numberWithDouble:1024.0 * 1024.0], AVVideoAverageBitRateKey,
                                           [NSNumber numberWithInt:10], AVVideoMaxKeyFrameIntervalKey,
                                           nil];

    // Configure video settings.
    NSDictionary* videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:frameSize.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:frameSize.height], AVVideoHeightKey,
                                   videoCompressionProps, AVVideoCompressionPropertiesKey,
                                   nil];

    // Create the video writer that is used to append video frames to the output video
    // stream being written by videoWriter.
    videoWriterInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings] retain];
    //NSParameterAssert(videoWriterInput);
    videoWriterInput.expectsMediaDataInRealTime = YES;

    // Configure settings for the pixel buffer adaptor.
    NSDictionary* bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                      [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey,
                                      nil];

    // Create the pixel buffer adaptor, used to convert the incoming video frames and
    // append them to videoWriterInput.
    avAdaptor = [[AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput sourcePixelBufferAttributes:bufferAttributes] retain];

    [videoWriter addInput:videoWriterInput];

    // <pb> Added audio input.
    sample_position_ = 0;
    AudioChannelLayout acl;
    bzero(&acl, sizeof(acl));
    acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

    NSDictionary* audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                         [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
                                         [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                                         [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                         [NSNumber numberWithInt:64000], AVEncoderBitRateKey,
                                         [NSData dataWithBytes:&acl length:sizeof(acl)], AVChannelLayoutKey,
                                         nil];

    audioWriterInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                           outputSettings:audioOutputSettings] retain];
    //audioWriterInput.expectsMediaDataInRealTime = YES;
    audioWriterInput.expectsMediaDataInRealTime = NO; // seems to work slightly better

    [videoWriter addInput:audioWriterInput];

    rateDenominator = 44100;
    rateMultiplier = rateDenominator / frameRate;

    // Add our video input stream source to the video writer and start it.
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:CMTimeMake(0, rateDenominator)];

    isRecording = true;
    return YES;
}
- (int)writeAudioBuffer:(float *)samples sampleCount:(size_t)n channelCount:(size_t)nchans {
    if (![self waitForAudioWriterReadiness]) {
        NSLog(@"WARNING: writeAudioBuffer dropped frame after wait limit reached.");
        return 0;
    }

    //NSLog(@"writeAudioBuffer");
    OSStatus status;
    CMBlockBufferRef bbuf = NULL;
    CMSampleBufferRef sbuf = NULL;

    size_t buflen = n * nchans * sizeof(float);

    // Create sample buffer for adding to the audio input.
    status = CMBlockBufferCreateWithMemoryBlock(
        kCFAllocatorDefault,
        samples,
        buflen,
        kCFAllocatorNull,
        NULL,
        0,
        buflen,
        0,
        &bbuf);

    if (status != noErr) {
        NSLog(@"CMBlockBufferCreateWithMemoryBlock error");
        return -1;
    }

    CMTime timestamp = CMTimeMake(sample_position_, 44100);
    sample_position_ += n;

    status = CMAudioSampleBufferCreateWithPacketDescriptions(kCFAllocatorDefault, bbuf, TRUE, 0, NULL, audio_fmt_desc_, 1, timestamp, NULL, &sbuf);
    if (status != noErr) {
        NSLog(@"CMSampleBufferCreate error");
        return -1;
    }

    BOOL r = [audioWriterInput appendSampleBuffer:sbuf];
    if (!r) {
        NSLog(@"appendSampleBuffer error");
    }

    CFRelease(bbuf);
    CFRelease(sbuf);

    return 0;
}
Any ideas on what's going on here?
Should I be creating/appending the samples in a different way?
Is it something to do with the AAC compression? It doesn't work at all if I try to use uncompressed audio (it throws an exception).
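If I were to try uncompressed output again, I believe the settings dictionary would need the full set of LPCM keys, and presumably a file type that accepts PCM (e.g. AVFileTypeQuickTimeMovie rather than AVFileTypeMPEG4). An untested sketch of what I mean:

// Untested sketch: output settings for an uncompressed (LPCM) audio writer input.
NSDictionary* lpcmOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
    [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
    nil];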
As far as I can tell, I'm calculating the PTS correctly. Why is this even required for the audio channel? Shouldn't the video be synced to the audio clock?
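For what it's worth, the timestamp arithmetic is just counting samples in a 1/44100 timescale, so consecutive buffers should butt up against each other exactly:

// With fixed 1024-sample buffers, for example:
//   buffer 0: CMTimeMake(0,    44100)  ->  0.000 s
//   buffer 1: CMTimeMake(1024, 44100)  -> ~0.0232 s
//   buffer 2: CMTimeMake(2048, 44100)  -> ~0.0464 s
// sample_position_ advances by exactly n frames per call, so each buffer's PTS
// is the end time of the previous buffer.
CMTime timestamp = CMTimeMake(sample_position_, 44100);
sample_position_ += n;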
I've tried supplying the audio in fixed blocks of 1024 samples, since that's the size of the DCT used by the AAC compressor. It makes no difference.
I've tried pushing all the blocks in one go before writing any video. Doesn't work.
I've tried using CMSampleBufferCreate for the remaining blocks and CMAudioSampleBufferCreateWithPacketDescriptions only for the first block. No luck.
And I've tried combinations of these. Still not right.
It looks like:
audioWriterInput.expectsMediaDataInRealTime = YES;
is essential, otherwise it gets confused. Perhaps this is because the video input was set up with the same flag. In addition, CMBlockBufferCreateWithMemoryBlock does not copy the sample data, even if you pass it the kCMBlockBufferAlwaysCopyDataFlag flag.
So you can create a block buffer with that and then copy it using CMBlockBufferCreateContiguous, to make sure you end up with a block buffer that has its own copy of the audio data. Otherwise it keeps referencing the memory you originally passed in, and things get messed up.
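To make that concrete, this is roughly what the audio-append path looks like with the copy step added (a sketch only: error logging is trimmed, and I'm passing n frames as numSamples here, whereas the code above passes 1):

// Sketch: reference the caller's samples, force a contiguous copy so the writer
// owns the audio bytes, then wrap the copy in a CMSampleBuffer and append it.
CMBlockBufferRef bbuf = NULL;
CMBlockBufferRef bbufCopy = NULL;
CMSampleBufferRef sbuf = NULL;

OSStatus status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                     samples, buflen,
                                                     kCFAllocatorNull, NULL,
                                                     0, buflen, 0, &bbuf);
if (status == noErr) {
    // kCFAllocatorDefault as the block allocator so the contiguous buffer gets
    // its own allocation instead of referencing 'samples'.
    status = CMBlockBufferCreateContiguous(kCFAllocatorDefault, bbuf,
                                           kCFAllocatorDefault, NULL,
                                           0, buflen, 0, &bbufCopy);
}
if (status == noErr) {
    CMTime timestamp = CMTimeMake(sample_position_, 44100);
    sample_position_ += n;
    status = CMAudioSampleBufferCreateWithPacketDescriptions(kCFAllocatorDefault,
                                                             bbufCopy, TRUE, NULL, NULL,
                                                             audio_fmt_desc_,
                                                             (CMItemCount)n, timestamp,
                                                             NULL, &sbuf);
}
if (status == noErr && ![audioWriterInput appendSampleBuffer:sbuf]) {
    NSLog(@"appendSampleBuffer error");
}
if (bbuf)     CFRelease(bbuf);
if (bbufCopy) CFRelease(bbufCopy);
if (sbuf)     CFRelease(sbuf);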
That looks fine, although I would have CMBlockBufferCreateWithMemoryBlock copy the samples. Could your code otherwise not know when audioWriterInput has finished with them?
Shouldn't kAudioFormatFlagIsAlignedHigh be kAudioFormatFlagIsPacked?
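For reference, a packed, interleaved 32-bit float stereo stream would be flagged like this (a sketch of what that comment suggests, reusing audioFormat and nchannels from the init code above):

// 32-bit float samples fill all 32 bits of each channel, so the packed flag
// (rather than high-aligned) describes this layout.
audioFormat.mFormatFlags    = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
audioFormat.mBitsPerChannel = 8 * sizeof(float);
audioFormat.mBytesPerFrame  = sizeof(float) * nchannels;   // interleaved stereo: 8 bytes
audioFormat.mBytesPerPacket = audioFormat.mBytesPerFrame;  // 1 frame per packet for LPCM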