I've been trying to write video + audio using AVAssetWriter and AVAssetWriterInputs.
I've seen plenty of people on this forum say they were able to do it, but it isn't working for me. If I write just the video, the code does its job perfectly. As soon as I add the audio, the output file is corrupt and won't play.
Here is part of my code:
Setting up the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput:
NSError *error = nil;
// Setup the video input
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType: AVMediaTypeVideo];
// Create a device input with the device and add it to the session.
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
// Setup the video output
_videoOutput = [[AVCaptureVideoDataOutput alloc] init];
_videoOutput.alwaysDiscardsLateVideoFrames = NO;
_videoOutput.videoSettings =
[NSDictionary dictionaryWithObject:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
// Setup the audio input
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType: AVMediaTypeAudio];
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error ];
// Setup the …

I'm trying to make a movie file from an array of pictures and an audio file. To build the movie from the picture array I used zoul's great post here. Everything works perfectly and the movie looks just like my pictures. However, when I try to add an audio track I run into a lot of problems. To make things clear, here is my code:
The picture array and the song file are ready when I call this method:
-(void) writeImagesToMovieAtPath:(NSString *) path withSize:(CGSize) size
{
NSString *documentsDirectoryPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSArray *dirContents = [[NSFileManager defaultManager] directoryContentsAtPath:documentsDirectoryPath];
for (NSString *tString in dirContents) {
if ([tString isEqualToString:@"essai.mp4"])
{
[[NSFileManager defaultManager]removeItemAtPath:[NSString stringWithFormat:@"%@/%@",documentsDirectoryPath,tString] error:nil];
}
}
NSLog(@"Write Started");
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:path] fileType:AVFileTypeMPEG4
error:&error];
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];
AudioChannelLayout channelLayout;
memset(&channelLayout, 0, sizeof(AudioChannelLayout));
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
NSDictionary …

I have to export a movie from my iPhone app containing UIImages from an NSArray, and add some audio files in .caf format that have to start at pre-specified times. Using AVAssetWriter (after going through many questions and answers on this and other sites), I've managed to export the video part containing the images, but I can't seem to find a way to add the audio files to complete the movie.
Here is what I have so far:
-(void) writeImagesToMovieAtPath:(NSString *) path withSize:(CGSize) size
{
NSLog(@"Write Started");
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:path] fileType:AVFileTypeQuickTimeMovie
error:&error];
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* videoWriterInput = [[AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings] retain];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(videoWriterInput);
NSParameterAssert([videoWriter canAddInput:videoWriterInput]);
videoWriterInput.expectsMediaDataInRealTime = YES;
[videoWriter addInput:videoWriterInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
//convert uiimage to CGImage.
int frameCount = 0; …

I have a function that is supposed to re-encode a video to a manageable bitrate on iPhone/iPad. Here it is: *UPDATED WORKING CODE, NOW WITH AUDIO! :)*
-(void)resizeVideo:(NSString*)pathy{
NSString *newName = [pathy stringByAppendingString:@".down.mov"];
NSURL *fullPath = [NSURL fileURLWithPath:newName];
NSURL *path = [NSURL fileURLWithPath:pathy];
NSLog(@"Write Started");
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:fullPath fileType:AVFileTypeQuickTimeMovie error:&error];
NSParameterAssert(videoWriter);
AVAsset *avAsset = [[[AVURLAsset alloc] initWithURL:path options:nil] autorelease];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:1280], AVVideoWidthKey,
[NSNumber numberWithInt:720], AVVideoHeightKey,
nil];
AVAssetWriterInput* videoWriterInput = [[AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings] retain];
NSParameterAssert(videoWriterInput);
NSParameterAssert([videoWriter canAddInput:videoWriterInput]);
videoWriterInput.expectsMediaDataInRealTime = YES;
[videoWriter addInput:videoWriterInput];
NSError *aerror = nil;
AVAssetReader *reader = [[AVAssetReader alloc] …

I want to export a movie with AVAssetWriter, but I can't figure out how to keep the video and audio tracks in sync. Exporting only the video works fine, but when I add the audio, the resulting movie behaves like this:
First I see the video (without audio), then the video freezes (showing the last image frame until the end), and after a few seconds I hear the audio.
I tried a few things with CMSampleBufferSetOutputPresentationTimeStamp (subtracting the first CMSampleBufferGetPresentationTimeStamp from the current one) on the audio, but none of it worked, and I don't think it's the right direction anyway, since the video and audio in the source movie should already be synchronized...
My setup is simple: I create an AVAssetReader and two AVAssetReaderTrackOutputs (one for video, one for audio) and add them to the AVAssetReader; then I create an AVAssetWriter and two AVAssetWriterInputs (video and audio) and add them to the AVAssetWriter... and I start everything up:
[assetReader startReading];
[assetWriter startWriting];
[assetWriter startSessionAtSourceTime:kCMTimeZero];
Then I run two queues to handle the sample buffers:
dispatch_queue_t queueVideo=dispatch_queue_create("assetVideoWriterQueue", NULL);
[assetWriterVideoInput requestMediaDataWhenReadyOnQueue:queueVideo usingBlock:^
{
while([assetWriterVideoInput isReadyForMoreMediaData])
{
CMSampleBufferRef sampleBuffer=[assetReaderVideoOutput copyNextSampleBuffer];
if(sampleBuffer)
{
[assetWriterVideoInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
} else
{
[assetWriterVideoInput markAsFinished];
dispatch_release(queueVideo);
videoFinished=YES;
break;
}
}
}];
dispatch_queue_t queueAudio=dispatch_queue_create("assetAudioWriterQueue", NULL);
[assetWriterAudioInput requestMediaDataWhenReadyOnQueue:queueAudio usingBlock:^
{
while([assetWriterAudioInput isReadyForMoreMediaData])
{
CMSampleBufferRef sampleBuffer=[assetReaderAudioOutput copyNextSampleBuffer];
if(sampleBuffer)
{
[assetWriterAudioInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
} else
{ …

I have a strange memory "leak" with AVAssetWriterInput appendSampleBuffer. I'm writing video and audio at the same time, so I have one AVAssetWriter with two inputs, one for video and one for audio:
self.videoWriter = [[[AVAssetWriter alloc] initWithURL:[self.currentVideo currentVideoClipLocalURL]
fileType:AVFileTypeMPEG4
error:&error] autorelease];
...
self.videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
self.videoWriterInput.expectsMediaDataInRealTime = YES;
[self.videoWriter addInput:self.videoWriterInput];
...
self.audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
outputSettings:audioSettings];
self.audioWriterInput.expectsMediaDataInRealTime = YES;
[self.videoWriter addInput:self.audioWriterInput];
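The audioSettings dictionary referenced in that excerpt isn't shown. For context, a typical AAC audio configuration looks like the following minimal sketch; all concrete values (mono, 44.1 kHz, 64 kbps) are illustrative assumptions, not taken from the question:

```objc
// Sketch: an AAC output-settings dictionary for an audio AVAssetWriterInput.
AudioChannelLayout acl;
memset(&acl, 0, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;

NSDictionary *audioSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatMPEG4AAC],   AVFormatIDKey,
    [NSNumber numberWithInt:1],                      AVNumberOfChannelsKey,
    [NSNumber numberWithFloat:44100.0f],             AVSampleRateKey,
    [NSNumber numberWithInt:64000],                  AVEncoderBitRateKey,
    [NSData dataWithBytes:&acl length:sizeof(acl)],  AVChannelLayoutKey,
    nil];
```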
I start writing, and on the surface everything is fine. The video and audio get written and stay aligned, and so on. However, I ran my code through the Allocations instrument and noticed the following:

The audio bytes are retained in memory, as I'll demonstrate in a second; that is the growth in memory. The audio bytes are only freed after I call [self.videoWriter endSessionAtSourceTime:...], at which point you can see the dramatic drop in memory usage. Here is my audio writing code, which is dispatched as a block onto a serial queue:
@autoreleasepool
{
// The objects that will hold the audio data
CMSampleBufferRef sampleBuffer;
CMBlockBufferRef blockBuffer1;
CMBlockBufferRef blockBuffer2;
size_t nbytes = numSamples * asbd_.mBytesPerPacket;
OSStatus status = noErr;
status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
data,
nbytes,
kCFAllocatorNull,
NULL,
0,
nbytes,
kCMBlockBufferAssureMemoryNowFlag,
&blockBuffer1); …

I'm using an AVAssetWriter, and it works perfectly on iOS 6.
The problem is that on the iOS 7 GM, the completion handler is never invoked when I call finishWritingWithCompletionHandler.
I call markAsFinished, and even endSessionAtSourceTime, before calling finishWritingWithCompletionHandler.
It works fine on iOS 6.
What's more, on iOS 7 it works for a while, and then it stops working altogether.
I don't know why, but if I trigger the method from an alert view, it works. So I tried performSelectorOnMainThread and performSelectorInBackground, but neither helped.
Any ideas?
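For context, the teardown sequence being described normally looks like the following minimal sketch; the writer and input names are illustrative, not taken from the question:

```objc
// Sketch: finish both inputs, then finalize the file asynchronously.
// finishWritingWithCompletionHandler returns immediately; the handler
// fires once the file has actually been finalized (or writing failed).
[videoWriterInput markAsFinished];
[audioWriterInput markAsFinished];
[videoWriter finishWritingWithCompletionHandler:^{
    if (videoWriter.status == AVAssetWriterStatusCompleted) {
        NSLog(@"movie written successfully");
    } else {
        NSLog(@"writing failed: %@", videoWriter.error);
    }
}];
```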
I want to take some video frames and encode them into a video. That seems to be exactly what AVAssetWriter is for, but no matter how hard I look at the documentation and Google, I can't find any way to actually use it. From the docs it looks like I need an input (AVAssetWriterInput) to feed the writer from. Fine. But the AVAssetWriterInput class is abstract, and the only subclass I know of in 4.1 is AVAssetWriterInputPixelBufferAdaptor, which itself requires an AVAssetWriterInput in its initializer...? Am I missing something obvious?
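For what it's worth, AVAssetWriterInput is not meant to be subclassed: it is created directly through a factory method, and AVAssetWriterInputPixelBufferAdaptor merely wraps that input, as the other excerpts above do. A minimal sketch (settings values are illustrative):

```objc
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    AVVideoCodecH264,             AVVideoCodecKey,
    [NSNumber numberWithInt:640], AVVideoWidthKey,
    [NSNumber numberWithInt:480], AVVideoHeightKey,
    nil];
// Create the input directly; no subclassing required.
AVAssetWriterInput *writerInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings];
// The adaptor wraps the input so CVPixelBufferRefs can be appended to it.
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                   sourcePixelBufferAttributes:nil];
```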
I've been trying to figure this out, without success.
I can write the video output with no problem... but as soon as I try to introduce a second AVAssetWriterInput to carry the audio, the final QuickTime movie is jumpy, frames are dropped here and there, and the audio keeps cutting in and out.
Thanks - wg
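One pattern that often helps with interleaving problems like this is to append a capture buffer only when the corresponding writer input reports it is ready. A minimal sketch of the capture delegate callback; the _videoOutput, _videoWriterInput, _audioWriterInput, and _assetWriter ivars are assumptions, not from the excerpt:

```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Pick the writer input that matches this callback's media type.
    AVAssetWriterInput *input = (captureOutput == _videoOutput)
                                    ? _videoWriterInput : _audioWriterInput;
    // Appending while the input is not ready is a common cause of dropped
    // frames and stuttering audio, so drop the buffer instead of blocking.
    if (input.readyForMoreMediaData) {
        if (![input appendSampleBuffer:sampleBuffer]) {
            NSLog(@"append failed: %@", _assetWriter.error);
        }
    }
}
```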