I'm playing an .m3u8 file with AVPlayer, and using AVAssetImageGenerator to extract an image from it with the following code:
AVURLAsset *asset1 = [[AVURLAsset alloc] initWithURL:mp.contentURL options:nil];
AVAssetImageGenerator *generate1 = [[AVAssetImageGenerator alloc] initWithAsset:asset1];
generate1.appliesPreferredTrackTransform = YES;
NSError *err = nil;
CMTime time = CMTimeMake(1, 2); // 0.5 seconds into the video
CGImageRef oneRef = [generate1 copyCGImageAtTime:time actualTime:NULL error:&err];
img = [[UIImage alloc] initWithCGImage:oneRef];
CGImageRelease(oneRef); // copyCGImageAtTime returns a +1 CGImage; release it to avoid a leak
It always gives me this error:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo=0x7fb4e30cbfa0 {NSUnderlyingError=0x7fb4e0e28530 "The operation could not be completed. (OSStatus error -12782.)", NSLocalizedFailureReason=An unknown error occurred (-12782), NSLocalizedDescription=The operation could not be completed}
It works fine for mp4, mov, and every other major video format URL, but not for m3u8. Any ideas?
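A likely explanation: AVAssetImageGenerator needs file-based tracks to pull frames from, and it is known not to work with HTTP Live Streaming assets such as .m3u8 playlists, which is consistent with the -12782 OSStatus. A common workaround is to grab frames from the playing AVPlayerItem via AVPlayerItemVideoOutput instead. A minimal Swift sketch, assuming streamURL is your .m3u8 URL and playback has already begun:

import AVFoundation
import UIKit

// Attach a video output to the player item so rendered frames can be copied out.
let output = AVPlayerItemVideoOutput(pixelBufferAttributes:
    [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
let item = AVPlayerItem(url: streamURL) // streamURL: assumed .m3u8 URL
item.add(output)
let player = AVPlayer(playerItem: item)
player.play()

// Later (e.g. from a CADisplayLink callback), once the stream is rendering:
let itemTime = output.itemTime(forHostTime: CACurrentMediaTime())
if output.hasNewPixelBuffer(forItemTime: itemTime),
   let buffer = output.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil) {
    let ciImage = CIImage(cvPixelBuffer: buffer)
    if let cgImage = CIContext().createCGImage(ciImage, from: ciImage.extent) {
        let frame = UIImage(cgImage: cgImage) // the extracted frame
    }
}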
I'm working on an OS X app that uses AVAssetImageGenerator.generateCGImagesAsynchronouslyForTimes.
It usually works fine. Occasionally, though, the thumbnail I get back contains only the first few rows of pixels with the rest green, and sometimes the whole image is varying shades of green. It's been very hard to track down because it doesn't happen consistently, but when it does, roughly half of the thumbnails are affected. This is the image I expect to see:
But this is what often happens:
Here's the code I use to generate the thumbnails:
let assetGenerator = AVAssetImageGenerator(asset: AVURLAsset(URL: url))
assetGenerator.appliesPreferredTrackTransform = true
let time = CMTime(seconds: 0, preferredTimescale: 30)
let handler: AVAssetImageGeneratorCompletionHandler = { _, image, _, res, error in
defer { dispatch_group_leave(self.waitForThumbnail!) }
guard let image = image where res == .Succeeded else {
if let error = error { print(error) }
return
}
let s = CGSize(width: CGImageGetWidth(image), height: CGImageGetHeight(image))
self.thumbnail = NSImage(CGImage: image, size: s)
}
waitForThumbnail = dispatch_group_create()
dispatch_group_enter(waitForThumbnail!)
assetGenerator.maximumSize = maxThumbnailSize
assetGenerator.generateCGImagesAsynchronouslyForTimes([NSValue(CMTime: …
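For reference, the same pattern in current Swift syntax (the surrounding type, url, and maxThumbnailSize are assumed from the snippet above):

import AVFoundation
import AppKit

let generator = AVAssetImageGenerator(asset: AVURLAsset(url: url))
generator.appliesPreferredTrackTransform = true
generator.maximumSize = maxThumbnailSize // assumed, as above

let group = DispatchGroup()
group.enter()
let times = [NSValue(time: CMTime(seconds: 0, preferredTimescale: 30))]
generator.generateCGImagesAsynchronously(forTimes: times) { _, image, _, result, error in
    defer { group.leave() }
    guard result == .succeeded, let image = image else {
        if let error = error { print(error) }
        return
    }
    let size = CGSize(width: image.width, height: image.height)
    self.thumbnail = NSImage(cgImage: image, size: size) // same assignment as above
}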
So far, I have the following:
let assetUrl = NSURL.URLWithString(self.targetVideoString)
let asset: AVAsset = AVAsset.assetWithURL(assetUrl) as AVAsset
let imageGenerator = AVAssetImageGenerator(asset: asset);
let time : CMTime = CMTimeMakeWithSeconds(1.0, 1)
let actualTime : CMTime
let myImage: CGImage = imageGenerator.copyCGImageAtTime(requestedTime: time, actualTime: actualTime, error: <#NSErrorPointer#>)
The last line is where I get lost... I just want to grab an image at the 1.0-second mark.
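For what it's worth, a minimal sketch of that last call in current Swift, where copyCGImage(at:actualTime:) throws rather than taking an NSErrorPointer (asset as defined above):

import AVFoundation
import UIKit

let imageGenerator = AVAssetImageGenerator(asset: asset)
imageGenerator.appliesPreferredTrackTransform = true
var actualTime = CMTime.zero
do {
    // Grab the frame nearest the 1.0 s mark.
    let cgImage = try imageGenerator.copyCGImage(at: CMTime(seconds: 1.0, preferredTimescale: 600),
                                                 actualTime: &actualTime)
    let myImage = UIImage(cgImage: cgImage)
} catch {
    print("Frame grab failed: \(error)")
}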
I'm trying to build a video gallery.
To display the videos I'm using a UICollectionView. Each UICollectionViewCell has a background showing the video's thumbnail. To generate the thumbnails I use the following logic:
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:url options:nil];
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.appliesPreferredTrackTransform = YES;
CMTime time = CMTimeMakeWithSeconds(0, 15); // request the first frame (time 0, timescale 15)
AVAssetImageGeneratorCompletionHandler handler = ^(CMTime timeRequested, CGImageRef image, CMTime timeActual, AVAssetImageGeneratorResult result, NSError *error)
{
NSLog(@"handler^()");
if (result == AVAssetImageGeneratorSucceeded)
{
thumbnail = [UIImage imageWithCGImage: image];
success(thumbnail);
}
else
{
failure(error);
}
};
CGSize maximumSize = CGSizeMake(CLIPBOARD_COLLECTION_VIEW_CELL_WIDTH, CLIPBOARD_COLLECTION_VIEW_CELL_HEIGHT);
generator.maximumSize = maximumSize;
NSLog(@"generateCGImagesAsynchronouslyForTimes:");
[generator generateCGImagesAsynchronouslyForTimes:[NSArray arrayWithObject:[NSValue valueWithCMTime:time]] completionHandler:handler];
I've noticed that generateCGImagesAsynchronouslyForTimes doesn't behave entirely asynchronously: there's a noticeable pause around the method call itself. This causes a big delay when my table view cells load. If I comment out the line [generator generateCGImagesAsynchronouslyForTimes:[NSArray arrayWithObject:[NSValue valueWithCMTime:time]] completionHandler:handler] …
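One hedged guess at the lag: the generator can't produce anything until the asset's tracks are loaded, and that loading can stall the calling thread. Loading the keys asynchronously first keeps cell configuration snappy. A sketch in Swift, with url and the requested time assumed from the code above:

import AVFoundation

let asset = AVURLAsset(url: url)
asset.loadValuesAsynchronously(forKeys: ["tracks", "duration"]) {
    var error: NSError?
    guard asset.statusOfValue(forKey: "tracks", error: &error) == .loaded else { return }

    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true
    let times = [NSValue(time: CMTime(seconds: 0, preferredTimescale: 15))]
    generator.generateCGImagesAsynchronously(forTimes: times) { _, image, _, result, _ in
        guard result == .succeeded, let image = image else { return }
        DispatchQueue.main.async {
            // hand UIImage(cgImage: image) to the cell here
        }
    }
}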
In my app I create videos from single images. Everything works: the videos are assembled correctly, with the right size and orientation. They display correctly in the Apple Photos app, in MPMoviePlayer, and from the sandbox directory where I save them.
The problem appears when I try to get a thumbnail from the movie. The orientation comes out wrong and I don't know how to fix it. I've seen that there is a preferredTransform property, but the result is the same for landscape and portrait videos.
The URL I'm using is a sandbox directory path. Here's the snippet:
- (void) setVideoPath:(NSString *)videoPath {
if (videoPath == _videoPath) {
return;
}
_videoPath = videoPath;
AVAsset *asset = [AVAsset assetWithURL:[NSURL fileURLWithPath:_videoPath]];
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc]initWithAsset:asset];
CMTime time = CMTimeMake(1, 1);
CGImageRef imageRef = [imageGenerator copyCGImageAtTime:time actualTime:NULL error:NULL];
UIImage *thumbnail = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
self.videoImageView.image = thumbnail;
}
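One thing that stands out, offered as a hedged guess: the snippet never sets appliesPreferredTrackTransform, so the generator ignores the video track's preferredTransform entirely. In Swift terms, the two lines that usually fix sideways thumbnails:

import AVFoundation

let generator = AVAssetImageGenerator(asset: asset)
generator.appliesPreferredTrackTransform = true // bake the track's preferredTransform into the output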
I need to take a video, convert every frame to an image, and save those images to disk. I'd like to use AVAssetImageGenerator for efficiency, with code similar to the following:
The problem is that I don't know when all of the image generation has completed, but I need to act only after every frame has been written to disk. For example:
assetGenerator.generateCGImagesAsynchronously(forTimes: frameTimes, completionHandler: { (requestedTime, image, actualTime, result, error) in
// 1. Keep a reference to each image
// 2. Wait until all images are generated
// 3. Process images as a set
})
It's step 2 above that's stumping me. I suppose I could count the number of times the completion handler is called and fire the appropriate method once the count equals the number of frames.
But I'd like to know whether the API itself offers a way to learn when every frame has been processed. Maybe it's just something I've missed? Any guidance or suggestions would be appreciated.
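There's no built-in "batch finished" callback that I know of, but since the completion handler fires exactly once per requested time (whether it succeeded, failed, or was cancelled), a DispatchGroup sized to the request works. A sketch, with frameTimes and assetGenerator as above:

import AVFoundation

let group = DispatchGroup()
frameTimes.forEach { _ in group.enter() } // one enter per requested frame

var images: [CGImage] = []
assetGenerator.generateCGImagesAsynchronously(forTimes: frameTimes) { _, image, _, result, _ in
    defer { group.leave() } // one leave per callback, success or not
    if result == .succeeded, let image = image {
        images.append(image) // callbacks arrive serially; add a lock if that assumption worries you
    }
}

group.notify(queue: .main) {
    // every frame has been handled; process the set here
}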
I'm using AVAssetImageGenerator to create an image from the last frame of a video. This usually works fine, but copyCGImageAtTime sometimes fails with the error:
NSLocalizedDescription = "Cannot Open";
NSLocalizedFailureReason = "This media cannot be used.";
NSUnderlyingError = "Error Domain=NSOSStatusErrorDomain Code=-12431";
I'm verifying that the AVAsset is not nil, and I'm pulling the CMTime directly from the asset, so I don't understand why this keeps happening. It only happens when trying to grab the last frame; if I use kCMTimeZero instead, everything seems to work fine.
- (void)getLastFrameFromAsset:(AVAsset *)asset completionHandler:(void (^)(UIImage *image))completion
{
NSAssert(asset, @"Tried to generate last frame from nil asset");
AVAssetImageGenerator *gen = [[AVAssetImageGenerator alloc] initWithAsset:asset];
gen.requestedTimeToleranceBefore = kCMTimeZero;
gen.requestedTimeToleranceAfter = kCMTimeZero;
gen.appliesPreferredTrackTransform = YES;
CMTime time = [asset duration];
NSError *error = nil;
CMTime actualTime;
CGImageRef imageRef = [gen copyCGImageAtTime:time actualTime:&actualTime error:&error];
UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
NSAssert(image, …
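A hedged guess at the cause: [asset duration] can land slightly past the last video sample (an audio track often runs a little longer than the video track), and with both tolerances pinned to kCMTimeZero the generator has no frame it is allowed to return. Relaxing the "before" tolerance lets it step back to the nearest real frame. In Swift:

import AVFoundation
import UIKit

let generator = AVAssetImageGenerator(asset: asset)
generator.appliesPreferredTrackTransform = true
generator.requestedTimeToleranceAfter = .zero
generator.requestedTimeToleranceBefore = .positiveInfinity // may step back to an earlier frame
do {
    let cgImage = try generator.copyCGImage(at: asset.duration, actualTime: nil)
    let lastFrame = UIImage(cgImage: cgImage)
} catch {
    print("Last-frame grab failed: \(error)")
}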
I'm capturing video with an AVCaptureSession whose session preset is
session!.sessionPreset = AVCaptureSessionPreset1280x720
and I'm using this code to extract images from the video:
func videoThumbnails(url: NSURL ){
let asset = AVAsset(URL: url)
let imageGenerator = AVAssetImageGenerator(asset: asset)
imageGenerator.appliesPreferredTrackTransform = true
imageGenerator.maximumSize = CGSizeMake(720, 1280)
imageGenerator.requestedTimeToleranceAfter = kCMTimeZero
var time = asset.duration
let totalTime = time
var frames = 0.0
let singleFrame = Double(time.seconds) / 4
while (frames < totalTime.seconds) {
frames += singleFrame
time.value = (Int64(frames)) * Int64(totalTime.timescale)
do {
let imageRef = try imageGenerator.copyCGImageAtTime(time, actualTime: nil)
self.sendImage.append(UIImage(CGImage: imageRef))
}
catch let error as NSError
{ …
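One hedged observation about the loop above: time.value = Int64(frames) * Int64(totalTime.timescale) truncates the fractional part of frames, so whenever singleFrame isn't a whole number, several requests collapse onto the same time value. Building each CMTime from Double seconds avoids the truncation. A sketch in current Swift, with asset, imageGenerator, and sendImage assumed from the code above:

import AVFoundation
import UIKit

let frameCount = 4
for i in 1...frameCount {
    let seconds = asset.duration.seconds * Double(i) / Double(frameCount)
    let time = CMTime(seconds: seconds, preferredTimescale: 600) // no Int64 truncation
    do {
        let imageRef = try imageGenerator.copyCGImage(at: time, actualTime: nil)
        sendImage.append(UIImage(cgImage: imageRef))
    } catch {
        print("Frame at \(seconds)s failed: \(error)")
    }
}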
I'm trying to read some frames from a video. The video is 640x480, but the images I get back are only 480x360. Is there a way to get the images at their original size?
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
NSString * mediaType = [info objectForKey:UIImagePickerControllerMediaType];
if ([mediaType isEqualToString:(NSString *)kUTTypeMovie])
[self readMovieFrames:[info objectForKey:UIImagePickerControllerMediaURL]];
[self dismissViewControllerAnimated:YES completion:nil];
}
- (void)readMovieFrames:(NSURL *)url
{
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithURL:url];
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:playerItem.asset];
imageGenerator.requestedTimeToleranceAfter = kCMTimeZero;
imageGenerator.requestedTimeToleranceBefore = kCMTimeZero;
imageGenerator.appliesPreferredTrackTransform = YES;
//…
CGImageRef imageRef = [imageGenerator copyCGImageAtTime:requestTime actualTime:&actualTime error:&error];
UIImage *img = [UIImage imageWithCGImage:imageRef];
UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil);
CGImageRelease(imageRef);
}
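A hedged explanation: UIImagePickerController typically hands back a re-encoded copy of the picked movie (its default videoQuality is medium quality, which commonly comes out at 480x360), so the generator is faithfully reading a smaller file. One way to reach the original pixels is to fetch the asset through the Photos framework instead. A Swift sketch, assuming you already have the video's PHAsset:

import Photos
import AVFoundation

func originalAsset(for phAsset: PHAsset, completion: @escaping (AVAsset?) -> Void) {
    let options = PHVideoRequestOptions()
    options.version = .original // skip any edited/transcoded rendition
    PHImageManager.default().requestAVAsset(forVideo: phAsset, options: options) { avAsset, _, _ in
        completion(avAsset) // run AVAssetImageGenerator against this asset
    }
}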
According to the documentation, generateCGImagesAsynchronously takes an array of NSValues for the requested times in the video, generates the frames, and returns them through a callback.
I generate the list of time values with the following code:
var values : [NSValue] = []
let frameDuration = CMTimeMake(1, timeScale)
for i in 0..<duration{
let lastFrameTime = CMTimeMake(Int64(i), timeScale)
let presentationTime = (i == 0) ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)
values.append(NSValue(time: presentationTime))
//Next two lines of codes are just to cross check the output
let image = try! imageGenerator.copyCGImage(at: presentationTime, actualTime: nil)
let imageUrl = FileManagerUtil.getTempFileName(parentFolder: FrameExtractor.EXTRACTED_IMAGE, fileNameWithExtension: "\(Constants.FRAME_SUFFIX)\(i)\(".jpeg")")
}
As you can see in the code above, I cross-checked the result using the synchronous method, and I can confirm that the values array holds the correct time references.
However, when I pass the same array to the generateCGImagesAsynchronously method, I get the same frame repeated 10 times across different timestamps. That is, if my video is 10 seconds long, I get 300 frames (at 30 fps), but the frames for the first second are each repeated …
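A hedged suspicion: AVAssetImageGenerator's requestedTimeToleranceBefore/After default to positive infinity, so the asynchronous batch is free to snap many nearby request times to the same convenient (key)frame. Pinning both tolerances to zero usually makes the returned frames unique, at the cost of slower exact seeks:

import AVFoundation

imageGenerator.requestedTimeToleranceBefore = .zero
imageGenerator.requestedTimeToleranceAfter = .zero // force exact-time decoding
imageGenerator.generateCGImagesAsynchronously(forTimes: values) { requestedTime, image, actualTime, result, error in
    guard result == .succeeded, let image = image else { return }
    // actualTime should now track requestedTime frame-for-frame
}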