I want to write an iPhone app that captures 2 photos in rapid succession, and I'd like to know whether that is feasible. The apps on the market only seem to grab low-resolution still frames from the video stream, so I'm wondering whether rapid capture of full-resolution photos is possible.
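For what it's worth, a minimal sketch of one way this could work, assuming an already-configured AVCaptureSession with a camera input and an AVCapturePhotoOutput attached (the function and parameter names here are illustrative, not from the question):

```swift
import AVFoundation

// Sketch: AVCapturePhotoOutput delivers full-resolution stills, not video-stream
// frames; two capturePhoto(with:delegate:) calls in quick succession are queued
// and captured back-to-back by the output.
func captureTwoPhotos(from photoOutput: AVCapturePhotoOutput,
                      delegate: AVCapturePhotoCaptureDelegate) {
    for _ in 0..<2 {
        // Each capture requires a fresh AVCapturePhotoSettings instance
        let settings = AVCapturePhotoSettings()
        photoOutput.capturePhoto(with: settings, delegate: delegate)
    }
}
```

The delegate then receives each finished still in `photoOutput(_:didFinishProcessingPhoto:error:)`.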
I want to extract an image from a recorded video. I'm using the code below, but it produces no result; the image never appears.
AVURLAsset* asset = [AVURLAsset URLAssetWithURL:[NSURL fileURLWithPath:videoPath] options:nil]; // use fileURLWithPath:, not URLWithString:, for a local path
AVAssetImageGenerator* imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
imageGenerator.appliesPreferredTrackTransform = YES; // keep the frame upright
NSError* error = nil;
CGImageRef frame = [imageGenerator copyCGImageAtTime:CMTimeMake(0, 1) actualTime:NULL error:&error];
UIImage* image = [UIImage imageWithCGImage:frame];
CGImageRelease(frame); // copyCGImageAtTime: returns a +1 retained image
[videoFrame setImage:image];
Please help me extract an image from a recorded video in code.
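For reference, a Swift sketch of the same extraction, showing the two settings that most often cause a blank result (a file URL built with URL(fileURLWithPath:), and the preferred track transform applied); `videoPath` and `videoFrame` are assumed from the question:

```swift
import AVFoundation
import UIKit

let asset = AVAsset(url: URL(fileURLWithPath: videoPath)) // file URL, not URL(string:)
let generator = AVAssetImageGenerator(asset: asset)
generator.appliesPreferredTrackTransform = true   // keep the frame upright
generator.requestedTimeToleranceBefore = .zero    // ask for exactly t = 0,
generator.requestedTimeToleranceAfter = .zero     // not a nearby keyframe
do {
    let cgImage = try generator.copyCGImage(at: .zero, actualTime: nil)
    videoFrame.image = UIImage(cgImage: cgImage)
} catch {
    print("Frame extraction failed: \(error)")    // surfaces the real failure
}
```

Passing the error out (instead of `error:nil` as in the Objective-C version above) usually reveals why the image never appears.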
My problem is similar to "iOS UIImagePickerController result image orientation after upload", except that I need to fix the orientation of a video rather than an image. The video is also captured with UIImagePickerController, so I essentially have the video's URL. I need to convert the video to NSData to upload it to a server, but on the server the video's orientation depends on how it was recorded, just as with images.
NSURL * urlVideo;
//fix orientation here
NSData * videoDataToUpload = [NSData dataWithContentsOfURL:urlVideoFixed];
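For reference, one common approach is to re-export the clip with the track's preferredTransform baked into the pixels, so servers that ignore rotation metadata still show it upright. This is a sketch, not the asker's code: `fixedURL`, the export preset, and the 30 fps frame duration are assumptions.

```swift
import AVFoundation

func exportUpright(from url: URL, to fixedURL: URL, completion: @escaping (Bool) -> Void) {
    let asset = AVAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first,
          let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetMediumQuality) else {
        completion(false); return
    }
    let t = track.preferredTransform
    // The render size is the natural size after applying the rotation
    let rotated = track.naturalSize.applying(t)
    let renderSize = CGSize(width: abs(rotated.width), height: abs(rotated.height))

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)
    let layer = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    layer.setTransform(t, at: .zero)        // bake the rotation into the frames
    instruction.layerInstructions = [layer]

    let composition = AVMutableVideoComposition()
    composition.instructions = [instruction]
    composition.renderSize = renderSize
    composition.frameDuration = CMTime(value: 1, timescale: 30)

    export.videoComposition = composition
    export.outputURL = fixedURL
    export.outputFileType = .mp4
    export.exportAsynchronously {
        completion(export.status == .completed)
    }
}
```

After the export completes, `NSData(contentsOfURL:)` on `fixedURL` gives upload-ready data with a consistent orientation.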
Here is what I have so far, based on this tutorial: http://www.raywenderlich.com/13418/how-to-play-record-edit-videos-in-ios. It doesn't work:
- (void)videoFixOrientation {
    AVAsset *firstAsset = [AVAsset assetWithURL:[self urlVideoLocalLocation]];
    AVMutableComposition *mixComposition = [[AVMutableComposition alloc] init];
    AVMutableCompositionTrack *firstTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
    [firstTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstAsset.duration) ofTrack:[[firstAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] atTime:kCMTimeZero error:nil];
    AVMutableVideoCompositionLayerInstruction *FirstlayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:firstTrack];
    AVAssetTrack *FirstAssetTrack = [[firstAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    UIImageOrientation FirstAssetOrientation_ = UIImageOrientationUp;
    BOOL isFirstAssetPortrait_ = NO;
    CGAffineTransform firstTransform = FirstAssetTrack.preferredTransform;
    if (firstTransform.a == 0 && firstTransform.b == 1.0 && firstTransform.c …

I'm trying to get the feed from an iOS device's camera to display on screen, and I don't want to use ImagePickerController because I want more control over how the view appears on screen, and more control over the camera's aperture and sample rate.
Here is the code I'm using in my UIViewController's viewDidLoad method:
//Setting up the camera
AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
//This gets the back camera, which is the default video avcapturedevice
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *cameraInput = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (cameraInput) {
    [captureSession addInput:cameraInput];
} else {
    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Camera not found" message:@"Your device must have a camera in order to use this feature" delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil, nil];
    [alert show];
}
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
UIView* cameraView …

I'd like to know how to make my background music replay forever. Right now the music starts when the app loads, but when it ends everything goes quiet. How can I make it repeat continuously?
I'm using the AVFoundation framework.

Declaration:
var BackgroundMusic = AVAudioPlayer(contentsOfURL: NSURL(fileURLWithPath: NSBundle.mainBundle().pathForResource("De_Hofnar_Zonnestraal", ofType: "mp3")!), error: nil)
Inside viewDidLoad:
BackgroundMusic.play()
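A minimal sketch of the usual fix: AVAudioPlayer repeats indefinitely when its numberOfLoops property is negative, so set it before calling play():

```swift
BackgroundMusic.numberOfLoops = -1  // any negative value loops until stop() is called
BackgroundMusic.play()
```

A non-negative value instead plays the sound that many additional times and then stops.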
Error log:

Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVCaptureMetadataOutput setMetadataObjectTypes:] - unsupported type found. Use -availableMetadataObjectTypes.'
*** First throw call stack:
Here is availableMetadataObjectTypes from the debugger log. I don't understand why it is empty.

(lldb) po [output availableMetadataObjectTypes]
<__NSArrayM 0x810ae990>()
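For context, a hedged sketch of the usual cause: availableMetadataObjectTypes stays empty until the metadata output has been added to a session that already has a camera input, and it is always empty on the Simulator, which has no camera. Adding the output before setting the types avoids the exception (the barcode types chosen here are illustrative):

```swift
let output = AVCaptureMetadataOutput()
if session.canAddOutput(output) {
    session.addOutput(output)  // must happen before setMetadataObjectTypes
    output.setMetadataObjectsDelegate(self, queue: .main)
    output.metadataObjectTypes = [.ean13, .qr]  // now reported as available
}
```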
Here is the code:

NSError *error;
session = [[AVCaptureSession alloc] init];
[session setSessionPreset:AVCaptureSessionPresetHigh];
AVCaptureDevice* device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput* deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if ([session canAddInput:deviceInput]) {
    [session addInput:deviceInput];
}
previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [[self scannerView] layer];
[rootLayer setMasksToBounds:YES];
[previewLayer setFrame:CGRectMake(self.scannerView.frame.origin.x, self.scannerView.frame.origin.y, self.scannerView.frame.size.width, self.scannerView.frame.size.height)];
[rootLayer insertSublayer:previewLayer atIndex:0];
_labelBarcode = [[UILabel alloc] initWithFrame:CGRectMake(0, 20, 300, 40)];
_labelBarcode.backgroundColor = [UIColor darkGrayColor];
_labelBarcode.textColor = [UIColor whiteColor];
[self.scannerView …

In my app I use AVFoundation, and I call a function showCurrentAudioProgress() using NSTimer.scheduledTimerWithTimeInterval(0.1, target: self, selector: "showCurrentAudioProgress", userInfo: nil, repeats: true) in viewDidLoad. I want it to return the time in the 00:00:00 format, but I have a big problem. The whole function looks like this:
var musicPlayer = AVAudioPlayer()
func showCurrentAudioProgress() {
    self.totalTimeOfAudio = self.musicPlayer.duration
    self.currentTimeOfAudio = self.musicPlayer.currentTime
    let progressToShow = Float(self.currentTimeOfAudio) / Float(self.totalTimeOfAudio)
    self.audioProgress.progress = progressToShow
    var totalTime = self.musicPlayer.duration
    totalTime -= self.currentTimeOfAudio
    self.currentTimeTimer.text = "\(currentTimeOfAudio)"
    self.AudioDurationTime.text = "\(totalTime)"
}
How do I convert the time I receive from AVAudioPlayer into the 00:00:00 format?
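A minimal sketch of a formatter, assuming the HH:mm:ss layout implied by 00:00:00 (the function name is an assumption):

```swift
func formatTime(_ seconds: TimeInterval) -> String {
    let total = Int(seconds)
    return String(format: "%02d:%02d:%02d", total / 3600, (total % 3600) / 60, total % 60)
}

// e.g. formatTime(3671) → "01:01:11"
self.currentTimeTimer.text = formatTime(self.musicPlayer.currentTime)
self.AudioDurationTime.text = formatTime(self.musicPlayer.duration - self.musicPlayer.currentTime)
```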
AVFoundation is not adding an overlay to my video, and I'm not sure what I'm doing wrong. I've tried making the overlay completely white, but it is not placed on the video. When the video plays, it must be playing the AVMutableComposition track rather than the exporter.videoComposition I added. I don't have enough AVFoundation experience to know what is wrong.
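For reference, a hedged sketch of the piece that is usually missing in this situation: overlays render only when the exporter's video composition carries an AVVideoCompositionCoreAnimationTool; without it, the bare composition track plays. The names `videoComposition`, `exporter`, and the 1280x720 render rect below are assumptions, not the asker's code.

```swift
let videoLayer = CALayer()
let overlayLayer = CALayer()                 // e.g. the all-white test layer
let parentLayer = CALayer()
let renderRect = CGRect(x: 0, y: 0, width: 1280, height: 720)
[videoLayer, overlayLayer, parentLayer].forEach { $0.frame = renderRect }
overlayLayer.backgroundColor = UIColor.white.cgColor
parentLayer.addSublayer(videoLayer)          // video frames render into this layer
parentLayer.addSublayer(overlayLayer)        // overlay is composited on top

videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
    postProcessingAsVideoLayer: videoLayer, in: parentLayer)
exporter.videoComposition = videoComposition // must be assigned, or it is ignored
```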
AVMutableComposition *mixComposition = [[AVMutableComposition alloc] init];
// 3 - Video track
AVMutableCompositionTrack *videoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo
preferredTrackID:kCMPersistentTrackID_Invalid];
// [videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero,self.videoAsset.duration)
// ofTrack:[[self.videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
// atTime:kCMTimeZero error:nil];
CMTime insertTime = kCMTimeZero;
for (AVURLAsset *videoAsset in self.videoArray) {
    [videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration) ofTrack:[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] atTime:insertTime error:nil];
    // Updating the insertTime for the next insert
    insertTime = CMTimeAdd(insertTime, videoAsset.duration);
}
// 3.1 - Create AVMutableVideoCompositionInstruction
AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mainInstruction.timeRange = videoTrack.timeRange;
// 3.2 - Create an AVMutableVideoCompositionLayerInstruction for the video …

I'm working on an AVFoundation project where I can detect faces and add some content on top of the picture (before the photo is taken). I've already implemented the preview layer and image capture.
My question is: how do I add face detection and get the frame/position of the face object? And is it possible to add content on the preview layer so that it is captured in the picture (think of the new Snapchat filters)?
TIA
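For reference, a hedged sketch of one way to get live face bounds, assuming the capture session and preview layer from the question are already set up (`session`, `previewLayer`, and `overlayView` are assumed names):

```swift
// AVCaptureMetadataOutput can report face bounds live; the rects can then be
// converted to preview-layer coordinates to position overlays before capture.
let faceOutput = AVCaptureMetadataOutput()
if session.canAddOutput(faceOutput) {
    session.addOutput(faceOutput)  // add before setting metadataObjectTypes
    faceOutput.setMetadataObjectsDelegate(self, queue: .main)
    faceOutput.metadataObjectTypes = [.face]
}

// In the AVCaptureMetadataOutputObjectsDelegate callback:
func metadataOutput(_ output: AVCaptureMetadataOutput,
                    didOutput metadataObjects: [AVMetadataObject],
                    from connection: AVCaptureConnection) {
    for object in metadataObjects where object.type == .face {
        // Map the face rect from capture coordinates into the preview layer
        if let converted = previewLayer.transformedMetadataObject(for: object) {
            overlayView.frame = converted.bounds  // position your overlay here
        }
    }
}
```

To have the overlay appear in the captured photo as well, one approach is to draw the captured image and the overlay into the same graphics context after capture, since the preview layer itself is not part of the photo pipeline.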
I currently export the video as follows:
let exporter = AVAssetExportSession.init(asset: mixComposition, presetName: AVAssetExportPreset1280x720)
exporter?.outputURL = outputPath
exporter?.outputFileType = AVFileType.mp4
exporter?.shouldOptimizeForNetworkUse = true
exporter?.videoComposition = mainCompositionInst
A 15-second video consumes about 20 MB of data. That seems completely unacceptable compared to Snapchat's 2 MB videos.

I have already lowered the quality of the export and of the capture session (1280x720).

The video is shot with a custom camera; UIImagePickerController is not used.

AVAssetExportSession is used with its default settings.

Is there any way to reduce the video size? Thanks a lot!
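As a rough sanity check (a back-of-envelope sketch; the 2.5 Mbps target is just an example): file size ≈ bitrate × duration / 8, so the encoding bitrate, not the preset resolution, dominates the size. AVAssetExportSession presets do not expose a bitrate knob, which is why bitrate-level control usually means AVAssetWriter or a third-party exporter like the one tried in Edit 1.

```swift
let durationSec = 15.0
let observedBytes = 20.0 * 1024 * 1024
let impliedBitrate = observedBytes * 8 / durationSec   // ≈ 11 Mbps for a 20 MB clip
let targetBitrate = 2_500_000.0                         // 2.5 Mbps H.264 target
let projectedMB = targetBitrate * durationSec / 8 / 1024 / 1024
// projectedMB ≈ 4.5 MB, much closer to Snapchat-sized output
```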
编辑1: 我尝试使用此库:https : //cocoapods.org/pods/NextLevelSessionExporter
Unfortunately, it causes sizing problems and drops my audio:
// Creating exporter
let exporter = NextLevelSessionExporter(withAsset: mixComposition)
exporter.outputURL = outputPath
exporter.outputFileType = AVFileType.mp4
exporter.videoComposition = mainCompositionInst
let compressionDict: [String: Any] = [
AVVideoAverageBitRateKey: NSNumber(integerLiteral: 2500000),
AVVideoProfileLevelKey: AVVideoProfileLevelH264BaselineAutoLevel as String,
]
exporter.videoOutputConfiguration = [
AVVideoCodecKey: AVVideoCodecType.h264,
AVVideoWidthKey: NSNumber(integerLiteral: 1280),
AVVideoHeightKey: NSNumber(integerLiteral: 720),
AVVideoScalingModeKey: AVVideoScalingModeResizeAspectFill,
AVVideoCompressionPropertiesKey: compressionDict
] …