Geo*_*ver 15 avcapturesession swift
I need some guidance on how to capture video without using UIImagePicker. The video needs to start and stop on a button tap, and the data then has to be saved to NSDocumentDirectory. I am new to Swift, so any help is welcome.

The part of the code I need help with is starting and stopping a video session and turning it into data. I built a photo-taking version that runs captureStillImageAsynchronouslyFromConnection and saves the data to NSDocumentDirectory. I have set up a video capture session and have the code ready to save the data, but I don't know how to get the data out of the session.
// Inside a UIViewController subclass; requires import UIKit and import AVFoundation.
// screenWidth / screenHeight are defined elsewhere in my code.

var previewLayer: AVCaptureVideoPreviewLayer?
var captureDevice: AVCaptureDevice?
var videoCaptureOutput = AVCaptureVideoDataOutput()

let captureSession = AVCaptureSession()

override func viewDidLoad() {
    super.viewDidLoad()
    captureSession.sessionPreset = AVCaptureSessionPreset640x480
    let devices = AVCaptureDevice.devices()
    for device in devices {
        if device.hasMediaType(AVMediaTypeVideo) && device.position == AVCaptureDevicePosition.Back {
            captureDevice = device as? AVCaptureDevice
            if captureDevice != nil {
                beginSession()
            }
        }
    }
}

func beginSession() {
    var err: NSError? = nil
    captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &err))
    if err != nil {
        println("Error: \(err?.localizedDescription)")
    }

    videoCaptureOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
    videoCaptureOutput.alwaysDiscardsLateVideoFrames = true
    captureSession.addOutput(videoCaptureOutput)

    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    self.view.layer.addSublayer(previewLayer)
    previewLayer?.frame = CGRectMake(0, 0, screenWidth, screenHeight)
    captureSession.startRunning()

    var startVideoBtn = UIButton(frame: CGRectMake(0, screenHeight / 2, screenWidth, screenHeight / 2))
    startVideoBtn.addTarget(self, action: "startVideo", forControlEvents: UIControlEvents.TouchUpInside)
    self.view.addSubview(startVideoBtn)

    var stopVideoBtn = UIButton(frame: CGRectMake(0, 0, screenWidth, screenHeight / 2))
    stopVideoBtn.addTarget(self, action: "stopVideo", forControlEvents: UIControlEvents.TouchUpInside)
    self.view.addSubview(stopVideoBtn)
}
I can provide more code or an explanation if needed.
lee*_*sky 15
For best results, read the Still and Video Media Capture section of the AV Foundation Programming Guide.

To process frames from an AVCaptureVideoDataOutput, you need a delegate that adopts the AVCaptureVideoDataOutputSampleBufferDelegate protocol. The delegate's captureOutput method is called every time a new frame is written. When you set the output's delegate, you must also supply the queue on which the callbacks should be invoked. It looks like this:
let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
videoCaptureOutput.setSampleBufferDelegate(myDelegate, queue: cameraQueue)
captureSession.addOutput(videoCaptureOutput)
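The delegate itself could look like the following sketch (the class name `FrameHandler` is hypothetical, and the signature is the Swift 1.x-era one to match the question's code):

```swift
import AVFoundation

// Hypothetical delegate class: it receives one CMSampleBuffer per captured
// frame, invoked on the queue passed to setSampleBufferDelegate above.
class FrameHandler: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       fromConnection connection: AVCaptureConnection!) {
        // Extract the raw pixels here if you need them. Note that writing
        // frames to a movie file yourself would additionally require an
        // AVAssetWriter; this callback only hands you the individual frames.
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    }
}
```

Keep a strong reference to the delegate object; the capture output does not retain it for you.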
Note: if you just want to save the movie to a file, you will probably prefer the AVCaptureMovieFileOutput class to AVCaptureVideoDataOutput. In that case you won't need a queue, but you will still need a delegate, this time adopting the AVCaptureFileOutputRecordingDelegate protocol. (The relevant method is still called captureOutput.)
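With that approach, the setup reduces to something like this sketch (Swift 1.x-era API to match the question; `movieOutput` and `myDelegate` are illustrative names):

```swift
import AVFoundation

// Replace the AVCaptureVideoDataOutput from the question with a movie file
// output; the session writes encoded video to disk for you.
let movieOutput = AVCaptureMovieFileOutput()
captureSession.addOutput(movieOutput)

// No sample-buffer queue is needed; recording is started and stopped directly:
//   movieOutput.startRecordingToOutputFileURL(fileURL, recordingDelegate: myDelegate)
//   movieOutput.stopRecording()
```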
Here is an excerpt on AVCaptureMovieFileOutput from the relevant section of the guide mentioned above:
Starting a Recording

You start recording a QuickTime movie using startRecordingToOutputFileURL:recordingDelegate:. You need to supply a file-based URL and a delegate. The URL must not identify an existing file, because the movie file output does not overwrite existing resources. You must also have permission to write to the specified location. The delegate must conform to the AVCaptureFileOutputRecordingDelegate protocol, and must implement the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method.

AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSURL *fileURL = <#A file URL that identifies the output location#>;
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];
In the implementation of captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:, the delegate might write the resulting movie to the Camera Roll album. It should also check for any errors that may have occurred.
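Putting the pieces together for the question's use case (start/stop on button taps, file saved to NSDocumentDirectory), a sketch might look like this. Swift 1.x-era API to match the question's code; names such as `RecordingDelegate`, `movieOutput`, and `movie.mov` are illustrative, not from the original post:

```swift
import UIKit
import AVFoundation

// Hypothetical recording delegate: called once the movie file is finished.
class RecordingDelegate: NSObject, AVCaptureFileOutputRecordingDelegate {
    func captureOutput(captureOutput: AVCaptureFileOutput!,
                       didFinishRecordingToOutputFileAtURL outputFileURL: NSURL!,
                       fromConnections connections: [AnyObject]!,
                       error: NSError!) {
        if error != nil {
            println("Recording failed: \(error.localizedDescription)")
        } else {
            println("Movie saved to \(outputFileURL)")  // already in Documents
        }
    }
}

// Inside the view controller: add the output once in beginSession(), keep
// strong references, then wire the two buttons to these methods.
let movieOutput = AVCaptureMovieFileOutput()
let recordingDelegate = RecordingDelegate()

func startVideo() {
    // Build a URL inside NSDocumentDirectory; the file must not already exist.
    let documents = NSSearchPathForDirectoriesInDomains(
        .DocumentDirectory, .UserDomainMask, true)[0] as! String
    let path = documents.stringByAppendingPathComponent("movie.mov")
    movieOutput.startRecordingToOutputFileURL(
        NSURL(fileURLWithPath: path), recordingDelegate: recordingDelegate)
}

func stopVideo() {
    movieOutput.stopRecording()  // the delegate callback fires when the file is ready
}
```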