Tag: avcaptureoutput

How to record video using AVCaptureVideoDataOutput

I am using an AVCaptureSession to get the camera output, and I have successfully added the audio and video inputs and outputs.

{
    var captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo) as AVCaptureDevice
    var error: NSError? = nil

    do {
        // remove the previous inputs
        let inputs = cameraSession.inputs as! [AVCaptureDeviceInput]
        for oldInput in inputs {
            cameraSession.removeInput(oldInput)
        }
        cameraSession.beginConfiguration()

        if cameraPosition.isEqualToString("Front") {
            captureDevice = cameraWithPosition(.Front)!
        } else {
            captureDevice = cameraWithPosition(.Back)!
        }

        let deviceInput = try AVCaptureDeviceInput(device: captureDevice)
        if cameraSession.canAddInput(deviceInput) {
            cameraSession.addInput(deviceInput)
        }

        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(unsignedInt: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)]
        dataOutput.alwaysDiscardsLateVideoFrames = true

        if cameraSession.canAddOutput(dataOutput) {
            cameraSession.addOutput(dataOutput) …
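The excerpt stops before anything is recorded, and AVCaptureVideoDataOutput on its own only delivers raw frames. A minimal sketch of the missing piece, in current Swift syntax rather than the Swift 1.x of the question: append the delegate's sample buffers to an AVAssetWriter (video only for brevity; audio would get its own AVAssetWriterInput). The FrameRecorder type and the outputURL parameter are hypothetical names, not part of the asker's project.

import AVFoundation

// Hypothetical recorder: receives frames from an AVCaptureVideoDataOutput
// and appends them to an AVAssetWriter.
final class FrameRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var writer: AVAssetWriter?
    private var writerInput: AVAssetWriterInput?

    // Prepare a writer for an H.264 QuickTime file at the given size.
    func startRecording(to outputURL: URL, width: Int, height: Int) throws {
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: width,
            AVVideoHeightKey: height
        ])
        input.expectsMediaDataInRealTime = true
        writer.add(input)
        self.writer = writer
        self.writerInput = input
    }

    // Install this object with dataOutput.setSampleBufferDelegate(_:queue:).
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let writer = writer, let input = writerInput else { return }
        if writer.status == .unknown {
            // Start the file's timeline at the first frame's timestamp.
            writer.startWriting()
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        }
        if writer.status == .writing && input.isReadyForMoreMediaData {
            input.append(sampleBuffer)
        }
    }

    func stopRecording(completion: @escaping () -> Void) {
        writerInput?.markAsFinished()
        writer?.finishWriting(completionHandler: completion)
    }
}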

video avfoundation ios swift avcaptureoutput

10 votes · 1 answer · 3661 views

iOS: error in __connection_block_invoke_2: Connection interrupted

An Xcode/iOS 8/AVFoundation-related error in the console:

error in __connection_block_invoke_2: Connection interrupted

I simply added an AVCaptureVideoDataOutput to Apple's sample app 'AVCamManualUsingtheManualCaptureAPI'.

What I added:

    // CoreImage wants BGRA pixel format
    NSDictionary *outputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInteger:kCVPixelFormatType_32BGRA]};

    // create and configure video data output
    AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    videoDataOutput.videoSettings = outputSettings;
    videoDataOutput.alwaysDiscardsLateVideoFrames = YES;
    [videoDataOutput setSampleBufferDelegate:self queue:sessionQueue];

The snippet above was inserted into this part of the sample project:

- (void)viewDidLoad
{
    [super viewDidLoad];

    self.view.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;

    self.recordButton.layer.cornerRadius = self.stillButton.layer.cornerRadius = self.cameraButton.layer.cornerRadius = 4;
    self.recordButton.clipsToBounds = self.stillButton.clipsToBounds = self.cameraButton.clipsToBounds = YES;

    // Create the AVCaptureSession
    AVCaptureSession *session = [[AVCaptureSession alloc] init]; …
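Not from the thread, but a small diagnostic that can help with errors like this: AVCaptureSession posts notifications when it is interrupted or hits a runtime error, and logging their userInfo shows whether the capture connection is really dropping and when. A minimal Swift sketch, with a made-up observer class name:

import AVFoundation

// Hypothetical observer that logs session interruptions and runtime errors.
final class SessionInterruptionObserver {
    private var tokens: [NSObjectProtocol] = []

    init(session: AVCaptureSession) {
        let center = NotificationCenter.default
        tokens.append(center.addObserver(forName: .AVCaptureSessionWasInterrupted,
                                         object: session, queue: .main) { note in
            // On iOS 9+ the userInfo includes an interruption reason.
            print("Session interrupted: \(note.userInfo ?? [:])")
        })
        tokens.append(center.addObserver(forName: .AVCaptureSessionRuntimeError,
                                         object: session, queue: .main) { note in
            print("Session runtime error: \(note.userInfo ?? [:])")
        })
    }

    deinit {
        tokens.forEach { NotificationCenter.default.removeObserver($0) }
    }
}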

xcode multithreading avfoundation ios avcaptureoutput

9 votes · 1 answer · 10k views

Type 'OSType' does not conform to protocol 'AnyObject' in Swift 2.0

I just updated to the Xcode 7 beta with Swift 2.0. When I migrated my project to Swift 2.0 I got this error: "Type 'OSType' does not conform to protocol 'AnyObject'". The project ran perfectly under Swift 1.2. Here is the code that produces the error:

videoDataOutput = AVCaptureVideoDataOutput()

// create a queue to run the capture on
var captureQueue = dispatch_queue_create("captureQueue", nil)
videoDataOutput?.setSampleBufferDelegate(self, queue: captureQueue)

// configure the pixel format
videoDataOutput?.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA] // ERROR here!

if captureSession!.canAddOutput(videoDataOutput) {
    captureSession!.addOutput(videoDataOutput)
}

I tried casting kCVPixelFormatType_32BGRA to AnyObject, but it doesn't work. Can someone help me? Sorry for my bad English! Thanks!
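For reference, a fix commonly used for this migration error: OSType is a UInt32, and Swift 2 no longer bridges raw C integer types to AnyObject, so the value has to be wrapped in an object (NSNumber, or an Int cast) before it goes into the settings dictionary. A minimal sketch of the changed line:

// Bridge the OSType through NSNumber so the dictionary value is an object
videoDataOutput?.videoSettings =
    [kCVPixelBufferPixelFormatTypeKey as String:
        NSNumber(unsignedInt: kCVPixelFormatType_32BGRA)]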

swift avcaptureoutput xcode7 swift2

9 votes · 1 answer · 2336 views

How to avoid AVCaptureVideoPreviewLayer flicker when changing the AVCaptureOutput

[GIF: the preview flickers each time an output is added to the running session]

I have a running session and a preview layer displayed in my view.

In my app I need to switch the output between AVCaptureStillImageOutput, AVCaptureMetadataOutput, and AVCaptureVideoDataOutput many times, and the preview should keep displaying smoothly without flicker.

The problem: when I add an output to this session, the preview flickers (see the attached GIF).

The specific lines causing the problem:

self.stillImageOutput = AVCaptureStillImageOutput()
self.stillImageOutput?.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
if session.canAddOutput(self.stillImageOutput) {
    session.addOutput(self.stillImageOutput)
}

My question: how can I avoid AVCaptureVideoPreviewLayer flicker when adding an output to a running session?
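One mitigation worth sketching (an assumption, not a confirmed fix from this thread): wrap the change in beginConfiguration()/commitConfiguration() so the running session applies it atomically instead of reconfiguring its graph for each call:

session.beginConfiguration()

self.stillImageOutput = AVCaptureStillImageOutput()
self.stillImageOutput?.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
if session.canAddOutput(self.stillImageOutput) {
    session.addOutput(self.stillImageOutput)
}

// All changes above take effect at once here.
session.commitConfiguration()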

avfoundation ios avcapturesession avcaptureoutput

5 votes · 1 answer · 549 views

AVFoundation captureOutput didOutputSampleBuffer latency

I am using AVFoundation's captureOutput didOutputSampleBuffer to extract images that are then used for a filter.

self.bufferFrameQueue = DispatchQueue(label: "bufferFrame queue", qos: DispatchQoS.background, attributes: [], autoreleaseFrequency: .inherit)

self.videoDataOutput = AVCaptureVideoDataOutput()
if self.session.canAddOutput(self.videoDataOutput) {
    self.session.addOutput(videoDataOutput)
    self.videoDataOutput!.alwaysDiscardsLateVideoFrames = true
    self.videoDataOutput!.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    self.videoDataOutput!.setSampleBufferDelegate(self, queue: self.bufferFrameQueue)
}

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {

    connection.videoOrientation = .portrait

    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)

    DispatchQueue.main.async {
        self.cameraBufferImage = ciImage
    }
}

The code above simply updates self.cameraBufferImage whenever there is a new output sample buffer.


Then, when a filter button is pressed, I use self.cameraBufferImage like this:

func filterButtonPressed() {

    if var …
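A hedged guess at the latency source in this setup: the delegate queue is created with .background QoS, so the system may schedule frame callbacks well behind user-facing work. A one-line sketch of the change, with everything else unchanged:

// A serial queue at a higher QoS keeps frame delivery prompt;
// alwaysDiscardsLateVideoFrames then drops anything stale.
self.bufferFrameQueue = DispatchQueue(label: "bufferFrame queue",
                                      qos: .userInitiated)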

avfoundation ios cmsamplebuffer swift avcaptureoutput

5 votes · 0 answers · 1750 views

How to convert a CVImageBuffer to a UIImage?

I have a temporary variable tmpPixelBuffer holding the pixel buffer data, which is not nil, and when a metadata object is detected I want to create an image from that buffer so I can crop the metadata image out of it.

The image is always nil; what am I doing wrong?

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

    tmpPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
}


func captureOutput(captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [AnyObject]!, fromConnection connection: AVCaptureConnection!) {

    let image = CIImage(CVPixelBuffer: tmpPixelBuffer)
    let context = CIContext()
    let cgiImage = context.createCGImage(image, fromRect: image.extent())
    let capturedImage = UIImage(CGImage: cgiImage)
    ...
}

I also tried this:

func captureOutput(captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [AnyObject]!, fromConnection connection: AVCaptureConnection!) {

    let image = CIImage(CVPixelBuffer: tmpPixelBuffer)
    let context = CIContext(options: nil)

    let cgiImage = context.createCGImage(image, fromRect: …
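For comparison, a minimal conversion sketch in current Swift syntax. It reuses a single CIContext (creating one per call is expensive) and guards each step, so a nil surfaces where it actually occurs rather than at the end:

import CoreImage
import UIKit

// One shared context; CIContext creation is costly.
let sharedCIContext = CIContext(options: nil)

func makeUIImage(from pixelBuffer: CVPixelBuffer) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = sharedCIContext.createCGImage(ciImage, from: ciImage.extent) else {
        return nil // rendering failed
    }
    return UIImage(cgImage: cgImage)
}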

core-image ios swift cvpixelbuffer avcaptureoutput

4 votes · 3 answers · 4047 views

Passing data from a ViewController to a SwiftUI Representable

I am doing object detection and using UIViewControllerRepresentable to add my view controller. The problem is that I can't pass data from the ViewController to my SwiftUI view, although I can print it.

Can someone help me? Here is my code:

//

import SwiftUI
import AVKit
import UIKit
import Vision
let SVWidth = UIScreen.main.bounds.width

struct MaskDetectionView: View {
    let hasMaskColor = Color.green
    let noMaskColor = Color.red
    let shadowColor = Color.gray
    
    var body: some View {
        VStack(alignment: .center) {
            VStack(alignment: .center) {
                Text("Please place your head inside the bounded box.")
                    .font(.system(size: 15, weight: .regular, design: .default))
                Text("For better result, show your entire face.")
                    .font(.system(size: 15, weight: .regular, design: .default))
            }.padding(.top, 10)
            
            VStack(alignment: .center) { …
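The excerpt cuts off before the representable itself, so here is a hedged sketch of the usual pattern for getting data back out of UIKit: give the controller a callback (a Coordinator acting as its delegate works the same way) and write into a @Binding on the main thread. DetectionViewController and its onDetection property are hypothetical stand-ins for the asker's controller:

import SwiftUI
import UIKit

// Hypothetical UIKit controller that reports each detection via a closure.
final class DetectionViewController: UIViewController {
    var onDetection: ((String) -> Void)?
}

struct DetectionView: UIViewControllerRepresentable {
    @Binding var detectedLabel: String // flows back into SwiftUI

    func makeUIViewController(context: Context) -> DetectionViewController {
        let controller = DetectionViewController()
        controller.onDetection = { label in
            // UI state must be mutated on the main thread.
            DispatchQueue.main.async { self.detectedLabel = label }
        }
        return controller
    }

    func updateUIViewController(_ uiViewController: DetectionViewController,
                                context: Context) {}
}

MaskDetectionView would then hold @State private var detectedLabel = "" and embed DetectionView(detectedLabel: $detectedLabel).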

viewcontroller ios swift avcaptureoutput swiftui

4 votes · 1 answer · 2754 views

iPhone 7+, iOS 11.2: Depth data delivery is not supported in the current configuration

This bug is driving me mad. I'm trying to produce the absolute minimal code to get AVDepthData from an iPhone 7+ using its DualCam.

I have this code:


//
//  RecorderViewController.swift
//  ios-recorder-app


import UIKit
import AVFoundation


class RecorderViewController: UIViewController {

    @IBOutlet weak var previewView: UIView!

    @IBAction func onTapTakePhoto(_ sender: Any) {

        guard let capturePhotoOutput = self.capturePhotoOutput else { return }

        let photoSettings = AVCapturePhotoSettings()

        photoSettings.isDepthDataDeliveryEnabled = true //Error

        capturePhotoOutput.capturePhoto(with: photoSettings, delegate: self)

    }

    var session: AVCaptureSession?
    var videoPreviewLayer: AVCaptureVideoPreviewLayer? …
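For context, depth delivery has strict configuration requirements, and this error usually means one of them is unmet. A sketch of a session that satisfies the documented ones, assuming the dual camera is present: the .builtInDualCamera device, the .photo preset, and depth enabled on the AVCapturePhotoOutput itself before any AVCapturePhotoSettings asks for it:

let session = AVCaptureSession()
session.beginConfiguration()
session.sessionPreset = .photo // depth requires the photo preset

// Depth needs the dual (or TrueDepth) camera, not the default wide-angle one.
guard let device = AVCaptureDevice.default(.builtInDualCamera,
                                           for: .video, position: .back),
      let input = try? AVCaptureDeviceInput(device: device),
      session.canAddInput(input) else {
    fatalError("Dual camera unavailable")
}
session.addInput(input)

let photoOutput = AVCapturePhotoOutput()
guard session.canAddOutput(photoOutput) else { fatalError("Cannot add output") }
session.addOutput(photoOutput)

// Enable depth on the output first; photoSettings.isDepthDataDeliveryEnabled = true
// is only legal once the output reports support and has it enabled.
photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported

session.commitConfiguration()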

ios avcapturesession avcapturedevice swift avcaptureoutput

1 vote · 1 answer · 485 views