My app is landscape only. I present the AVCaptureVideoPreviewLayer like this:
self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[self.previewLayer setBackgroundColor:[[UIColor blackColor] CGColor]];
[self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
NSLog(@"previewView: %@", self.previewView);
CALayer *rootLayer = [self.previewView layer];
[rootLayer setMasksToBounds:YES];
[self.previewLayer setFrame:[rootLayer bounds]];
NSLog(@"previewlayer: %f, %f, %f, %f", self.previewLayer.frame.origin.x, self.previewLayer.frame.origin.y, self.previewLayer.frame.size.width, self.previewLayer.frame.size.height);
[rootLayer addSublayer:self.previewLayer];
[session startRunning];
The frame of self.previewView is (0, 0, 568, 320), which is correct. self.previewLayer logs a frame of (0, 0, 568, 320), which in theory is also correct. However, the camera preview appears as a portrait rectangle in the middle of the landscape screen, and the preview image is oriented 90 degrees wrong. What am I doing wrong? I need the camera preview layer to fill the whole screen in landscape mode, with the image correctly oriented.
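For reference, a common fix (a minimal sketch, assuming the interface really is locked to landscape-right; use AVCaptureVideoOrientationLandscapeLeft for the other direction) is to rotate the preview through its connection after adding the layer:
// Minimal sketch: align the preview connection with a landscape-right UI.
AVCaptureConnection *previewConnection = self.previewLayer.connection;
if (previewConnection.isVideoOrientationSupported) {
    previewConnection.videoOrientation = AVCaptureVideoOrientationLandscapeRight;
}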
I have a simple AVCaptureSession running to access the camera and take photos in my app. How can I implement "pinch to zoom" for the camera using a UIGestureRecognizer?
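One common approach (a sketch, assuming a videoDevice property holding the active AVCaptureDevice, and a UIPinchGestureRecognizer wired to this handler) is to drive videoZoomFactor from the pinch:
// Hypothetical pinch handler; videoZoomFactor is available since iOS 7.
- (void)handlePinch:(UIPinchGestureRecognizer *)pinch
{
    AVCaptureDevice *device = self.videoDevice; // assumed property
    NSError *error = nil;
    if (![device lockForConfiguration:&error]) {
        NSLog(@"Could not lock device: %@", error);
        return;
    }
    CGFloat maxZoom = device.activeFormat.videoMaxZoomFactor;
    CGFloat newZoom = device.videoZoomFactor * pinch.scale;
    device.videoZoomFactor = MAX(1.0, MIN(newZoom, maxZoom));
    [device unlockForConfiguration];
    pinch.scale = 1.0; // reset so each callback applies an incremental scale
}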
What is the best way to find the resolution of the image that will be captured when using the AVCaptureSessionPresetPhoto preset?
I want to know the resolution before capturing the image.
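One possibility (a sketch, assuming session and device are already configured; highResolutionStillImageDimensions exists since iOS 8) is to read the dimensions off the active format once the preset is applied:
// Sketch: query the still image dimensions before capturing anything.
session.sessionPreset = AVCaptureSessionPresetPhoto;
CMVideoDimensions dims = device.activeFormat.highResolutionStillImageDimensions;
NSLog(@"Photo capture resolution: %d x %d", dims.width, dims.height);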
I want to display the streams from the front and back cameras of an iPad 2 in two UIViews placed next to each other. To stream the image from one device I use the following code:
AVCaptureDeviceInput *captureInputFront = [AVCaptureDeviceInput deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] error:nil];
AVCaptureSession *session = [[AVCaptureSession alloc] init];
[session addInput:captureInputFront];
[session setSessionPreset:AVCaptureSessionPresetMedium];
[session startRunning];
AVCaptureVideoPreviewLayer *prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
prevLayer.frame = self.view.frame;
[self.view.layer addSublayer:prevLayer];
This works with either camera on its own. To display the streams in parallel I tried creating a second session, but as soon as the second session is established, the first one freezes.
Then I tried adding two AVCaptureDeviceInputs to one session, but at most one input seems to be supported at a time.
Any useful ideas on how to stream from both cameras?
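For reference, hardware of that generation cannot stream both cameras at once. On iOS 13+ with supported (much newer) devices, AVCaptureMultiCamSession is the sanctioned way to do this; a minimal availability sketch, not applicable to an iPad 2:
if (@available(iOS 13.0, *)) {
    if ([AVCaptureMultiCamSession isMultiCamSupported]) {
        // One session, one AVCaptureDeviceInput per camera,
        // and one preview layer per video port.
        AVCaptureMultiCamSession *multiSession = [[AVCaptureMultiCamSession alloc] init];
    }
}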
I have read about a million threads on how to make a VideoPreviewLayer fill the entire iPhone screen, but nothing works... maybe you can help me, because I am really stuck.
Here is my preview layer init:
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
{
// Choosing bigger preset for bigger screen.
_sessionPreset = AVCaptureSessionPreset1280x720;
}
else
{
_sessionPreset = AVCaptureSessionPresetHigh;
}
[self setupAVCapture];
AVCaptureSession *captureSession = _session;
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
UIView *aView = self.view;
previewLayer.frame = aView.bounds;
previewLayer.connection.videoOrientation = AVCaptureVideoOrientationLandscapeRight;
[aView.layer addSublayer:previewLayer];
And this is my setupAVCapture method:
//-- Setup Capture Session.
_session = [[AVCaptureSession alloc] init];
[_session beginConfiguration];
//-- Set preset session size.
[_session setSessionPreset:_sessionPreset];
//-- Create a video device and input from that device. Add the input to …
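One pattern that usually gets a preview to fill the screen (a sketch, assuming cropping the video is acceptable and the layer is kept in an ivar) is aspect-fill gravity plus re-applying the frame once layout is final:
// Sketch: crop to fill instead of letterboxing...
_previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

// ...and keep the frame in sync with the view:
- (void)viewDidLayoutSubviews
{
    [super viewDidLayoutSubviews];
    _previewLayer.frame = self.view.bounds;
}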
Hi, I want to set up an AV capture session to capture images at a specific resolution (and, if possible, at a specific quality) using the iPhone camera. Here is the AV session setup code:
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
NSError *error = nil;
// Create the session
self.captureSession = [[AVCaptureSession alloc] init];
// Configure the session to produce lower resolution video frames, if your
// processing algorithm can cope. We'll specify medium quality for the
// chosen device.
self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;
// Find a suitable AVCaptureDevice
NSArray *cameras=[AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *device;
if ([UserDefaults camera]==UIImagePickerControllerCameraDeviceFront)
{
device =[cameras objectAtIndex:1];
}
else …
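A rough way to control the captured size (a sketch; the exact dimensions each preset yields vary by device) is to request a size-specific preset and fall back when the device rejects it:
// Sketch: ask for a concrete size instead of AVCaptureSessionPresetMedium.
NSString *preferredPreset = AVCaptureSessionPreset1280x720; // hypothetical target
if ([self.captureSession canSetSessionPreset:preferredPreset]) {
    self.captureSession.sessionPreset = preferredPreset;
} else {
    self.captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
}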
I am working through this video to build a custom camera view: https://www.youtube.com/watch?v=w0O3ZGUS3pk
But because of the changes in iOS 10 and Swift 3, a lot of it is no longer relevant.
Below is the code I ended up with after changing the deprecated functions to the new ones. There are no errors, but I also don't see a preview on the UIView.
import UIKit
import AVFoundation
class ViewController: UIViewController, AVCapturePhotoCaptureDelegate, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
@IBOutlet weak var cameraView: UIView!
var captureSession = AVCaptureSession();
var sessionOutput = AVCapturePhotoOutput();
var sessionOutputSetting = AVCapturePhotoSettings(format: [AVVideoCodecKey:AVVideoCodecJPEG]);
var previewLayer = AVCaptureVideoPreviewLayer();
override func viewWillAppear(_ animated: Bool) {
let deviceDiscoverySession = AVCaptureDeviceDiscoverySession(deviceTypes: [AVCaptureDeviceType.builtInDuoCamera, AVCaptureDeviceType.builtInTelephotoCamera,AVCaptureDeviceType.builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: AVCaptureDevicePosition.unspecified)
for device in (deviceDiscoverySession?.devices)! {
if(device.position == AVCaptureDevicePosition.front){
do{
let input = try AVCaptureDeviceInput(device: device)
if(captureSession.canAddInput(input)){
captureSession.addInput(input);
if(captureSession.canAddOutput(sessionOutput)){
captureSession.addOutput(sessionOutput);
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession);
previewLayer.videoGravity = …
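For reference, the usual reasons a preview stays blank are that the layer never gets a frame, is never attached to the view, or the session never starts. A sketch of those three steps (written in Objective-C to match the rest of this page; the Swift 3 equivalents are one-to-one, using the question's names):
// 1. give the layer a frame, 2. attach it, 3. start the session
previewLayer.frame = cameraView.bounds;
[cameraView.layer addSublayer:previewLayer];
[captureSession startRunning];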
I want to implement a custom camera in my app, so I am creating it with AVCaptureDevice.
Right now I just want the custom camera to show a grayscale output, so I am trying to do it with setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains: and AVCaptureWhiteBalanceGains. I am working from Apple's AVCamManual sample (extending AVCam to use manual capture):
- (void)setWhiteBalanceGains:(AVCaptureWhiteBalanceGains)gains
{
NSError *error = nil;
if ( [videoDevice lockForConfiguration:&error] ) {
AVCaptureWhiteBalanceGains normalizedGains = [self normalizedGains:gains]; // Conversion can yield out-of-bound values, cap to limits
[videoDevice setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains:normalizedGains completionHandler:nil];
[videoDevice unlockForConfiguration];
}
else {
NSLog( @"Could not lock device for configuration: %@", error );
}
}
But for this I have to pass RGB gain values between 1 and 4, so I created this method to check the MAX and MIN values:
- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains) gains
{
AVCaptureWhiteBalanceGains g = gains;
g.redGain = MAX( 1.0, g.redGain );
g.greenGain = MAX( 1.0, g.greenGain ); …
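The truncated method above only enforces the lower bound. Clamping against the device's own upper limit as well (a sketch using maxWhiteBalanceGain, the pattern Apple's AVCamManual sample uses) would look like:
// Sketch: clamp each gain to [1.0, device maximum] before locking white balance.
float maxGain = videoDevice.maxWhiteBalanceGain;
g.redGain   = MIN( maxGain, MAX( 1.0, g.redGain ) );
g.greenGain = MIN( maxGain, MAX( 1.0, g.greenGain ) );
g.blueGain  = MIN( maxGain, MAX( 1.0, g.blueGain ) );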
I am using the AVCaptureDevice.setTorchModeOn(level) method to turn on the flashlight with variable brightness.
On my old iPhone SE it worked well: as I varied level from 0 to 1, I could clearly see four distinct brightness levels.
But on an iPhone 11 Pro the torch is only bright when the level is 1.0! Anywhere below the maximum level it stays dim (compared with the Control Center flashlight).
I tried using the maxAvailableTorchLevel constant, but the result is the same as with 1.0.
I also tried values above 1.0, which causes an exception (as expected).
Has anyone else run into this problem? Is there perhaps a workaround?
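For reference, the standard call pattern looks like the sketch below (the Objective-C equivalent of the Swift call above; it does not by itself explain the iPhone 11 Pro behavior):
// Sketch: variable-brightness torch with the usual lock/unlock dance.
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
if (device.hasTorch && [device lockForConfiguration:&error]) {
    // The level must be in (0.0, 1.0]; AVCaptureMaxAvailableTorchLevel requests
    // the highest level the hardware currently allows (e.g. thermal limits).
    [device setTorchModeOnWithLevel:0.5 error:&error];
    [device unlockForConfiguration];
} else {
    NSLog(@"Torch unavailable: %@", error);
}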
I am using an AVCaptureVideoPreviewLayer to pass live video through, and applying shaders to it in real time with OpenGL. When the front camera is used the video is mirrored, and I want to un-mirror it before the shaders are applied.
Can anyone help?
Update: here is the code that switches to the front camera:
-(void)showFrontCamera{
NSLog(@"inside showFrontCamera");
[captureSession removeInput:videoInput];
// Grab the front-facing camera
AVCaptureDevice *frontFacingCamera = nil;
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices) {
if ([device position] == AVCaptureDevicePositionFront) {
frontFacingCamera = device;
}
}
// Add the video input
NSError *error = nil;
videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:frontFacingCamera error:&error] autorelease];
if ([captureSession canAddInput:videoInput]) {
[captureSession addInput:videoInput];
}
}
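One way to un-mirror the frames before they reach the shaders (a sketch, assuming a videoOutput ivar holding the AVCaptureVideoDataOutput that feeds OpenGL) is to turn mirroring off on that output's connection after switching inputs:
// Sketch: disable front-camera mirroring on the connection feeding the shaders.
AVCaptureConnection *connection = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
if (connection.isVideoMirroringSupported) {
    connection.automaticallyAdjustsVideoMirroring = NO; // iOS 6+
    connection.videoMirrored = NO;
}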