I'm currently using CIDetector to detect rectangles in a UIImage. The suggested approach I'm following is to pass the coordinates into a filter to get back a CIImage to lay over the original UIImage. It looks like this:
func performRectangleDetection(image: UIKit.CIImage) -> UIKit.CIImage? {
    var resultImage: UIKit.CIImage?
    let detector: CIDetector = CIDetector(ofType: CIDetectorTypeRectangle, context: nil,
                                          options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    // Get the detections
    let features = detector.featuresInImage(image)
    for feature in features as! [CIRectangleFeature] {
        resultImage = self.drawHighlightOverlayForPoints(image, topLeft: feature.topLeft,
                                                         topRight: feature.topRight,
                                                         bottomLeft: feature.bottomLeft,
                                                         bottomRight: feature.bottomRight)
    }
    return resultImage
}
func drawHighlightOverlayForPoints(image: UIKit.CIImage, topLeft: CGPoint, topRight: CGPoint,
                                   bottomLeft: CGPoint, bottomRight: CGPoint) -> UIKit.CIImage {
    var overlay = UIKit.CIImage(color: CIColor(red: 1.0, green: 0.55, blue: 0.0, alpha: 0.45))
    overlay = overlay.imageByCroppingToRect(image.extent) …

I'm developing an app that detects ID cards, and I'm trying to use iOS's built-in CIDetector to detect rectangle-shaped objects in the live preview. I'm using the solution explained in CoreImage Detectors.
I get the following result (image).

My question: is there a way to extract and crop the detected rectangle?
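One approach, sketched here in current Swift syntax (this is an assumed helper, not code from the question): crop the source CIImage to the detected feature's bounds. Note this is an axis-aligned crop; if the card is photographed at an angle and needs to be deskewed, the CIPerspectiveCorrection filter with the feature's four corners is the usual next step.

```swift
import UIKit

// A minimal sketch: detect the first rectangle and crop the image to it.
func cropDetectedRectangle(in image: CIImage) -> UIImage? {
    let detector = CIDetector(ofType: CIDetectorTypeRectangle, context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    guard let feature = detector?.features(in: image).first as? CIRectangleFeature else {
        return nil
    }
    // Axis-aligned crop to the feature's bounding box.
    let cropped = image.cropped(to: feature.bounds)
    return UIImage(ciImage: cropped)
}
```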
I'm trying to detect faces in my iOS camera app, but it isn't working properly, while it works fine in Camera.app.

Why is that?

My code is below. Do you see anything wrong?

First, I create a video output as follows:
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.videoSettings =
    [kCVPixelBufferPixelFormatTypeKey as AnyHashable: Int(kCMPixelFormat_32BGRA)]
session.addOutput(videoOutput)
videoOutput.setSampleBufferDelegate(faceDetector, queue: faceDetectionQueue)
Here's the delegate:
class FaceDetector: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        let features = FaceDetector.ciDetector.features(
            in: CIImage(cvPixelBuffer: imageBuffer))
        let faces = features.map { $0.bounds }
        let imageSize = CVImageBufferGetDisplaySize(imageBuffer)
        let faceBounds = faces.map { (face: CIFeature) -> CGRect in
            var …
I've implemented a CIDetector in my app that detects a rectangle on an image, but now how do I use the returned CGPoints to crop the image so that I can display it?
For the perspective, I tried applying the CIPerspectiveCorrection filter but couldn't get it to work. I've searched around and found some leads, but no solution in Swift.

How can I use the data provided by CIDetector (the detected rectangle) to fix the perspective and crop the image?

For those who may not be familiar with what CIDetectorTypeRectangle returns: it returns four CGPoints: bottomLeft, bottomRight, topLeft, and topRight.
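For reference, a sketch of wiring those four points into CIPerspectiveCorrection (assumptions: modern Swift syntax; `image` is the full source CIImage and `f` the CIRectangleFeature returned by the detector; this is not the asker's code). The filter's key detail is that each corner must be passed as a CIVector, not a CGPoint:

```swift
import CoreImage

// Deskew and crop the detected rectangle in one step.
func perspectiveCorrected(_ image: CIImage, with f: CIRectangleFeature) -> CIImage? {
    guard let filter = CIFilter(name: "CIPerspectiveCorrection") else { return nil }
    filter.setValue(image, forKey: kCIInputImageKey)
    // CIPerspectiveCorrection takes the four corners as CIVectors.
    filter.setValue(CIVector(cgPoint: f.topLeft), forKey: "inputTopLeft")
    filter.setValue(CIVector(cgPoint: f.topRight), forKey: "inputTopRight")
    filter.setValue(CIVector(cgPoint: f.bottomLeft), forKey: "inputBottomLeft")
    filter.setValue(CIVector(cgPoint: f.bottomRight), forKey: "inputBottomRight")
    return filter.outputImage
}
```

The output image's extent is exactly the corrected rectangle, so no further cropping is needed.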
Memory isn't released after face detection finishes. Is there a way to free it? (Memory stays at 300 MB after the process completes.)
autoreleasepool {
    manager.requestImageData(for: asset, options: option) {
        (data, responseString, imageOrient, info) in
        if data != nil {
            //let faces = (faceDetector?.features(in: CIImage(data: data!)!))
            guard let faces = self.faceDetector?.features(in: CIImage(data: data!)!) else {
                return
            }
            completionHandler(faces.count)
        } else {
            print(info)
        }
    }
}
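Two observations, offered as assumptions rather than a guaranteed fix: the `autoreleasepool` above wraps the asynchronous request, but the completion handler runs later, after the pool has already drained, so the CIImage temporaries it creates are never covered by it. A sketch of the usual mitigations: reuse a single detector, and drain a pool around each image's synchronous detection work (`detectFaceCounts` and `imageDatas` are hypothetical names):

```swift
import CoreImage

// Create the detector once; CIDetector instances are expensive and reusable.
let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyLow])

func detectFaceCounts(in imageDatas: [Data]) -> Int {
    var total = 0
    for data in imageDatas {
        // Drain per iteration so CIImage temporaries are released
        // inside the loop instead of accumulating until it ends.
        autoreleasepool {
            guard let ciImage = CIImage(data: data) else { return }
            total += faceDetector?.features(in: ciImage).count ?? 0
        }
    }
    return total
}
```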
I want to scan QR codes and barcodes from both the live camera and from images. I previously used the ZBar library for scanning. It doesn't scan certain types of QR codes and barcodes. Also, Apple's AVFoundation framework seems faster and more accurate when scanning codes from the live camera.

So I don't want to use ZBar. To scan codes in images picked from the gallery, I use CIDetector. But CIDetector doesn't seem able to scan barcodes in images. I've already searched Stack Overflow for CIDetector with other barcode types and for scanning barcodes from a UIImage natively (i.e., without ZBar),

but I still haven't found a way to scan barcodes in gallery-picked images using CIDetector. Is it possible to scan barcodes from UIImages with CIDetector?

Please don't suggest other third-party libraries. I want to do this with Apple's default frameworks.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary<NSString *, id> *)info
{
    [picker dismissViewControllerAnimated:YES completion:nil];
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    CIImage *img = [[CIImage alloc] initWithImage:image];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeQRCode context:nil
                                              options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
    if (detector)
    {
        NSArray *featuresR = [detector featuresInImage:img];
        NSString *decodeR;
        for (CIQRCodeFeature *featureR in featuresR)
        {
            NSLog(@"decode %@ ", featureR.messageString);
            decodeR = featureR.messageString;
            [self showAlertWithTitle:@"Success" withMessage:decodeR];
            return;
        }
        [self showAlertWithTitle:@"Error" withMessage:@"Invalid Image"]; …
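To my knowledge, CIDetector's only code-reading type is CIDetectorTypeQRCode, so 1D barcodes are out of its reach. Staying within Apple frameworks, one option on iOS 11+ is Vision's VNDetectBarcodesRequest, which covers EAN, Code 128, PDF417, QR, and more. A sketch (an assumption about the approach, not the asker's code, shown in Swift):

```swift
import UIKit
import Vision

// Decode all barcode/QR payloads found in a gallery-picked UIImage.
func decodeBarcodes(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else { return completion([]) }
    let request = VNDetectBarcodesRequest { request, _ in
        let payloads = (request.results as? [VNBarcodeObservation])?
            .compactMap { $0.payloadStringValue } ?? []
        completion(payloads)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```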
When using CIDetectorTypeRectangle, I can only ever detect (at most) one rectangle in my camera image, but I want to detect multiple rectangles. Can it detect several rectangles, the way it can detect several faces?
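As I understand it, the rectangle detector returns only its single best candidate by default. On iOS 10 and later there is a CIDetectorMaxFeatureCount option that raises that limit; a hedged sketch (`cameraImage` is an assumed CIImage from the capture pipeline):

```swift
import CoreImage

// Ask the rectangle detector for up to 10 rectangles instead of
// the single best match it returns by default (iOS 10+).
let detector = CIDetector(ofType: CIDetectorTypeRectangle, context: nil,
                          options: [CIDetectorAccuracy: CIDetectorAccuracyHigh,
                                    CIDetectorMaxFeatureCount: 10])
let rectangles = detector?.features(in: cameraImage) as? [CIRectangleFeature] ?? []
```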
So I'm trying to use a CIDetector text detector in Swift. When I point my phone at a piece of text, it doesn't detect it. However, if I turn the phone on its side, it works fine and detects the text. How can I change it so that text is detected in the correct camera orientation? Here's my code:

The function that prepares the text detector:
func prepareTextDetector() -> CIDetector {
    let options: [String: AnyObject] = [CIDetectorAccuracy: CIDetectorAccuracyHigh,
                                        CIDetectorAspectRatio: 1.0]
    return CIDetector(ofType: CIDetectorTypeText, context: nil, options: options)
}
The text-detection function:
func performTextDetection(image: CIImage) -> CIImage? {
    if let detector = detector {
        // Get the detections
        let features = detector.featuresInImage(image)
        for feature in features as! [CITextFeature] {
            resultImage = drawHighlightOverlayForPoints(image, topLeft: feature.topLeft,
                                                        topRight: feature.topRight,
                                                        bottomLeft: feature.bottomLeft,
                                                        bottomRight: feature.bottomRight)
            imagex = cropBusinessCardForPoints(resultImage!, topLeft: feature.topLeft,
                                               topRight: feature.topRight,
                                               bottomLeft: feature.bottomLeft,
                                               bottomRight: feature.bottomRight)
        }
    }
    return resultImage …
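The works-only-sideways symptom usually means the detector is searching in the camera's native landscape orientation. One remedy (a sketch under that assumption, not the asker's code) is to pass the EXIF orientation to the detector through the CIDetectorImageOrientation option of `features(in:options:)`:

```swift
import CoreImage

func detectText(in image: CIImage, using detector: CIDetector) -> [CITextFeature] {
    // 6 is the EXIF value for a portrait capture on iPhone; the detector
    // then rotates its search to match instead of assuming landscape.
    let features = detector.features(in: image,
                                     options: [CIDetectorImageOrientation: 6])
    return features as? [CITextFeature] ?? []
}
```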
I want to add a metric between two sets of points on a face, to use for object detection in digital images; we'll restrict it to two dimensions, as shown below.

I can identify facial features, as shown in the image below, using the following method:
- (void)markFaces:(UIImageView *)facePicture
{
    // draw a CI image with the previously loaded face detection picture
    CIImage *image = [CIImage imageWithCGImage:facePicture.image.CGImage];
    // create a face detector - since speed is not an issue we'll use a high accuracy
    // detector
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                  forKey:CIDetectorAccuracy]];
    // create an array containing all the detected faces from the detector
    NSArray *features = [detector featuresInImage:image];
    // we'll iterate through every detected face. CIFaceFeature provides us
    // with …
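A 2D metric between detected feature points can be as simple as the Euclidean distance; a minimal sketch in Swift (assumed helper names; CIFaceFeature supplies positions such as leftEyePosition and rightEyePosition):

```swift
import UIKit

// Euclidean distance between two feature points in image coordinates.
func distance(_ a: CGPoint, _ b: CGPoint) -> CGFloat {
    return hypot(b.x - a.x, b.y - a.y)
}

// Example: inter-eye distance for a detected face, a common
// normalization baseline for other facial measurements.
func interEyeDistance(of face: CIFaceFeature) -> CGFloat? {
    guard face.hasLeftEyePosition && face.hasRightEyePosition else { return nil }
    return distance(face.leftEyePosition, face.rightEyePosition)
}
```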
Run Code Online (Sandbox Code Playgroud) cidetector ×9
ios ×7
swift ×4
camera ×2
objective-c ×2
avcapture ×1
avfoundation ×1
core-image ×1
detection ×1
image ×1
memory ×1
qr-code ×1
swift3 ×1
uibezierpath ×1
uiimage ×1