The x-derivative Sobel kernel looks like this:
-1 0 +1
-2 0 +2
-1 0 +1
Assume my image contains two samples that look like this (0 = black, 1 = white):
0 0 1 1 0 0
0 0 1 1 0 0
0 0 1 1 0 0
If I run the convolution over them, I end up with 4 and -4 respectively.
So my natural reaction is to normalize the result by 8 and translate it by 0.5. Is that correct? (I'm asking because I can't find any mention of normalization on Wikipedia etc.)
Edit: I'm using the Sobel filter to create a 2D structure tensor (from the derivatives dx and dy):
                   A B
Structure Tensor = C D

with A = dx^2
     B = dx*dy
     C = dx*dy
     D = dy^2
In the end I want to store the result in [0,1], but right now I'm simply wondering whether I have to normalize the Sobel result in general (not just for storing it), i.e.:
A = dx*dx
//OR
A = (dx/8.0)*(dx/8.0)
//OR
A = (dx/8.0+0.5)*(dx/8.0+0.5)
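To sanity-check the 4/-4 figures and the divide-by-8 remapping, here is a small NumPy sketch (my own illustration; `correlate2d_valid` is a hypothetical helper written out inline, not a library function):

```python
import numpy as np

# x-derivative Sobel kernel from above
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# the sample image (0 = black, 1 = white)
img = np.array([[0, 0, 1, 1, 0, 0],
                [0, 0, 1, 1, 0, 0],
                [0, 0, 1, 1, 0, 0]], dtype=float)

def correlate2d_valid(a, k):
    """Plain cross-correlation over the 'valid' region, i.e. the
    kernel applied as a filter without flipping."""
    kh, kw = k.shape
    h, w = a.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(a[i:i + kh, j:j + kw] * k)
    return out

resp = correlate2d_valid(img, sobel_x)
print(resp[0])            # [ 4.  4. -4. -4.]  (raw range is [-4, 4] for [0,1] input)
norm = resp / 8.0 + 0.5   # affine remap of [-4, 4] onto [0, 1]
print(norm[0])            # [1. 1. 0. 0.]
```

One caveat: the /8 + 0.5 remap is fine for storing or displaying the response, but for the structure tensor it is the signed derivative that matters. Applying the +0.5 offset before forming dx*dx or dx*dy changes the products, whereas a pure scale like dx/8 only scales the whole tensor uniformly.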
I'm using CIFilter with CIHueBlendMode to blend an image (foreground) with a red layer (background).
I do the same thing in Photoshop CS6 with its Hue blend mode (duplicating the foreground image and filling the background layer with the same red).
Unfortunately, the results are very different:
(The same goes for comparing CIColorBlendMode, CIDifferenceBlendMode and CISaturationBlendMode with their Photoshop counterparts.)
My question is: is it me? Am I doing something wrong here? Or are Core Image blend modes and Photoshop blend modes simply different?
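For reference, Photoshop's Hue mode closely follows the non-separable hue blend published in the W3C Compositing and Blending spec: keep the hue of the source, but the saturation and luminosity of the backdrop. A minimal Python sketch of that published formula (my own illustration, not Core Image's implementation) looks like this:

```python
# Non-separable "hue" blend per the W3C Compositing and Blending spec.
# All channels are floats in [0, 1].

def lum(c):
    r, g, b = c
    return 0.3 * r + 0.59 * g + 0.11 * b

def clip_color(c):
    # Clamp out-of-range components while preserving luminosity.
    l, n, x = lum(c), min(c), max(c)
    if n < 0:
        c = tuple(l + (ci - l) * l / (l - n) for ci in c)
    if x > 1:
        c = tuple(l + (ci - l) * (1 - l) / (x - l) for ci in c)
    return c

def set_lum(c, l):
    d = l - lum(c)
    return clip_color(tuple(ci + d for ci in c))

def sat(c):
    return max(c) - min(c)

def set_sat(c, s):
    cmin, cmax = min(c), max(c)
    if cmax > cmin:
        return tuple((ci - cmin) * s / (cmax - cmin) for ci in c)
    return (0.0, 0.0, 0.0)

def blend_hue(source, backdrop):
    return set_lum(set_sat(source, sat(backdrop)), lum(backdrop))

# Example: pure blue source over a pure red backdrop.
result = blend_hue((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
print(result)  # luminosity stays that of the red backdrop (0.3)
```

If Core Image produces different pixels for the same inputs, one common culprit is the working color space: Core Image blends in a light-linear working space by default, whereas Photoshop blends the gamma-encoded channel values, so even an identical formula yields different results.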
// Blending the input image with a red image
CIFilter* composite = [CIFilter filterWithName:@"CIHueBlendMode"];
[composite setValue:inputImage forKey:@"inputImage"];
[composite setValue:redImage forKey:@"inputBackgroundImage"];
CIImage *outputImage = [composite outputImage];
CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
imageView.image = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);
// This is how I create the red image:
- (CIImage *)imageWithColor:(UIColor *)color inRect:(CGRect)rect
{
UIGraphicsBeginImageContext(rect.size);
CGContextRef _context = UIGraphicsGetCurrentContext();
…

Is there a rule of thumb, or a mathematical equation, that tells me how wide my (one-dimensional, discrete) Gaussian kernel should be for a given sigma?
Say I pick a sigma of 1.87: should my kernel be 5 values/steps/pixels wide, or 7, or 25, in order to perform properly normalized image smoothing? Thank you.
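A common rule of thumb is to truncate the kernel at a few standard deviations on each side: radius = ceil(truncate * sigma), width = 2 * radius + 1. At 3 sigma you already capture about 99.7% of the Gaussian's mass (SciPy's `gaussian_filter1d` uses the same idea via its `truncate` parameter, which defaults to 4). A minimal sketch under the 3-sigma assumption:

```python
import math
import numpy as np

def gaussian_kernel_1d(sigma, truncate=3.0):
    # Cut the kernel off at `truncate` standard deviations on each side;
    # 3 sigma covers ~99.7% of the distribution's mass.
    radius = int(math.ceil(truncate * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()  # renormalize so smoothing does not shift brightness

k = gaussian_kernel_1d(1.87)
print(len(k))  # 13  (radius = ceil(3 * 1.87) = 6 on each side)
```

So for sigma = 1.87 the answer under this convention is 13 taps, not 5 or 7; a 25-tap kernel would not be wrong, just wasteful. The explicit renormalization at the end is what keeps the smoothing brightness-preserving regardless of where you truncate.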
While migrating Objective-C code to ARC, I'm having trouble implementing the NSFastEnumeration protocol. Can someone tell me how to get rid of the following warning (see the code snippet)? Thanks in advance.
// I changed it due to ARC, was before
// - (NSUInteger) countByEnumeratingWithState: (NSFastEnumerationState*) state objects: (id*) stackbuf count: (NSUInteger) len
- (NSUInteger) countByEnumeratingWithState: (NSFastEnumerationState*) state objects: (__unsafe_unretained id *) stackbuf count: (NSUInteger) len
{
...
*stackbuf = [[ZBarSymbol alloc] initWithSymbol: sym]; //Warning: Assigning retained object to unsafe_unretained variable; object will be released after assignment
...
}
- (id) initWithSymbol: (const zbar_symbol_t*) sym
{
if(self = [super init]) {
...
}
return(self);
}
I'm trying to read some frames from a video. The video is 640 x 480, but the images I get back are only 480 x 360. Is there a way to get the images at their original size?
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
NSString * mediaType = [info objectForKey:UIImagePickerControllerMediaType];
if ([mediaType isEqualToString:(NSString *)kUTTypeMovie])
[self readMovieFrames:[info objectForKey:UIImagePickerControllerMediaURL]];
[self dismissViewControllerAnimated:YES completion:nil];
}
- (void)readMovieFrames:(NSURL *)url
{
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithURL:url];
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:playerItem.asset];
imageGenerator.requestedTimeToleranceAfter = kCMTimeZero;
imageGenerator.requestedTimeToleranceBefore = kCMTimeZero;
imageGenerator.appliesPreferredTrackTransform = YES;
//…
CGImageRef imageRef = [imageGenerator copyCGImageAtTime:requestTime actualTime:&actualTime error:&error];
UIImage *img = [UIImage imageWithCGImage:imageRef];
UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil);
CGImageRelease(imageRef);
}