CIGaussianBlur image size

Hey, I want to blur my view, and I'm using this code:

    //Get a UIImage from the UIView
    NSLog(@"blur capture");
    UIGraphicsBeginImageContext(BlurContrainerView.frame.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    //Blur the UIImage
    CIImage *imageToBlur = [CIImage imageWithCGImage:viewImage.CGImage];
    CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [gaussianBlurFilter setValue:imageToBlur forKey:@"inputImage"];
    [gaussianBlurFilter setValue:[NSNumber numberWithFloat:5] forKey:@"inputRadius"]; //change number to increase/decrease blur
    CIImage *resultImage = [gaussianBlurFilter valueForKey:@"outputImage"];

    //create UIImage from filtered image
    blurredImage = [[UIImage alloc] initWithCIImage:resultImage];

    //Place the UIImage in a UIImageView
    UIImageView *newView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    newView.image = blurredImage;
    NSLog(@"%f,%f", newView.frame.size.width, newView.frame.size.height);

    //insert blur UIImageView below transparent view inside the blur image container
    [BlurContrainerView insertSubview:newView belowSubview:transparentView];

It blurs the view, but not all of it. How can I blur the whole view?

Screenshot: postimg.org/image/9bee2e4zx/

The problem isn't that it's failing to blur the whole image, but rather that the blur extends past the bounds of the original image, which makes the resulting image larger, so it doesn't line up correctly.
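You can see this for yourself by logging the extent of the CIImage before and after the filter. A quick check, reusing the imageToBlur and resultImage variables from your code, might look like this (the exact amount of growth depends on the inputRadius):

    // log how the blur enlarges the extent (variable names from the question's code)
    NSLog(@"input extent:  %@", NSStringFromCGRect([imageToBlur extent]));
    NSLog(@"output extent: %@", NSStringFromCGRect([resultImage extent]));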

To keep the image the same size, after the line:

    CIImage *resultImage = [gaussianBlurFilter valueForKey:@"outputImage"];

you can grab the CGRect for a rectangle the size of the original image, centered in resultImage:

    // note, adjust rect because blur changed size of image
    CGRect rect = [resultImage extent];
    rect.origin.x += (rect.size.width  - viewImage.size.width ) / 2;
    rect.origin.y += (rect.size.height - viewImage.size.height) / 2;
    rect.size = viewImage.size;

and then use a CIContext to grab that portion of the image:

    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgimg = [context createCGImage:resultImage fromRect:rect];
    UIImage *blurredImage = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);

Alternatively, for iOS 7, if you go to the iOS UIImageEffects sample code and download iOS_UIImageEffects.zip, you can grab the UIImage+ImageEffects category. That provides a few new methods:

    - (UIImage *)applyLightEffect;
    - (UIImage *)applyExtraLightEffect;
    - (UIImage *)applyDarkEffect;
    - (UIImage *)applyTintEffectWithColor:(UIColor *)tintColor;
    - (UIImage *)applyBlurWithRadius:(CGFloat)blurRadius
                           tintColor:(UIColor *)tintColor
               saturationDeltaFactor:(CGFloat)saturationDeltaFactor
                           maskImage:(UIImage *)maskImage;

So, to blur an image and lighten it (giving that "frosted glass" effect), you can just do:

    UIImage *newImage = [image applyLightEffect];
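If you want more control over the effect, a rough sketch using the category's applyBlurWithRadius:tintColor:saturationDeltaFactor:maskImage: method might look like this; the snapshot step and the radius, tint, and saturation values here are only illustrative, and drawViewHierarchyInRect:afterScreenUpdates: requires iOS 7:

    // a rough sketch, assuming the UIImage+ImageEffects category has been added to the project;
    // the radius, tint, and saturation values are arbitrary examples
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
    [self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:NO];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIImage *frosted = [snapshot applyBlurWithRadius:10
                                           tintColor:[UIColor colorWithWhite:1.0 alpha:0.3]
                               saturationDeltaFactor:1.8
                                           maskImage:nil];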

Interestingly, Apple's code does not use CIFilter, but rather calls vImageBoxConvolve_ARGB8888 from the vImage high-performance image processing framework. This technique is illustrated in the WWDC 2013 video Implementing Engaging UI on iOS.
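For the curious, a stripped-down, single-pass sketch of that vImage approach might look something like the code below. This is only an illustration, not Apple's implementation: the real category runs three box-convolution passes to approximate a Gaussian and also handles tinting, saturation, and screen scale, and the BoxBlurredImage helper name here is made up.

    #import <UIKit/UIKit.h>
    #import <Accelerate/Accelerate.h>

    // Hypothetical helper: a single box-convolution pass over a 32-bit bitmap.
    static UIImage *BoxBlurredImage(UIImage *image, uint32_t radius) {
        // vImageBoxConvolve_ARGB8888 requires an odd kernel size
        uint32_t kernel = radius | 1;

        CGImageRef cgImage = image.CGImage;
        size_t width  = CGImageGetWidth(cgImage);
        size_t height = CGImageGetHeight(cgImage);

        // render the source into bitmap contexts with a known 32-bit pixel format
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGBitmapInfo bitmapInfo = (CGBitmapInfo)kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little;
        CGContextRef inContext  = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace, bitmapInfo);
        CGContextRef outContext = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace, bitmapInfo);
        CGColorSpaceRelease(colorSpace);
        CGContextDrawImage(inContext, CGRectMake(0, 0, width, height), cgImage);

        vImage_Buffer src = {
            .data     = CGBitmapContextGetData(inContext),
            .height   = CGBitmapContextGetHeight(inContext),
            .width    = CGBitmapContextGetWidth(inContext),
            .rowBytes = CGBitmapContextGetRowBytes(inContext)
        };
        vImage_Buffer dst = {
            .data     = CGBitmapContextGetData(outContext),
            .height   = CGBitmapContextGetHeight(outContext),
            .width    = CGBitmapContextGetWidth(outContext),
            .rowBytes = CGBitmapContextGetRowBytes(outContext)
        };

        // kvImageEdgeExtend keeps the output the same size as the input,
        // which is exactly the sizing behavior the CIGaussianBlur route lacks
        vImageBoxConvolve_ARGB8888(&src, &dst, NULL, 0, 0, kernel, kernel, NULL, kvImageEdgeExtend);

        CGImageRef blurredCGImage = CGBitmapContextCreateImage(outContext);
        UIImage *result = [UIImage imageWithCGImage:blurredCGImage
                                              scale:image.scale
                                        orientation:image.imageOrientation];
        CGImageRelease(blurredCGImage);
        CGContextRelease(inContext);
        CGContextRelease(outContext);
        return result;
    }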

A faster solution is to avoid CGImageRef altogether and perform all the transformations at the lazy CIImage level.

So, instead of this, which doesn't fit:

    // create UIImage from filtered image (but size is wrong)
    blurredImage = [[UIImage alloc] initWithCIImage:resultImage];

a nice working solution is to write:

Objective-C

    // cropping rect because blur changed size of image
    CIImage *croppedImage = [resultImage imageByCroppingToRect:imageToBlur.extent];

    // create UIImage from filtered cropped image
    blurredImage = [[UIImage alloc] initWithCIImage:croppedImage];

Swift 3

    // cropping rect because blur changed size of image
    let croppedImage = resultImage.cropping(to: imageToBlur.extent)

    // create UIImage from filtered cropped image
    let blurredImage = UIImage(ciImage: croppedImage)

Swift 4

    // cropping rect because blur changed size of image
    let croppedImage = resultImage.cropped(to: imageToBlur.extent)

    // create UIImage from filtered cropped image
    let blurredImage = UIImage(ciImage: croppedImage)

It looks like the blur filter is giving you back an image that's bigger than the one you started with, which makes sense, since pixels near the edges get blurred outward past them. The easiest solution is probably to have newView use a contentMode of UIViewContentModeCenter so that it doesn't try to scale the blurred image down; you could also crop blurredImage by drawing it into a new context of the appropriate size, but you don't really need to.
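As a minimal sketch of that first suggestion, using the newView, blurredImage, BlurContrainerView, and transparentView names from the question:

    // keep the oversized blurred image centered instead of scaling it to fit
    UIImageView *newView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    newView.contentMode = UIViewContentModeCenter;
    newView.clipsToBounds = YES; // optional: clip the overflow at the edges
    newView.image = blurredImage;
    [BlurContrainerView insertSubview:newView belowSubview:transparentView];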