Is it possible to programmatically invert the colors of an image?

I want to take an image and invert its colors on iOS.

To expand on quixoto's answer, and because I have relevant source code from a project of my own: if you need to drop down to on-CPU pixel manipulation, then the following, to which I've added exposition, should do the trick:

    @implementation UIImage (NegativeImage)

    - (UIImage *)negativeImage
    {
        // get width and height as integers, since we'll be using them as
        // array subscripts, etc, and this'll save a whole lot of casting
        CGSize size = self.size;
        int width = size.width;
        int height = size.height;

        // Create a suitable RGB+alpha bitmap context in RGBA byte order
        CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
        unsigned char *memoryPool = (unsigned char *)calloc(width*height*4, 1);
        CGContextRef context = CGBitmapContextCreate(memoryPool, width, height, 8,
                                                     width * 4, colourSpace,
                                                     kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colourSpace);

        // draw the current image to the newly created context
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);

        // run through every pixel, a scan line at a time...
        for(int y = 0; y < height; y++)
        {
            // get a pointer to the start of this scan line
            unsigned char *linePointer = &memoryPool[y * width * 4];

            // step through the pixels one by one...
            for(int x = 0; x < width; x++)
            {
                // get RGB values. We're dealing with premultiplied alpha
                // here, so we need to divide by the alpha channel (if it
                // isn't zero, of course) to get uninflected RGB. We
                // multiply by 255 to keep precision while still using
                // integers
                int r, g, b;
                if(linePointer[3])
                {
                    r = linePointer[0] * 255 / linePointer[3];
                    g = linePointer[1] * 255 / linePointer[3];
                    b = linePointer[2] * 255 / linePointer[3];
                }
                else
                    r = g = b = 0;

                // perform the colour inversion
                r = 255 - r;
                g = 255 - g;
                b = 255 - b;

                // multiply by alpha again, divide by 255 to undo the
                // scaling before, store the new values and advance
                // the pointer we're reading pixel data from
                linePointer[0] = r * linePointer[3] / 255;
                linePointer[1] = g * linePointer[3] / 255;
                linePointer[2] = b * linePointer[3] / 255;
                linePointer += 4;
            }
        }

        // get a CG image from the context, wrap that into a
        // UIImage
        CGImageRef cgImage = CGBitmapContextCreateImage(context);
        UIImage *returnImage = [UIImage imageWithCGImage:cgImage];

        // clean up
        CGImageRelease(cgImage);
        CGContextRelease(context);
        free(memoryPool);

        // and return
        return returnImage;
    }

    @end

So that adds a category method to UIImage which:

  1. creates a clear CoreGraphics bitmap context whose memory it can access directly
  2. draws the UIImage into it
  3. runs through every pixel, converting from premultiplied alpha to uninflected RGB, inverting each channel separately, multiplying by alpha again, and storing back
  4. gets an image from the context and wraps it in a UIImage
  5. cleans up after itself and returns the UIImage
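With the category in place, usage is a one-liner; the asset name below is just a placeholder:

    UIImage *original = [UIImage imageNamed:@"photo"]; // placeholder asset name
    UIImage *inverted = [original negativeImage];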

Using CoreImage:

    #import <CoreImage/CoreImage.h>

    @implementation UIImage (ColorInverse)

    + (UIImage *)inverseColor:(UIImage *)image
    {
        CIImage *coreImage = [CIImage imageWithCGImage:image.CGImage];
        CIFilter *filter = [CIFilter filterWithName:@"CIColorInvert"];
        [filter setValue:coreImage forKey:kCIInputImageKey];
        CIImage *result = [filter valueForKey:kCIOutputImageKey];
        return [UIImage imageWithCIImage:result];
    }

    @end

Of course it's possible. One way is to use the "difference" blend mode (kCGBlendModeDifference). See this question (among others) for an outline of the code that sets up the image processing. Use your image as the bottom (base) image, and then draw a pure white bitmap on top of it.
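A minimal sketch of that blend-mode approach, written as a hypothetical UIImage category method (the name is mine); note that it produces an opaque result, so any transparency in the source is lost:

    - (UIImage *)imageInvertedWithBlendMode
    {
        UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
        CGContextRef context = UIGraphicsGetCurrentContext();

        // draw the original image as the base layer
        [self drawInRect:CGRectMake(0, 0, self.size.width, self.size.height)];

        // fill with pure white using the difference blend mode;
        // |white - value| per channel is exactly the colour inversion
        CGContextSetBlendMode(context, kCGBlendModeDifference);
        CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
        CGContextFillRect(context, CGRectMake(0, 0, self.size.width, self.size.height));

        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;
    }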

You can also perform the per-pixel operation manually by getting the CGImageRef, drawing it into a bitmap context, and then looping over the pixels in the bitmap context, much like the category at the top of this page does.

Tommy's answer is the answer, but I'd like to point out that for bigger images this could be a really intensive and time-consuming task. There are two frameworks that can help you with image manipulation:

  1. CoreImage
  2. Accelerate

It's also worth mentioning Brad Larson's amazing GPUImage framework. GPUImage runs its routines on the GPU using custom fragment shaders in an OpenGL ES 2.0 environment, with a remarkable speed improvement. With CoreImage you can choose between CPU and GPU if a negative filter is available; with Accelerate, all routines run on the CPU, but use vectorised math for the image processing.
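To make the Accelerate option concrete, here's a minimal sketch of an inversion built on vImage's per-channel lookup tables. It's an assumption-laden outline rather than production code: it requests an 8-bit, non-premultiplied ARGB buffer, and the category method name is mine:

    #import <Accelerate/Accelerate.h>

    - (UIImage *)negativeImageWithAccelerate
    {
        // Describe the buffer we want: 8 bits per channel, non-premultiplied
        // ARGB, which is the layout vImage's ARGB8888 routines expect.
        vImage_CGImageFormat format = {
            .bitsPerComponent = 8,
            .bitsPerPixel = 32,
            .colorSpace = NULL, // NULL defaults to device RGB
            .bitmapInfo = (CGBitmapInfo)kCGImageAlphaFirst,
            .version = 0,
            .decode = NULL,
            .renderingIntent = kCGRenderingIntentDefault,
        };

        vImage_Buffer buffer;
        vImage_Error error = vImageBuffer_InitWithCGImage(&buffer, &format, NULL,
                                                          self.CGImage, kvImageNoFlags);
        if (error != kvImageNoError) return nil;

        // Build the lookup tables once: identity for alpha, 255 - i for colour.
        Pixel_8 identity[256], invert[256];
        for (int i = 0; i < 256; i++) {
            identity[i] = (Pixel_8)i;
            invert[i] = (Pixel_8)(255 - i);
        }

        // Apply the tables in place: the vectorised equivalent of the
        // per-pixel loop in the category above.
        vImageTableLookUp_ARGB8888(&buffer, &buffer, identity, invert, invert, invert,
                                   kvImageNoFlags);

        CGImageRef cgImage = vImageCreateCGImageFromBuffer(&buffer, &format, NULL, NULL,
                                                           kvImageNoFlags, &error);
        free(buffer.data);
        if (!cgImage) return nil;

        UIImage *result = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
        return result;
    }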

I created a quick Swift extension to do just that. Also, because UIImages based on CIImages break down in a lot of situations (most libraries assume a CGImage is set), I added an option to return a UIImage backed by a CGImage rather than by the modified CIImage:

    extension UIImage {
        func inverseImage(cgResult: Bool) -> UIImage? {
            let coreImage = UIKit.CIImage(image: self)
            guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
            filter.setValue(coreImage, forKey: kCIInputImageKey)
            guard let result = filter.valueForKey(kCIOutputImageKey) as? UIKit.CIImage else { return nil }
            if cgResult {
                // I've found that UIImages that are based on CIImages don't work properly with a lot of calls
                return UIImage(CGImage: CIContext(options: nil).createCGImage(result, fromRect: result.extent))
            }
            return UIImage(CIImage: result)
        }
    }

Swift 3 update (of @BadPirate's answer above):

    extension UIImage {
        func inverseImage(cgResult: Bool) -> UIImage? {
            let coreImage = UIKit.CIImage(image: self)
            guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
            filter.setValue(coreImage, forKey: kCIInputImageKey)
            guard let result = filter.value(forKey: kCIOutputImageKey) as? UIKit.CIImage else { return nil }
            if cgResult {
                // I've found that UIImages that are based on CIImages don't work properly with a lot of calls
                return UIImage(cgImage: CIContext(options: nil).createCGImage(result, from: result.extent)!)
            }
            return UIImage(ciImage: result)
        }
    }