iOS Tesseract OCR image preparation

I want to implement an OCR application that can recognize text from photos.

I have successfully compiled and integrated the Tesseract engine on iOS. I get reasonable detection when photographing a clean document (or a photo of this text taken off the screen), but for other text such as road signs, shop signs, or text on colored backgrounds, detection fails.

The question is what kind of image processing preparation is needed to get better recognition. For example, I expect we need to convert the images to grayscale/black-and-white and fix the contrast, etc.

How can this be done on iOS? Is there a package for it?

I am currently working on the same thing. I found that a PNG saved out of Photoshop worked fine, but an image that originally came from the camera and was then imported into the app never worked. Don't ask me to explain it, but applying this function made those images work. Maybe it will work for you too.

    // this does the trick to have tesseract accept the UIImage.
    UIImage * gs_convert_image (UIImage * src_img) {
        CGColorSpaceRef d_colorSpace = CGColorSpaceCreateDeviceRGB();
        /*
         * Note we specify 4 bytes per pixel here even though we ignore the
         * alpha value; you can't specify 3 bytes per-pixel.
         */
        size_t d_bytesPerRow = src_img.size.width * 4;
        unsigned char * imgData = (unsigned char*)malloc(src_img.size.height * d_bytesPerRow);
        CGContextRef context = CGBitmapContextCreate(imgData, src_img.size.width,
                                                     src_img.size.height,
                                                     8, d_bytesPerRow, d_colorSpace,
                                                     kCGImageAlphaNoneSkipFirst);

        UIGraphicsPushContext(context);

        // These next two lines 'flip' the drawing so it doesn't appear upside-down.
        CGContextTranslateCTM(context, 0.0, src_img.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);

        // Use UIImage's drawInRect: instead of the CGContextDrawImage function,
        // otherwise you'll have issues when the source image is in portrait orientation.
        [src_img drawInRect:CGRectMake(0.0, 0.0, src_img.size.width, src_img.size.height)];
        UIGraphicsPopContext();

        /*
         * At this point, we have the raw ARGB pixel data in the imgData buffer, so
         * we can perform whatever image processing here.
         */

        // After we've processed the raw data, turn it back into a UIImage instance.
        CGImageRef new_img = CGBitmapContextCreateImage(context);
        UIImage * convertedImage = [[UIImage alloc] initWithCGImage:new_img];

        CGImageRelease(new_img);
        CGContextRelease(context);
        CGColorSpaceRelease(d_colorSpace);
        free(imgData);

        return convertedImage;
    }

I have also experimented a lot with preparing images for tesseract. Resizing, converting to grayscale, and then adjusting brightness and contrast seems to work best.
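For the grayscale plus brightness/contrast step, a minimal Core Image sketch could look like the following; the filter values are illustrative, not tuned, and the method name is just an example:

    // Grayscale + brightness/contrast adjustment via CIColorControls.
    // Requires: #import <CoreImage/CoreImage.h>
    - (UIImage *)adjustForOCR:(UIImage *)image {
        CIImage *input = [CIImage imageWithCGImage:image.CGImage];
        CIFilter *adjust = [CIFilter filterWithName:@"CIColorControls"];
        [adjust setValue:input forKey:kCIInputImageKey];
        [adjust setValue:@0.0 forKey:kCIInputSaturationKey]; // 0 saturation = grayscale
        [adjust setValue:@0.1 forKey:kCIInputBrightnessKey]; // slight brightness lift
        [adjust setValue:@1.5 forKey:kCIInputContrastKey];   // boost contrast

        CIContext *ciContext = [CIContext contextWithOptions:nil];
        CGImageRef cgImage = [ciContext createCGImage:adjust.outputImage
                                              fromRect:adjust.outputImage.extent];
        UIImage *result = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
        return result;
    }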

I have also tried the GPUImage library ( https://github.com/BradLarson/GPUImage ). The GPUImageAverageLuminanceThresholdFilter seemed to give me a nicely adjusted image, but tesseract didn't seem to work well with it.
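If you want to try the same filter, a minimal sketch of applying it to a single still image (assuming GPUImage is already linked into the project) looks roughly like this:

    #import "GPUImage.h"

    // One-shot filtering of a still image with the average-luminance threshold filter.
    UIImage *thresholdedImage(UIImage *source) {
        GPUImageAverageLuminanceThresholdFilter *filter =
            [[GPUImageAverageLuminanceThresholdFilter alloc] init];
        // imageByFilteringImage: runs the filter on the GPU and returns a new UIImage.
        return [filter imageByFilteringImage:source];
    }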

I have also put OpenCV into my project and plan to try its image routines, maybe even some box detection to find the text regions (I hope this will speed tesseract up).
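As a starting point, here is a rough Objective-C++ sketch (the file needs a .mm extension) of the kind of OpenCV binarization I have in mind; it is untested, the block size and offset values are guesses, and on older OpenCV versions the conversion header is opencv2/highgui/ios.h instead:

    #import <opencv2/opencv.hpp>
    #import <opencv2/imgcodecs/ios.h> // UIImageToMat / MatToUIImage

    static UIImage *binarizeForTesseract(UIImage *source) {
        cv::Mat rgba, gray, binary;
        UIImageToMat(source, rgba);
        cv::cvtColor(rgba, gray, cv::COLOR_RGBA2GRAY);
        // Adaptive thresholding handles uneven lighting (signs, colored backgrounds)
        // better than a single global threshold.
        cv::adaptiveThreshold(gray, binary, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                              cv::THRESH_BINARY, 31, 15);
        return MatToUIImage(binary);
    }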

I have used the code above, but I also added two other function calls to convert the image so that it will work with Tesseract.

First, I used an image-resizing routine to convert the image to 640 x 640, which Tesseract seems to find more manageable.

    -(UIImage *)resizeImage:(UIImage *)image {
        CGImageRef imageRef = [image CGImage];
        CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
        CGColorSpaceRef colorSpaceInfo = CGColorSpaceCreateDeviceRGB();
        if (alphaInfo == kCGImageAlphaNone)
            alphaInfo = kCGImageAlphaNoneSkipLast;

        int width, height;
        width = 640;  //[image size].width;
        height = 640; //[image size].height;

        CGContextRef bitmap;
        if (image.imageOrientation == UIImageOrientationUp ||
            image.imageOrientation == UIImageOrientationDown) {
            bitmap = CGBitmapContextCreate(NULL, width, height,
                                           CGImageGetBitsPerComponent(imageRef),
                                           CGImageGetBytesPerRow(imageRef),
                                           colorSpaceInfo, alphaInfo);
        } else {
            bitmap = CGBitmapContextCreate(NULL, height, width,
                                           CGImageGetBitsPerComponent(imageRef),
                                           CGImageGetBytesPerRow(imageRef),
                                           colorSpaceInfo, alphaInfo);
        }

        if (image.imageOrientation == UIImageOrientationLeft) {
            NSLog(@"image orientation left");
            CGContextRotateCTM(bitmap, radians(90));
            CGContextTranslateCTM(bitmap, 0, -height);
        } else if (image.imageOrientation == UIImageOrientationRight) {
            NSLog(@"image orientation right");
            CGContextRotateCTM(bitmap, radians(-90));
            CGContextTranslateCTM(bitmap, -width, 0);
        } else if (image.imageOrientation == UIImageOrientationUp) {
            NSLog(@"image orientation up");
        } else if (image.imageOrientation == UIImageOrientationDown) {
            NSLog(@"image orientation down");
            CGContextTranslateCTM(bitmap, width, height);
            CGContextRotateCTM(bitmap, radians(-180.));
        }

        CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
        CGImageRef ref = CGBitmapContextCreateImage(bitmap);
        UIImage *result = [UIImage imageWithCGImage:ref];

        CGContextRelease(bitmap);
        CGImageRelease(ref);
        CGColorSpaceRelease(colorSpaceInfo); // release the color space created above
        return result;
    }

For the radians call to work, make sure you declare it above the @implementation:

 static inline double radians (double degrees) {return degrees * M_PI/180;} 

Then I convert the image to grayscale.

I found a post on converting an image to grayscale.

I have used the code from there successfully, and I can now read text of different colors on different colored backgrounds.

I modified the code slightly so that it works as a function inside my own class rather than as someone else's separate class:

    - (UIImage *) toGrayscale:(UIImage*)img {
        const int RED = 1;
        const int GREEN = 2;
        const int BLUE = 3;

        // Create image rectangle with current image width/height
        CGRect imageRect = CGRectMake(0, 0, img.size.width * img.scale, img.size.height * img.scale);

        int width = imageRect.size.width;
        int height = imageRect.size.height;

        // the pixels will be painted to this array
        uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));

        // clear the pixels so any transparency is preserved
        memset(pixels, 0, width * height * sizeof(uint32_t));

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

        // create a context with RGBA pixels
        CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8,
                                                     width * sizeof(uint32_t), colorSpace,
                                                     kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

        // paint the bitmap to our context which will fill in the pixels array
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), [img CGImage]);

        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];

                // convert to grayscale using recommended method:
                // http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
                uint32_t gray = 0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE];

                // set the pixels to gray
                rgbaPixel[RED] = gray;
                rgbaPixel[GREEN] = gray;
                rgbaPixel[BLUE] = gray;
            }
        }

        // create a new CGImageRef from our context with the modified pixels
        CGImageRef image = CGBitmapContextCreateImage(context);

        // we're done with the context, color space, and pixels
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        free(pixels);

        // make a new UIImage to return
        UIImage *resultUIImage = [UIImage imageWithCGImage:image scale:img.scale orientation:UIImageOrientationUp];

        // we're done with image now too
        CGImageRelease(image);

        return resultUIImage;
    }
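Putting the pieces of this answer together, the call site looks roughly like this (here photo is a hypothetical UIImage coming from the camera, and the final OCR call depends on whichever Tesseract wrapper you use):

    UIImage *prepared = [self resizeImage:photo];   // 640 x 640, orientation fixed
    prepared = [self toGrayscale:prepared];         // luminance-weighted grayscale
    prepared = gs_convert_image(prepared);          // re-render so Tesseract accepts it
    // then hand `prepared` to your Tesseract wrapper for recognition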