iOS face detector orientation and the CIImage orientation setting

EDIT: I found this code, which helps with front-camera images: http://blog.logichigh.com/2008/06/05/uiimage-fix/

Hopefully someone else has had a similar problem and can help me out. I haven't found a solution yet. (This may look a bit long, but most of it is just helper code.)

I'm using the iOS face detector on images captured with the camera (front and back) as well as images from the gallery (I'm using UIImagePicker both for capturing images with the camera and for selecting images from the gallery; I'm not using AVFoundation for taking pictures as in the SquareCam demo).

I was getting really messed-up detection coordinates (when I got any at all), so I wrote a short debugging method to get the face bounds, plus a utility that draws a square over them, because I wanted to check which orientation the detector was working in:

    #define RECTBOX(R) [NSValue valueWithCGRect:R]

    - (NSArray *)detectFaces:(UIImage *)inputimage
    {
        _detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                       context:nil
                                       options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyLow
                                                                           forKey:CIDetectorAccuracy]];
        NSNumber *orientation = [NSNumber numberWithInt:[inputimage imageOrientation]];
        // i also saw code where they add +1 to the orientation
        NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation
                                                                 forKey:CIDetectorImageOrientation];
        CIImage *ciimage = [CIImage imageWithCGImage:inputimage.CGImage options:imageOptions];

        // try like this first
        // NSArray *features = [_detector featuresInImage:ciimage options:imageOptions];

        // if not working go on to this (trying all orientations)
        NSArray *features = nil;
        NSMutableArray *returnArray = [NSMutableArray array];
        int exif;
        // ios face detector, trying all of the orientations
        for (exif = 1; exif <= 8; exif++) {
            NSNumber *orientation = [NSNumber numberWithInt:exif];
            NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation
                                                                     forKey:CIDetectorImageOrientation];
            NSTimeInterval start = [NSDate timeIntervalSinceReferenceDate];
            features = [_detector featuresInImage:ciimage options:imageOptions];
            NSTimeInterval duration = [NSDate timeIntervalSinceReferenceDate] - start;
            NSLog(@"faceDetection: facedetection total runtime is %fs", duration);
            if (features.count > 0) {
                NSString *str = [NSString stringWithFormat:@"found faces using exif %d", exif];
                [faceDetection log:str];
                break;
            }
        }

        if (features.count > 0) {
            [faceDetection log:@"-I- Found faces with ios face detector"];
            for (CIFaceFeature *feature in features) {
                CGRect rect = feature.bounds;
                // flip from Core Image's bottom-left origin to UIKit's top-left origin
                CGRect r = CGRectMake(rect.origin.x,
                                      inputimage.size.height - rect.origin.y - rect.size.height,
                                      rect.size.width,
                                      rect.size.height);
                [returnArray addObject:RECTBOX(r)];
            }
            return returnArray;
        } else {
            // no faces from iOS face detector. try OpenCV detector
        }
        return returnArray;
    }

![1]

After trying a lot of different photos, I noticed that the face detector's orientation is not consistent with the camera image's properties. I took a bunch of photos with the front camera where the UIImage orientation was 3 (querying imageOrientation), but the face detector found no faces with that setting. When running through all eight EXIF possibilities, the detector would finally pick up faces, but under a different orientation altogether.

[1]: http://i.stack.imgur.com/D7bkZ.jpg

How can I fix this? Is there something wrong with my code?

The other problem (closely tied to the face detector, though): when the detector picks up faces, but under the "wrong" orientation (this mostly happens with the front camera), the UIImage used initially displays correctly in the UIImageView, but when I draw a square overlay (I'm using OpenCV in my app, so I decided to convert the UIImage to a cv::Mat and draw the overlay with OpenCV), the whole image is rotated 90 degrees (only the cv::Mat image, not the UIImage I display initially).

The only reasoning I can come up with is that the face detector is messing with some buffer (context?) that the UIImage-to-cv::Mat conversion is also using. How can I separate these buffers?

The code that converts a UIImage into a cv::Mat is (from the "well-known" UIImage category):

    - (cv::Mat)CVMat
    {
        CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
        CGFloat cols = self.size.width;
        CGFloat rows = self.size.height;

        cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

        CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to backing data
                                                        cols,           // Width of bitmap
                                                        rows,           // Height of bitmap
                                                        8,              // Bits per component
                                                        cvMat.step[0],  // Bytes per row
                                                        colorSpace,     // Colorspace
                                                        kCGImageAlphaNoneSkipLast |
                                                        kCGBitmapByteOrderDefault); // Bitmap info flags

        CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
        CGContextRelease(contextRef);

        return cvMat;
    }

    - (id)initWithCVMat:(const cv::Mat &)cvMat
    {
        NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

        CGColorSpaceRef colorSpace;
        if (cvMat.elemSize() == 1) {
            colorSpace = CGColorSpaceCreateDeviceGray();
        } else {
            colorSpace = CGColorSpaceCreateDeviceRGB();
        }

        CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

        CGImageRef imageRef = CGImageCreate(cvMat.cols,                 // Width
                                            cvMat.rows,                 // Height
                                            8,                          // Bits per component
                                            8 * cvMat.elemSize(),       // Bits per pixel
                                            cvMat.step[0],              // Bytes per row
                                            colorSpace,                 // Colorspace
                                            kCGImageAlphaNone |
                                            kCGBitmapByteOrderDefault,  // Bitmap info flags
                                            provider,                   // CGDataProviderRef
                                            NULL,                       // Decode
                                            false,                      // Should interpolate
                                            kCGRenderingIntentDefault); // Intent

        self = [self initWithCGImage:imageRef];
        CGImageRelease(imageRef);
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpace);

        return self;
    }

    - (cv::Mat)CVRgbMat
    {
        cv::Mat tmpimage = self.CVMat;
        cv::Mat image;
        cvtColor(tmpimage, image, cv::COLOR_BGRA2BGR);
        return image;
    }

    - (void)imagePickerController:(UIImagePickerController *)picker
            didFinishPickingImage:(UIImage *)img
                      editingInfo:(NSDictionary *)editInfo
    {
        self.prevImage = img;
        // self.previewView.image = img;

        NSArray *arr = [[faceDetection sharedFaceDetector] detectFaces:img];
        for (id r in arr) {
            CGRect rect = RECTUNBOX(r);
            // self.previewView.image = img;
            self.previewView.image = [utils drawSquareOnImage:img square:rect];
        }
        [self.imgPicker dismissModalViewControllerAnimated:YES];
    }

I don't think it's a good idea to rotate a whole pile of image pixels just to match the CIFaceFeature. As you can imagine, redrawing in the rotated orientation is very heavy. I had the same problem, and I solved it by converting the coordinate system of the CIFaceFeature with respect to the UIImageOrientation. I extended the CIFaceFeature class with some conversion methods to get the correct point locations and bounds with respect to the UIImage and its UIImageView (or the CALayer of a UIView). The complete implementation is posted here: https://gist.github.com/laoyang/5747004 . You can use it directly.

Here is the most basic conversion for a point from a CIFaceFeature; the returned CGPoint is converted based on the image's orientation:

    - (CGPoint)pointForImage:(UIImage *)image fromPoint:(CGPoint)originalPoint
    {
        CGFloat imageWidth = image.size.width;
        CGFloat imageHeight = image.size.height;
        CGPoint convertedPoint;

        switch (image.imageOrientation) {
            case UIImageOrientationUp:
                convertedPoint.x = originalPoint.x;
                convertedPoint.y = imageHeight - originalPoint.y;
                break;
            case UIImageOrientationDown:
                convertedPoint.x = imageWidth - originalPoint.x;
                convertedPoint.y = originalPoint.y;
                break;
            case UIImageOrientationLeft:
                convertedPoint.x = imageWidth - originalPoint.y;
                convertedPoint.y = imageHeight - originalPoint.x;
                break;
            case UIImageOrientationRight:
                convertedPoint.x = originalPoint.y;
                convertedPoint.y = originalPoint.x;
                break;
            case UIImageOrientationUpMirrored:
                convertedPoint.x = imageWidth - originalPoint.x;
                convertedPoint.y = imageHeight - originalPoint.y;
                break;
            case UIImageOrientationDownMirrored:
                convertedPoint.x = originalPoint.x;
                convertedPoint.y = originalPoint.y;
                break;
            case UIImageOrientationLeftMirrored:
                convertedPoint.x = imageWidth - originalPoint.y;
                convertedPoint.y = originalPoint.x;
                break;
            case UIImageOrientationRightMirrored:
                convertedPoint.x = originalPoint.y;
                convertedPoint.y = imageHeight - originalPoint.x;
                break;
            default:
                break;
        }
        return convertedPoint;
    }

And here are the category methods built on the conversion above:

    // Get converted features with respect to the imageOrientation property
    - (CGPoint)leftEyePositionForImage:(UIImage *)image;
    - (CGPoint)rightEyePositionForImage:(UIImage *)image;
    - (CGPoint)mouthPositionForImage:(UIImage *)image;
    - (CGRect)boundsForImage:(UIImage *)image;

    // Get normalized features (0-1) with respect to the imageOrientation property
    - (CGPoint)normalizedLeftEyePositionForImage:(UIImage *)image;
    - (CGPoint)normalizedRightEyePositionForImage:(UIImage *)image;
    - (CGPoint)normalizedMouthPositionForImage:(UIImage *)image;
    - (CGRect)normalizedBoundsForImage:(UIImage *)image;

    // Get feature location inside of a given UIView size with respect to the imageOrientation property
    - (CGPoint)leftEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
    - (CGPoint)rightEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
    - (CGPoint)mouthPositionForImage:(UIImage *)image inView:(CGSize)viewSize;
    - (CGRect)boundsForImage:(UIImage *)image inView:(CGSize)viewSize;

(One more note: make sure to specify the correct EXIF orientation, derived from the UIImage orientation, when extracting the face features. Very confusing... this is what I did:)

    int exifOrientation;
    switch (self.image.imageOrientation) {
        case UIImageOrientationUp:
            exifOrientation = 1;
            break;
        case UIImageOrientationDown:
            exifOrientation = 3;
            break;
        case UIImageOrientationLeft:
            exifOrientation = 8;
            break;
        case UIImageOrientationRight:
            exifOrientation = 6;
            break;
        case UIImageOrientationUpMirrored:
            exifOrientation = 2;
            break;
        case UIImageOrientationDownMirrored:
            exifOrientation = 4;
            break;
        case UIImageOrientationLeftMirrored:
            exifOrientation = 5;
            break;
        case UIImageOrientationRightMirrored:
            exifOrientation = 7;
            break;
        default:
            break;
    }

    NSDictionary *detectorOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh };
    CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                                  context:nil
                                                  options:detectorOptions];
    NSArray *features = [faceDetector featuresInImage:[CIImage imageWithCGImage:self.image.CGImage]
                                              options:@{ CIDetectorImageOrientation :
                                                         [NSNumber numberWithInt:exifOrientation] }];

iOS 10 and Swift 3

You can check Apple's sample code; it can detect faces, or the values of barcodes and QR codes:

https://developer.apple.com/library/content/samplecode/AVCamBarcode/Introduction/Intro.html