CIDetector gives wrong positions for facial features

By now I know the coordinate systems are getting mixed up. I have tried flipping the view and the imageView; nothing. I then tried flipping the coordinates in the function, and I still get the same problem. I know it detects the face, eyes, and mouth, but when I try to place the overlay boxes from the sample code they are out of position (to be exact, they end up off to the right of the screen). I am stumped as to why this is happening.

I'll post some code, since I know some of you like specifics:

    -(void)faceDetector
    {
        // Load the picture for face detection
        // UIImageView* image = [[UIImageView alloc] initWithImage:mainImage];
        [self.imageView setImage:mainImage];
        [self.imageView setUserInteractionEnabled:YES];

        // Draw the face detection image
        // [self.view addSubview:self.imageView];

        // Execute the method used to markFaces in background
        // [self performSelectorInBackground:@selector(markFaces:) withObject:self.imageView];

        // flip image on y-axis to match coordinate system used by core image
        // [self.imageView setTransform:CGAffineTransformMakeScale(1, -1)];
        // flip the entire window to make everything right side up
        // [self.view setTransform:CGAffineTransformMakeScale(1, -1)];
        // [toolbar setTransform:CGAffineTransformMakeScale(1, -1)];
        [toolbar setFrame:CGRectMake(0, 0, 320, 44)];

        // Execute the method used to markFaces in background
        [self performSelectorInBackground:@selector(markFaces:) withObject:_imageView];
        // [self markFaces:self.imageView];
    }

    -(void)markFaces:(UIImageView *)facePicture
    {
        // draw a CI image with the previously loaded face detection picture
        CIImage* image = [CIImage imageWithCGImage:facePicture.image.CGImage];

        // create a face detector - since speed is not an issue we'll use a high accuracy detector
        CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                                  context:nil
                                                  options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];

        // CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
        CGAffineTransform transform = CGAffineTransformMakeScale(self.view.frame.size.width/mainImage.size.width, -self.view.frame.size.height/mainImage.size.height);
        transform = CGAffineTransformTranslate(transform, 0, -self.imageView.bounds.size.height);

        // create an array containing all the detected faces from the detector
        NSDictionary* imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
        NSArray* features = [detector featuresInImage:image options:imageOptions];
        // NSArray* features = [detector featuresInImage:image];

        NSLog(@"Marking Faces: Count: %d", [features count]);

        // we'll iterate through every detected face. CIFaceFeature provides us
        // with the width for the entire face, and the coordinates of each eye
        // and the mouth if detected. Also provided are BOOL's for the eye's and
        // mouth so we can check if they already exist.
        for(CIFaceFeature* faceFeature in features)
        {
            // create a UIView using the bounds of the face
            // UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
            CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);

            // get the width of the face
            // CGFloat faceWidth = faceFeature.bounds.size.width;
            CGFloat faceWidth = faceRect.size.width;

            // create a UIView using the bounds of the face
            UIView *faceView = [[UIView alloc] initWithFrame:faceRect];

            // add a border around the newly created UIView
            faceView.layer.borderWidth = 1;
            faceView.layer.borderColor = [[UIColor redColor] CGColor];

            // add the new view to create a box around the face
            [self.imageView addSubview:faceView];
            NSLog(@"Face -> X: %f, Y: %f, W: %f, H: %f", faceRect.origin.x, faceRect.origin.y, faceRect.size.width, faceRect.size.height);

            if(faceFeature.hasLeftEyePosition)
            {
                // create a UIView with a size based on the width of the face
                CGPoint leftEye = CGPointApplyAffineTransform(faceFeature.leftEyePosition, transform);
                UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(leftEye.x-faceWidth*0.15, leftEye.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
                // change the background color of the eye view
                [leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
                // set the position of the leftEyeView based on the face
                [leftEyeView setCenter:leftEye];
                // round the corners
                leftEyeView.layer.cornerRadius = faceWidth*0.15;
                // add the view to the window
                [self.imageView addSubview:leftEyeView];
                NSLog(@"Has Left Eye -> X: %f, Y: %f", leftEye.x, leftEye.y);
            }

            if(faceFeature.hasRightEyePosition)
            {
                // create a UIView with a size based on the width of the face
                CGPoint rightEye = CGPointApplyAffineTransform(faceFeature.rightEyePosition, transform);
                UIView* leftEye = [[UIView alloc] initWithFrame:CGRectMake(rightEye.x-faceWidth*0.15, rightEye.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
                // change the background color of the eye view
                [leftEye setBackgroundColor:[[UIColor yellowColor] colorWithAlphaComponent:0.3]];
                // set the position of the rightEyeView based on the face
                [leftEye setCenter:rightEye];
                // round the corners
                leftEye.layer.cornerRadius = faceWidth*0.15;
                // add the new view to the window
                [self.imageView addSubview:leftEye];
                NSLog(@"Has Right Eye -> X: %f, Y: %f", rightEye.x, rightEye.y);
            }

            // if(faceFeature.hasMouthPosition)
            // {
            //     // create a UIView with a size based on the width of the face
            //     UIView* mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x-faceWidth*0.2, faceFeature.mouthPosition.y-faceWidth*0.2, faceWidth*0.4, faceWidth*0.4)];
            //     // change the background color for the mouth to green
            //     [mouth setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];
            //     // set the position of the mouthView based on the face
            //     [mouth setCenter:faceFeature.mouthPosition];
            //     // round the corners
            //     mouth.layer.cornerRadius = faceWidth*0.2;
            //     // add the new view to the window
            //     [self.imageView addSubview:mouth];
            // }
        }
    }

I know the code segment is a bit long, but that is the main gist of it. The only other relevant thing is that I have a UIImagePickerController that lets the user either pick an existing image or take a new one. The image is then set into the UIImageView on screen to be displayed along with the various boxes and circles, but I've had no luck getting them to show up in the right place. :/

Any help would be appreciated. Thanks ~

Update:

I have now added a picture of what currently happens so you can get an idea. I applied the new scaling, which scales better than what I had before, but otherwise nothing changed.

Wrong face and eye positions

Just use the code from Apple's SquareCam sample app. It aligns the squares correctly in any orientation, for both the front and back cameras. Interpolate along faceRect for the correct eye and mouth positions. Note: you have to swap the x position with the y position from the face features. Not sure why you have to do the swap, but that gives you the correct positions.

Unless the image view has exactly the same size as the image, your transform is missing a scale. Start with

  CGAffineTransformMakeScale( viewWidth / imageWidth, - viewHeight / imageHeight ) 

where viewWidth and viewHeight are the size of your view, and imageWidth and imageHeight are the size of your image.

So after playing around with @Sven's suggestion, I figured it out.

    CGAffineTransform transform = CGAffineTransformMakeScale(self.imageView.bounds.size.width/mainImage.size.width, -self.imageView.bounds.size.height/mainImage.size.height);
    transform = CGAffineTransformRotate(transform, degreesToRadians(270));

I had to adjust the transform for the scale between the image size and the imageView size, and then for some reason I also had to rotate it, but it works perfectly now.
