glReadPixels only saves a snapshot of 1/4 the size of the screen

I am developing an augmented reality application for a client. The OpenGL and EAGL parts have been done in Unity 3D, and the view is implemented in my application.

What I need now is a button that takes a screenshot of the OpenGL content, which is the rearmost view.

I tried to write it myself, but when I tap the button with the assigned IBAction, it only saves a quarter of the screen (the bottom-left corner), although it does save to the camera roll.

So basically: how can I save the entire screen, and not just one quarter of it?

Here is the method in my code:

 -(IBAction)tagBillede:(id)sender {
     UIImage *outputImage = nil;
     CGRect s = CGRectMake(0, 0, 320, 480);

     uint8_t *buffer = (uint8_t *)malloc(s.size.width * s.size.height * 4);
     if (!buffer) goto error;

     glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

     CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
     if (!ref) goto error;

     CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4,
                                     CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref,
                                     NULL, true, kCGRenderingIntentDefault);
     if (!iref) goto error;

     size_t width = CGImageGetWidth(iref);
     size_t height = CGImageGetHeight(iref);
     size_t length = width * height * 4;

     uint32_t *pixels = (uint32_t *)malloc(length);
     if (!pixels) goto error;

     CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                  CGImageGetColorSpace(iref),
                                                  kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);
     if (!context) goto error;

     // Flip vertically: glReadPixels returns rows bottom-up, Core Graphics expects top-down.
     CGAffineTransform transform = CGAffineTransformIdentity;
     transform = CGAffineTransformMakeTranslation(0.0f, height);
     transform = CGAffineTransformScale(transform, 1.0, -1.0);
     CGContextConcatCTM(context, transform);
     CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);

     CGImageRef outputRef = CGBitmapContextCreateImage(context);
     if (!outputRef) goto error;

     outputImage = [UIImage imageWithCGImage:outputRef];
     if (!outputImage) goto error;

     CGDataProviderRelease(ref);
     CGImageRelease(iref);
     CGContextRelease(context);
     CGImageRelease(outputRef);
     free(pixels);
     free(buffer);

     UIImageWriteToSavedPhotosAlbum(outputImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
     return;

 error:
     // The error label was missing from the snippet as posted; added so the code compiles.
     NSLog(@"tagBillede: screenshot capture failed");
 }

I suspect you are using a device with a 640×960 Retina display. You need to take the screen scale into account; it is 1.0 on non-Retina displays and 2.0 on Retina displays. Try initializing s like this:

 CGFloat scale = UIScreen.mainScreen.scale;
 CGRect s = CGRectMake(0, 0, 320 * scale, 480 * scale);

If the device is a Retina device, you need to adjust for that on the OpenGL side yourself. You are effectively asking to capture only the bottom-left corner, at half the width and half the height.

You need to double the width and height for a Retina screen, but really, you should multiply them by the screen's scale:

 CGFloat scale = [[UIScreen mainScreen] scale];
 CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);

Thought I'd chime in and say thanks at the same time :)

It now works like a charm; here is the cleaned-up code:

 UIImage *outputImage = nil;
 CGFloat scale = [[UIScreen mainScreen] scale];
 CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);

 uint8_t *buffer = (uint8_t *)malloc(s.size.width * s.size.height * 4);
 glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

 CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
 CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4,
                                 CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, ref,
                                 NULL, true, kCGRenderingIntentDefault);

 size_t width = CGImageGetWidth(iref);
 size_t height = CGImageGetHeight(iref);
 size_t length = width * height * 4;

 uint32_t *pixels = (uint32_t *)malloc(length);
 CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                              CGImageGetColorSpace(iref),
                                              kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);

 CGAffineTransform transform = CGAffineTransformIdentity;
 transform = CGAffineTransformMakeTranslation(0.0f, height);
 transform = CGAffineTransformScale(transform, 1.0, -1.0);
 CGContextConcatCTM(context, transform);
 CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);

 CGImageRef outputRef = CGBitmapContextCreateImage(context);
 outputImage = [UIImage imageWithCGImage:outputRef];

 CGDataProviderRelease(ref);
 CGImageRelease(iref);
 CGContextRelease(context);
 CGImageRelease(outputRef);
 free(pixels);
 free(buffer);

 UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);