Why does glReadPixels() fail in this code on iOS 6.0?

Here is the code used to read back an image from an OpenGL ES scene:

    // Release callback for the data provider: frees the pixel buffer once the
    // image no longer references it. (The original code freed buffer2 right
    // after creating the image, which is a use-after-free while the UIImage
    // is still alive.)
    static void releasePixelData(void *info, const void *data, size_t size) {
        free((void *)data);
    }

    - (UIImage *)getImage {
        GLint width;
        GLint height;
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
        NSLog(@"%d %d", width, height);

        NSInteger myDataLength = width * height * 4;

        // Allocate a buffer and read the pixels into it.
        GLubyte *buffer = (GLubyte *)malloc(myDataLength);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

        // GL renders "upside down", so flip the rows top-to-bottom into a new buffer.
        // There's gotta be a better way, but this works.
        GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width * 4; x++) {
                buffer2[((height - 1) - y) * width * 4 + x] = buffer[y * 4 * width + x];
            }
        }
        free(buffer);

        // Make a data provider from the flipped buffer. The provider does not
        // copy the bytes, so buffer2 must outlive the image; the release
        // callback frees it when the provider is done with it.
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, releasePixelData);

        // Prep the ingredients.
        int bitsPerComponent = 8;
        int bitsPerPixel = 32;
        int bytesPerRow = 4 * width;
        CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
        CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
        CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

        // Make the CGImage, then make the UIImage from that.
        CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                            colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
        UIImage *myImage = [UIImage imageWithCGImage:imageRef];

        CGImageRelease(imageRef);
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpaceRef);
        return myImage;
    }

This worked on iOS 5.x and earlier, but on iOS 6.0 it now returns a black image. Why does glReadPixels() fail on iOS 6.0?

Set the layer's drawable properties so the backing store is retained after presentation:

    CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
    eaglLayer.drawableProperties = @{
        kEAGLDrawablePropertyRetainedBacking: [NSNumber numberWithBool:YES],
        kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8
    };

 kEAGLDrawablePropertyRetainedBacking = YES 
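For the retained-backing property to take effect, it has to be set before the renderbuffer storage is allocated from the layer. Here is a minimal setup sketch under that assumption; the `_context` (EAGLContext *) and `_colorRenderbuffer` (GLuint) ivars are illustrative names, not from the original code:

    // Illustrative setup; _context and _colorRenderbuffer are assumed ivars.
    - (void)setupLayer {
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        // Configure drawable properties BEFORE allocating renderbuffer storage.
        eaglLayer.drawableProperties = @{
            kEAGLDrawablePropertyRetainedBacking: @YES,
            kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8
        };

        glGenRenderbuffers(1, &_colorRenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderbuffer);
        // The layer's drawable properties are captured at this call.
        [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];
    }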

(I don't know why this trick works, but it does.) The likely explanation: starting with iOS 6, the renderbuffer's contents are no longer preserved after -presentRenderbuffer: unless retained backing is requested, so a glReadPixels call made after presentation reads undefined (typically black) data.
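Retained backing does cost some performance, though. If you would rather leave it off, the other way out of the black-image problem is to call the capture method before the frame is presented, since the pixels are only guaranteed to be readable until -presentRenderbuffer: runs. A sketch of that ordering, with `render` and `drawFrame` as illustrative names:

    // Illustrative render loop: read the pixels after drawing but before
    // presenting, so no retained backing is needed.
    - (void)drawFrame {
        [self render]; // issue the GL drawing commands (illustrative)
        UIImage *snapshot = [self getImage]; // glReadPixels still sees the frame here
        glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderbuffer);
        [_context presentRenderbuffer:GL_RENDERBUFFER];
        // ... hand `snapshot` to whatever needs it ...
    }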

Try this method to take your screenshot. The output image is Mailimage:

    - (UIImage *)screenshot {
        // Create a graphics context with the target size.
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take
        // the screen scale into consideration; on earlier iOS versions, fall back
        // to UIGraphicsBeginImageContext.
        CGSize imageSize = [[UIScreen mainScreen] bounds].size;
        if (NULL != UIGraphicsBeginImageContextWithOptions)
            UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
        else
            UIGraphicsBeginImageContext(imageSize);

        CGContextRef context = UIGraphicsGetCurrentContext();

        // Iterate over every window from back to front.
        for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
            if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen]) {
                // -renderInContext: renders in the coordinate space of the layer,
                // so we must first apply the layer's geometry to the graphics context.
                CGContextSaveGState(context);
                // Center the context around the window's anchor point.
                CGContextTranslateCTM(context, [window center].x, [window center].y);
                // Apply the window's transform about the anchor point.
                CGContextConcatCTM(context, [window transform]);
                // Offset by the portion of the bounds left of and above the anchor point.
                CGContextTranslateCTM(context,
                                      -[window bounds].size.width * [[window layer] anchorPoint].x,
                                      -[window bounds].size.height * [[window layer] anchorPoint].y);
                // Render the layer hierarchy to the current context.
                [[window layer] renderInContext:context];
                // Restore the context.
                CGContextRestoreGState(context);
            }
        }

        // Retrieve the screenshot image.
        UIImage *Mailimage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return Mailimage;
    }
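A quick way to sanity-check the result is to write it to the photo library. One caveat, as far as I know: -renderInContext: does not capture OpenGL ES (CAEAGLLayer) content, so for a pure GL scene the glReadPixels approach above is still the way to go.

    UIImage *screenshot = [self screenshot];
    // Dump it into the photo library to verify the capture worked.
    UIImageWriteToSavedPhotosAlbum(screenshot, nil, nil, NULL);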