Memory leak in CoreImage/CoreVideo

I'm building an iOS app that does some basic detection. I get the raw frames from AVCaptureVideoDataOutput, convert the CMSampleBufferRef to a UIImage, resize the UIImage, and then convert it to a CVPixelBufferRef. As far as I can tell with Instruments, the leak is in the last part, where I convert the CGImage to a CVPixelBufferRef.
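For context, the frames come from a standard AVCaptureVideoDataOutput pipeline. A minimal sketch of that setup follows; the session preset, pixel format, and queue name here are illustrative assumptions, not the project's actual configuration:

#import <AVFoundation/AVFoundation.h>

// Minimal capture setup sketch; preset, pixel format, and queue name are assumptions.
- (void)setupCaptureSession {
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    NSError *error = nil;
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input && [session canAddInput:input]) {
        [session addInput:input];
    }

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    // BGRA output matches the kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst
    // flags used in -resizeSampleBuffer: below.
    output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    [output setSampleBufferDelegate:self queue:dispatch_queue_create("videoQueue", DISPATCH_QUEUE_SERIAL)];
    if ([session canAddOutput:output]) {
        [session addOutput:output];
    }

    [session startRunning];
}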

Here is the code I use:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    videof = [[ASMotionDetect alloc] initWithSampleImage:[self resizeSampleBuffer:sampleBuffer]];
    // ASMotionDetect is my class for detection and I use videof to calculate the movement
}

-(UIImage*)resizeSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    UIImage *img;
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer

    // Get information of the image
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);

    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!

    img = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    newContext = nil;

    img = [self resizeImageToSquare:img];
    return img;
}

-(UIImage*)resizeImageToSquare:(UIImage*)_temp {
    UIImage *img;
    int w = _temp.size.width;
    int h = _temp.size.height;
    CGRect rect;
    if (w > h) {
        rect = CGRectMake((w - h) / 2, 0, h, h);
    } else {
        rect = CGRectMake(0, (h - w) / 2, w, w);
    }

    img = [self crop:_temp inRect:rect];
    return img;
}

-(UIImage*)crop:(UIImage*)image inRect:(CGRect)rect {
    UIImage *sourceImage = image;
    CGRect selectionRect = rect;
    CGRect transformedRect = TransformCGRectForUIImageOrientation(selectionRect, sourceImage.imageOrientation, sourceImage.size);
    CGImageRef resultImageRef = CGImageCreateWithImageInRect(sourceImage.CGImage, transformedRect);
    UIImage *resultImage = [[UIImage alloc] initWithCGImage:resultImageRef scale:1.0 orientation:image.imageOrientation];
    CGImageRelease(resultImageRef);
    return resultImage;
}

In my detection class I have:

- (id)initWithSampleImage:(UIImage*)sampleImage {
    if ((self = [super init])) {
        _frame = new CVMatOpaque();
        _histograms = new CVMatNDOpaque[kGridSize * kGridSize];
        [self extractFrameFromImage:sampleImage];
    }
    return self;
}

- (void)extractFrameFromImage:(UIImage*)sampleImage {
    CGImageRef imageRef = [sampleImage CGImage];
    CVImageBufferRef imageBuffer = [self pixelBufferFromCGImage:imageRef];
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Collect some information required to extract the frame.
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);

    // Extract the frame, convert it to grayscale, and shove it in _frame.
    cv::Mat frame(height, width, CV_8UC4, baseAddress, bytesPerRow);
    cv::cvtColor(frame, frame, CV_BGR2GRAY);
    _frame->matrix = frame;

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGImageRelease(imageRef);
}

- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image {
    CVPixelBufferRef pxbuffer = NULL;
    int width = CGImageGetWidth(image)*2;
    int height = CGImageGetHeight(image)*2;

    NSMutableDictionary *attributes = [NSMutableDictionary dictionaryWithObjectsAndKeys:
                                       [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey,
                                       [NSNumber numberWithInt:width], kCVPixelBufferWidthKey,
                                       [NSNumber numberWithInt:height], kCVPixelBufferHeightKey,
                                       nil];
    CVPixelBufferPoolRef pixelBufferPool;
    CVReturn theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (__bridge CFDictionaryRef)attributes, &pixelBufferPool);
    NSParameterAssert(theError == kCVReturnSuccess);
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, width*4, rgbColorSpace, kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    /* here is the problem: */
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}

With Instruments I found that the problem is with the CVPixelBufferRef allocations, but I don't understand why. Can someone spot the problem?

Thanks

In -pixelBufferFromCGImage:, neither pxbuffer nor pixelBufferPool is released. That is expected for pxbuffer, since it is a return value, but not for pixelBufferPool: you create and leak one on every call of the method.

A quick fix should be to:

  1. release pixelBufferPool in -pixelBufferFromCGImage:
  2. release pxbuffer (the return value of -pixelBufferFromCGImage:) in -extractFrameFromImage:

You should also rename -pixelBufferFromCGImage: to -createPixelBufferFromCGImage: to make it clear that the method returns a retained object.
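Put together, a sketch of the fixed (and renamed) method and its caller might look like this; apart from the rename, only the memory-management lines change, everything else stays as posted:

// Renamed per the Create Rule: the "create" prefix signals that the
// caller owns (and must release) the returned buffer.
- (CVPixelBufferRef)createPixelBufferFromCGImage:(CGImageRef)image {
    CVPixelBufferRef pxbuffer = NULL;
    int width = (int)CGImageGetWidth(image) * 2;
    int height = (int)CGImageGetHeight(image) * 2;

    NSDictionary *attributes = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32ARGB),
                                  (id)kCVPixelBufferWidthKey : @(width),
                                  (id)kCVPixelBufferHeightKey : @(height) };
    CVPixelBufferPoolRef pixelBufferPool = NULL;
    CVReturn theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (__bridge CFDictionaryRef)attributes, &pixelBufferPool);
    NSParameterAssert(theError == kCVReturnSuccess);
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, width * 4, rgbColorSpace, kCGImageAlphaNoneSkipFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    // Fix 1: the pool is only needed to create the buffer, so release it
    // here instead of leaking one pool per call.
    CVPixelBufferPoolRelease(pixelBufferPool);

    return pxbuffer; // +1 retained; the caller must release it
}

- (void)extractFrameFromImage:(UIImage *)sampleImage {
    CVImageBufferRef imageBuffer = [self createPixelBufferFromCGImage:[sampleImage CGImage]];
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // ... read the pixels into _frame exactly as before ...
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Fix 2: balance the create call above.
    CVPixelBufferRelease(imageBuffer);
}

An alternative worth considering: a CVPixelBufferPool is designed to be created once and reused, so you could also keep it in an ivar instead of creating one per call. The sketch above keeps your per-call structure and just stops leaking the pool.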