AVCapture session slow to start after session restart

I have a main view controller and segue to a second view controller that runs an AVCaptureSession. The first time I segue from the main view controller to the capture-session controller, the capture session takes about 50 ms to start (checked with Instruments). I then segue back to the main view controller from the capture session, and then from the main controller back to the AVCaptureSession controller. Each trip from the main view controller to the AVCaptureSession controller takes longer to start, and by the 5th or 6th iteration the round trip takes around 10 seconds (compared with 50 ms the first time). I have pasted the relevant AVCapture session code below. Can anyone help solve this? Thanks.

This class (an NSObject subclass) manages the capture session for the second view controller, which is the one that actually runs the AVCaptureSession:

    #import "CaptureSessionManager.h"

    @implementation CaptureSessionManager

    @synthesize captureSession;
    @synthesize previewLayer;

    #pragma mark Capture Session Configuration

    - (id)init {
        if ((self = [super init])) {
            [self setCaptureSession:[[AVCaptureSession alloc] init]];
        }
        return self;
    }

    - (void)addVideoPreviewLayer {
        [self setPreviewLayer:[[[AVCaptureVideoPreviewLayer alloc] initWithSession:[self captureSession]] autorelease]];
        [[self previewLayer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    }

    - (void)addVideoInput {
        AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if (videoDevice) {
            NSError *error;
            AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
            if (!error) {
                if ([[self captureSession] canAddInput:videoIn])
                    [[self captureSession] addInput:videoIn];
                //else
                //    NSLog(@"Couldn't add video input");
            }
            //else
            //    NSLog(@"Couldn't create video input");
        }
        //else
        //    NSLog(@"Couldn't create video capture device");
    }

    - (void)dealloc {
        [[self captureSession] stopRunning];
        [previewLayer release], previewLayer = nil;
        [captureSession release], captureSession = nil;
        [super dealloc];
    }

    @end

Here is the viewDidLoad code of the AVCapture view controller, along with its viewDidDisappear:

    [self setCaptureManager:[[CaptureSessionManager alloc] init]];
    [[self captureManager] addVideoInput];
    [[self captureManager] addVideoPreviewLayer];

    CGRect layerRect = [[[self view] layer] bounds];
    [[[self captureManager] previewLayer] setBounds:layerRect];
    [[[self captureManager] previewLayer] setPosition:CGPointMake(CGRectGetMidX(layerRect), CGRectGetMidY(layerRect))];
    [[[self view] layer] addSublayer:[[self captureManager] previewLayer]];

    [[captureManager captureSession] startRunning];

    - (void)viewDidDisappear:(BOOL)animated {
        [super viewDidDisappear:YES];
        [[[self captureManager] previewLayer] removeFromSuperlayer];
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            [[captureManager captureSession] stopRunning];
        });
    }

[Memory allocations screenshot]

I ran into the same problem, and I found that this line was the main culprit:

 [[[self view] layer] addSublayer:[[self captureManager] previewLayer]]; 

Just remove the preview layer from its superlayer while deallocating, and the memory issue goes away. My teardown method is as follows:

    - (void)deallocSession {
        [captureVideoPreviewLayer removeFromSuperlayer];
        for (AVCaptureInput *input1 in session.inputs) {
            [session removeInput:input1];
        }
        for (AVCaptureOutput *output1 in session.outputs) {
            [session removeOutput:output1];
        }
        [session stopRunning];
        session = nil;
        outputSettings = nil;
        device = nil;
        input = nil;
        captureVideoPreviewLayer = nil;
        stillImageOutput = nil;
        self.vImagePreview = nil;
    }

I call this method before popping the view controller or pushing any other view. It solved my problem.
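For context, here is a minimal sketch of what that call site could look like, assuming the capture view controller owns the session objects and exposes the deallocSession method above; the backButtonTapped: action is hypothetical and only for illustration:

    // Hypothetical back-button action on the capture view controller.
    // Tear the session down before popping so the next presentation
    // starts from a freshly configured AVCaptureSession.
    - (IBAction)backButtonTapped:(id)sender {
        [self deallocSession];
        [[self navigationController] popViewControllerAnimated:YES];
    }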

Removing the session's inputs and outputs seemed to solve the issue for me:

    [captureSession stopRunning];

    for (AVCaptureInput *input in captureSession.inputs) {
        [captureSession removeInput:input];
    }

    for (AVCaptureOutput *output in captureSession.outputs) {
        [captureSession removeOutput:output];
    }

Swift 2.2 version of TUNER88's answer:

    func stopRecording() {
        captureSession.stopRunning()
        for input in captureSession.inputs {
            captureSession.removeInput(input as! AVCaptureInput)
        }
        for output in captureSession.outputs {
            captureSession.removeOutput(output as! AVCaptureOutput)
        }
    }

It looks like you are not removing the previewLayer in your AVCapture view controller, and the layer keeps an internal reference to the capture session. Make sure you remove the previewLayer from that view hierarchy.
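In the question's code that could look roughly like the sketch below; it reuses the captureManager property from the question and is meant as an illustration rather than a drop-in fix:

    // Sketch: remove the preview layer (and stop the session) when the
    // view goes away, so the layer no longer holds a reference to the
    // capture session. Reuses the question's captureManager property.
    - (void)viewWillDisappear:(BOOL)animated {
        [super viewWillDisappear:animated];
        [[[self captureManager] previewLayer] removeFromSuperlayer];
        [[[self captureManager] captureSession] stopRunning];
    }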

Here is a Swift version of TUNER88's answer:

    session.stopRunning()

    for (var i = 0; i < session.inputs.count; i++) {
        session.removeInput(session.inputs[i] as! AVCaptureInput)
    }

    for (var i = 0; i < session.outputs.count; i++) {
        session.removeOutput(session.outputs[i] as! AVCaptureOutput)
    }

Swift 3

    let session = AVCaptureSession()

    if let outputMovie = outputMovie, outputMovie.isRecording {
        outputMovie.stopRecording()
    }

    self.session.stopRunning()

    if let inputs = self.session.inputs as? [AVCaptureDeviceInput] {
        for input in inputs {
            self.session.removeInput(input)
        }
    }

    if let outputs = self.session.outputs as? [AVCaptureOutput] {
        for output in outputs {
            self.session.removeOutput(output)
        }
    }