How do I mute the capture sound in AVFoundation?

I want to capture an image using AVFoundation without any sound. (Yes, I am keeping the guidelines in mind: this feature will only be enabled by the user's own selection.)

Two questions on Stack Overflow give the most information:

AVFoundation, how to turn off the shutter sound when captureStillImageAsynchronouslyFromConnection?

Silencing the AVCapture shutter sound on iPhone

In the first one, no answer has been accepted or confirmed.

In the second, using AVCaptureVideoDataOutput is suggested.

Both answers refer to capturing video frames, which I think is the right approach. The problem is that the AVFoundation library is not easy to master, and I can't really get the hang of it. (Capturing an image with AVCaptureStillImageOutput was itself hard for me.) Can anyone point me to, or provide, a good source on capturing an image without sound?
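For context, here is roughly the standard AVCaptureStillImageOutput path I've been struggling with; it also plays the system shutter sound, which is exactly what I want to avoid. This is only a minimal sketch, assuming a self.session that is already configured with a device input:

    AVCaptureStillImageOutput *stillOutput = [[AVCaptureStillImageOutput alloc] init];
    [self.session addOutput:stillOutput];
    [stillOutput release]; // the session retains it (pre-ARC)

    AVCaptureConnection *connection =
        [stillOutput connectionWithMediaType:AVMediaTypeVideo];
    [stillOutput captureStillImageAsynchronouslyFromConnection:connection
                                             completionHandler:
        ^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
            if (imageDataSampleBuffer != NULL) {
                // By this point the system shutter sound has already played.
                NSData *jpegData = [AVCaptureStillImageOutput
                    jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                UIImage *image = [UIImage imageWithData:jpegData];
                // ... use the image
            }
        }];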

Thanks a lot.

I found code that does this here:

http://www.benjaminloulier.com/articles/ios4-and-direct-access-to-the-camera

An overview of the important parts:

Set up your session like this:

    - (void)initialize_and_Start_Session_without_CaptureSound
    {
        /* We set up the input */
        AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput
            deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]
                            error:nil];

        /* We set up the output */
        AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];

        /* While a frame is processed in the
           -captureOutput:didOutputSampleBuffer:fromConnection: delegate method,
           no other frames are added to the queue. If you don't want this
           behaviour, set the property to NO. */
        captureOutput.alwaysDiscardsLateVideoFrames = YES;

        /* We can specify a minimum duration for each frame (play with this
           setting to avoid having too many frames waiting in the queue, which
           can cause memory issues). It is the inverse of the maximum framerate:
           a min frame duration of 1/10 second gives a maximum framerate of
           10 fps, i.e. we say we cannot process more than 10 frames per second. */
        //captureOutput.minFrameDuration = CMTimeMake(1, 10);

        /* We create a serial queue to handle the processing of our frames */
        dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
        [captureOutput setSampleBufferDelegate:self queue:queue];
        dispatch_release(queue);

        /* Set the video output to store frames in BGRA (it is supposed to be faster) */
        NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
        NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
        NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
        [captureOutput setVideoSettings:videoSettings];

        /* And we create a capture session */
        self.session = [[AVCaptureSession alloc] init];

        /* We add input and output */
        [self.session addInput:captureInput];
        [self.session addOutput:captureOutput];

        /* We start the capture */
        [self.session startRunning];
    }
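For completeness, here are the declarations this code assumes. The article does not show them, so the class name here is hypothetical; the property and ivar names match those used in the methods. Note that the class must adopt AVCaptureVideoDataOutputSampleBufferDelegate for the delegate callback below to be delivered:

    // Assumed declarations (my sketch, not part of the original article).
    @interface CameraViewController : UIViewController
        <AVCaptureVideoDataOutputSampleBufferDelegate>
    {
        BOOL captureImageNow; // set to YES when the next frame should be kept
    }
    @property (nonatomic, retain) AVCaptureSession *session;
    @property (nonatomic, retain) UIImage *captureImage;
    @end

You could then call [self initialize_and_Start_Session_without_CaptureSound] from viewDidLoad, for example.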

You get the camera output in the following method. I create an image from it and add it to my parent view; you can change this to suit your needs:

    #pragma mark AVCaptureSession delegate
    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        /* We create an autorelease pool because we are not on the main queue,
           so our code is not executed on the main thread. */
        if (captureImageNow) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

            /* Lock the image buffer */
            CVPixelBufferLockBaseAddress(imageBuffer, 0);

            /* Get information about the image */
            uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
            size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
            size_t width = CVPixelBufferGetWidth(imageBuffer);
            size_t height = CVPixelBufferGetHeight(imageBuffer);

            /* Create a CGImageRef from the CVImageBufferRef */
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8,
                bytesPerRow, colorSpace,
                kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
            CGImageRef newImage = CGBitmapContextCreateImage(newContext);

            /* We release some components */
            CGContextRelease(newContext);
            CGColorSpaceRelease(colorSpace);

            /* We display the result on the image view. We need to change the
               orientation of the image so that the video is displayed correctly.
               All the display work must be done on the main thread because UIKit
               is not thread safe, and since we are not on the main thread
               (remember, we didn't use the main queue) we use
               performSelectorOnMainThread. */
            self.captureImage = [UIImage imageWithCGImage:newImage
                                                    scale:1.0
                                              orientation:UIImageOrientationRight];

            /* We release the CGImageRef */
            CGImageRelease(newImage);

            [self performSelectorOnMainThread:@selector(AddImageToParentView)
                                   withObject:nil
                                waitUntilDone:YES];

            /* We unlock the image buffer */
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

            [pool drain];
            captureImageNow = NO;
        }
    }
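The AddImageToParentView method and the code that sets the captureImageNow flag aren't shown in the article. Here is one hypothetical way to fill them in, as a sketch; the button action and the view layout are my assumptions. Because taking a "photo" is just keeping the next video frame, no shutter sound is played:

    // Hypothetical trigger: mark the next delegate frame to be kept.
    - (IBAction)takePicture:(id)sender
    {
        captureImageNow = YES;
    }

    // Runs on the main thread (invoked via performSelectorOnMainThread above).
    - (void)AddImageToParentView
    {
        UIImageView *imageView = [[UIImageView alloc] initWithImage:self.captureImage];
        imageView.frame = self.view.bounds;
        [self.view addSubview:imageView];
        [imageView release]; // pre-ARC, matching the original sample
    }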