Recording a square video with AVFoundation and adding a watermark

An illustration of what I'm trying to do

I'm trying to do the following:

  • Play music
  • Record a square video (I have a container in the view that shows what you're recording)
  • Add a label at the top, and the app's icon and name in the bottom-left corner of the square video.

So far I've managed to play the music, show the AVCaptureVideoPreviewLayer in a square container, and save the video to the camera roll.

The problem is that I can barely find even a few vague tutorials on using AVFoundation, and this being my first app makes things rather difficult.

I managed to get those things working, but I still don't understand how AVFoundation actually works. The documentation is vague for a beginner, I haven't found a tutorial for exactly what I want, and piecing together multiple tutorials (written in Obj-C) has made this impossible. My problems are the following:

  1. The video doesn't save as a square. (Worth mentioning: the app doesn't support landscape orientation.)
  2. The video has no audio. (I think I should add some kind of audio input besides the video one.)
  3. How do I add a watermark to the video?
  4. I have a bug: I created a view (messageView; see code) with text and an image to let the user know the video was saved to the camera roll. But if I start recording a second time, the view appears while the video is recording, not after it. I suspect it's related to every video being saved under the same name.

So this is what I do to prepare:

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    // Preset For High Quality
    captureSession.sessionPreset = AVCaptureSessionPresetHigh

    // Get available devices capable of recording video
    let devices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo) as! [AVCaptureDevice]

    // Get back camera
    for device in devices {
        if device.position == AVCaptureDevicePosition.Back {
            currentDevice = device
        }
    }

    // Set Input
    let captureDeviceInput: AVCaptureDeviceInput
    do {
        captureDeviceInput = try AVCaptureDeviceInput(device: currentDevice)
    } catch {
        print(error)
        return
    }

    // Set Output
    videoFileOutput = AVCaptureMovieFileOutput()

    // Configure Session w/ Input & Output Devices
    captureSession.addInput(captureDeviceInput)
    captureSession.addOutput(videoFileOutput)

    // Show Camera Preview
    cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    view.layer.addSublayer(cameraPreviewLayer!)
    cameraPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    let width = view.bounds.width * 0.85
    cameraPreviewLayer?.frame = CGRectMake(0, 0, width, width)

    // Bring Record Button To Front
    view.bringSubviewToFront(recordButton)
    captureSession.startRunning()

    // // Bring Message To Front
    // view.bringSubviewToFront(messageView)
    // view.bringSubviewToFront(messageText)
    // view.bringSubviewToFront(messageImage)
}
```

Then when I press the record button:

```swift
@IBAction func capture(sender: AnyObject) {
    if !isRecording {
        isRecording = true

        UIView.animateWithDuration(0.5, delay: 0.0, options: [.Repeat, .Autoreverse, .AllowUserInteraction], animations: { () -> Void in
            self.recordButton.transform = CGAffineTransformMakeScale(0.5, 0.5)
        }, completion: nil)

        let outputPath = NSTemporaryDirectory() + "output.mov"
        let outputFileURL = NSURL(fileURLWithPath: outputPath)
        videoFileOutput?.startRecordingToOutputFileURL(outputFileURL, recordingDelegate: self)
    } else {
        isRecording = false

        UIView.animateWithDuration(0.5, delay: 0, options: [], animations: { () -> Void in
            self.recordButton.transform = CGAffineTransformMakeScale(1.0, 1.0)
        }, completion: nil)
        recordButton.layer.removeAllAnimations()

        videoFileOutput?.stopRecording()
    }
}
```

And after recording:

```swift
func captureOutput(captureOutput: AVCaptureFileOutput!, didFinishRecordingToOutputFileAtURL outputFileURL: NSURL!, fromConnections connections: [AnyObject]!, error: NSError!) {
    let outputPath = NSTemporaryDirectory() + "output.mov"
    if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(outputPath) {
        UISaveVideoAtPathToSavedPhotosAlbum(outputPath, self, nil, nil)

        // Show Success Message
        UIView.animateWithDuration(0.4, delay: 0, options: [], animations: {
            self.messageView.alpha = 0.8
        }, completion: nil)
        UIView.animateWithDuration(0.4, delay: 0, options: [], animations: {
            self.messageText.alpha = 1.0
        }, completion: nil)
        UIView.animateWithDuration(0.4, delay: 0, options: [], animations: {
            self.messageImage.alpha = 1.0
        }, completion: nil)

        // Hide Message
        UIView.animateWithDuration(0.4, delay: 1, options: [], animations: {
            self.messageView.alpha = 0
        }, completion: nil)
        UIView.animateWithDuration(0.4, delay: 1, options: [], animations: {
            self.messageText.alpha = 0
        }, completion: nil)
        UIView.animateWithDuration(0.4, delay: 1, options: [], animations: {
            self.messageImage.alpha = 0
        }, completion: nil)
    }
}
```

So what do I need to do to fix these? I keep searching and looking through tutorials but I can't figure it out… I read about adding watermarks, and I saw it has something to do with adding a CALayer on top of the video. But obviously I can't do that yet, since I don't even know how to make the video square or add audio.

A few things:

As far as the audio goes, you're adding a video (camera) input but no audio input. So do that to get sound:

```swift
let audioInputDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)

do {
    let input = try AVCaptureDeviceInput(device: audioInputDevice)
    if sourceAVFoundation.captureSession.canAddInput(input) {
        sourceAVFoundation.captureSession.addInput(input)
    } else {
        NSLog("ERROR: Can't add audio input")
    }
} catch let error {
    NSLog("ERROR: Getting input device: \(error)")
}
```

To make the video square, you'll have to look at AVAssetWriter instead of AVCaptureFileOutput. It's more complex, but you get more "power". You've already created an AVCaptureSession, which is great; to hook the AssetWriter up to it, you'll need to do something like this:

```swift
let fileManager = NSFileManager.defaultManager()
let urls = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
guard let documentDirectory: NSURL = urls.first else {
    print("Video Controller: getAssetWriter: documentDir Error")
    return nil
}

let local_video_name = NSUUID().UUIDString + ".mp4"
self.videoOutputURL = documentDirectory.URLByAppendingPathComponent(local_video_name)

guard let url = self.videoOutputURL else {
    return nil
}

self.assetWriter = try? AVAssetWriter(URL: url, fileType: AVFileTypeMPEG4)
guard let writer = self.assetWriter else {
    return nil
}

//TODO: Set your desired video size here!
let videoSettings: [String : AnyObject] = [
    AVVideoCodecKey  : AVVideoCodecH264,
    AVVideoWidthKey  : captureSize.width,
    AVVideoHeightKey : captureSize.height,
    AVVideoCompressionPropertiesKey : [
        AVVideoAverageBitRateKey : 200000,
        AVVideoProfileLevelKey : AVVideoProfileLevelH264Baseline41,
        AVVideoMaxKeyFrameIntervalKey : 90,
    ],
]

assetWriterInputCamera = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
assetWriterInputCamera?.expectsMediaDataInRealTime = true
writer.addInput(assetWriterInputCamera!)

let audioSettings: [String : AnyObject] = [
    AVFormatIDKey : NSInteger(kAudioFormatMPEG4AAC),
    AVNumberOfChannelsKey : 2,
    AVSampleRateKey : NSNumber(double: 44100.0)
]

assetWriterInputAudio = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
assetWriterInputAudio?.expectsMediaDataInRealTime = true
writer.addInput(assetWriterInputAudio!)
```

Once you have the AssetWriter set up… then hook up some outputs for the video and audio:

```swift
let bufferAudioQueue = dispatch_queue_create("audio buffer delegate", DISPATCH_QUEUE_SERIAL)

let audioOutput = AVCaptureAudioDataOutput()
audioOutput.setSampleBufferDelegate(self, queue: bufferAudioQueue)
captureSession.addOutput(audioOutput)

// Always add video last...
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: bufferVideoQueue)
captureSession.addOutput(videoOutput)

if let connection = videoOutput.connectionWithMediaType(AVMediaTypeVideo) {
    if connection.supportsVideoOrientation {
        // Force recording to portrait
        connection.videoOrientation = AVCaptureVideoOrientation.Portrait
    }
    self.outputConnection = connection
}

captureSession.startRunning()
```

Finally, you need to capture the buffers and process that stuff… make sure your class is a delegate for both AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate:

```swift
//MARK: Implementation for AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    if !self.isRecordingStarted {
        return
    }

    if let audio = self.assetWriterInputAudio where connection.audioChannels.count > 0 && audio.readyForMoreMediaData {
        dispatch_async(audioQueue!) {
            audio.appendSampleBuffer(sampleBuffer)
        }
        return
    }

    if let camera = self.assetWriterInputCamera where camera.readyForMoreMediaData {
        dispatch_async(videoQueue!) {
            camera.appendSampleBuffer(sampleBuffer)
        }
    }
}
```

There are a couple of missing pieces, but hopefully this, together with the documentation, is enough to get you going.
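One of those missing pieces is actually starting and finishing the writer session around the recording; buffers appended before `startSessionAtSourceTime` is called are dropped. Here's a minimal sketch using the same Swift 2-era API, assuming the `assetWriter`, `assetWriterInputCamera`, `assetWriterInputAudio`, and `isRecordingStarted` names from above (the `startWriting`/`stopWriting` function names are mine):

```swift
// Call once, with the presentation timestamp of the first sample buffer
// (CMSampleBufferGetPresentationTimeStamp(sampleBuffer)), before appending anything.
func startWriting(firstBufferTime: CMTime) {
    guard let writer = self.assetWriter else { return }
    writer.startWriting()
    // The session's source time must match the first buffer's timestamp,
    // otherwise the output starts with a long blank gap.
    writer.startSessionAtSourceTime(firstBufferTime)
    self.isRecordingStarted = true
}

// Call when the user taps stop.
func stopWriting(completion: () -> Void) {
    self.isRecordingStarted = false
    self.assetWriterInputCamera?.markAsFinished()
    self.assetWriterInputAudio?.markAsFinished()
    self.assetWriter?.finishWritingWithCompletionHandler {
        // The file at videoOutputURL is now complete and can be
        // saved to the camera roll.
        completion()
    }
}
```

Since `videoOutputURL` is generated with `NSUUID` above, each recording also gets a unique file name, which sidesteps the "same name" issue from the question.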

Finally, for the watermark: there are a number of ways to do it in real time, but one possible approach is to modify the sampleBuffer and write the watermark into the image there. You'll find other questions on StackOverflow dealing with that.
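Alternatively, the CALayer approach the question mentions can be done as a post-processing export step rather than in real time, using AVVideoCompositionCoreAnimationTool. A sketch in the same Swift 2-era API, assuming a finished recording at `videoURL` and a destination `exportURL` (both hypothetical names, as is the "MyApp" label):

```swift
func exportWithWatermark(videoURL: NSURL, exportURL: NSURL, completion: () -> Void) {
    let asset = AVURLAsset(URL: videoURL)
    let videoTrack = asset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let size = videoTrack.naturalSize

    // The video layer is rendered inside a parent layer,
    // and the overlays are added on top of it.
    let videoLayer = CALayer()
    videoLayer.frame = CGRect(origin: CGPointZero, size: size)
    let parentLayer = CALayer()
    parentLayer.frame = videoLayer.frame
    parentLayer.addSublayer(videoLayer)

    // Watermark: app name in the bottom-left corner
    // (Core Animation coordinates have the origin at the bottom-left).
    let textLayer = CATextLayer()
    textLayer.string = "MyApp"
    textLayer.fontSize = 24
    textLayer.foregroundColor = UIColor.whiteColor().CGColor
    textLayer.frame = CGRectMake(10, 10, 200, 30)
    parentLayer.addSublayer(textLayer)

    let composition = AVMutableVideoComposition(propertiesOfAsset: asset)
    composition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

    guard let session = AVAssetExportSession(asset: asset,
        presetName: AVAssetExportPresetHighestQuality) else { return }
    session.videoComposition = composition
    session.outputURL = exportURL
    session.outputFileType = AVFileTypeMPEG4
    session.exportAsynchronouslyWithCompletionHandler {
        completion()
    }
}
```

You'd then save the file at `exportURL` (rather than the original recording) to the camera roll. An image watermark works the same way with a CALayer whose `contents` is a CGImage.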