Mimicking AVLayerVideoGravityResizeAspectFill: crop and center video to match the preview without losing sharpness

Based on this SO post, the code below rotates, centers, and crops the video captured by the user.

The capture session uses AVCaptureSessionPresetHigh as its preset, and the preview layer uses AVLayerVideoGravityResizeAspectFill as its video gravity. This preview is extremely sharp.

The exported video, however, is not as sharp, apparently because scaling from the 1920x1080 resolution of the 5S's back camera down to 320x568 (the target size of the exported video) throws away pixels and introduces blur?

Assuming there is no way to scale from 1920x1080 to 320x568 without blurring, the question becomes: how do I mimic the sharpness of the preview layer?

Somehow Apple is using an algorithm to convert the 1920x1080 video into crisp 320x568 preview frames.

Is there a way to mimic this with AVAssetWriter or AVAssetExportSession?

func cropVideo() {
    // Set start time
    let startTime = NSDate().timeIntervalSince1970

    // Create main composition & its tracks
    let mainComposition = AVMutableComposition()
    let compositionVideoTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
    let compositionAudioTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))

    // Get source video & audio tracks
    let videoPath = getFilePath(curSlice!.getCaptureURL())
    let videoURL = NSURL(fileURLWithPath: videoPath)
    let videoAsset = AVURLAsset(URL: videoURL, options: nil)
    let sourceVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let sourceAudioTrack = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0]
    let videoSize = sourceVideoTrack.naturalSize

    // Get rounded time for video
    let roundedDur = floor(curSlice!.getDur() * 100) / 100
    let videoDur = CMTimeMakeWithSeconds(roundedDur, 100)

    // Add source tracks to composition
    do {
        try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoDur), ofTrack: sourceVideoTrack, atTime: kCMTimeZero)
        try compositionAudioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoDur), ofTrack: sourceAudioTrack, atTime: kCMTimeZero)
    } catch {
        print("Error with insertTimeRange while exporting video: \(error)")
    }

    // Create video composition
    // -- Set video frame
    let outputSize = view.bounds.size
    let videoComposition = AVMutableVideoComposition()
    print("Video composition duration: \(CMTimeGetSeconds(mainComposition.duration))")

    // -- Set parent layer
    let parentLayer = CALayer()
    parentLayer.frame = CGRectMake(0, 0, outputSize.width, outputSize.height)
    parentLayer.contentsGravity = kCAGravityResizeAspectFill

    // -- Set composition props
    videoComposition.renderSize = CGSize(width: outputSize.width, height: outputSize.height)
    videoComposition.frameDuration = CMTimeMake(1, Int32(frameRate))

    // -- Create video composition instruction
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoDur)

    // -- Use layer instruction to match video to output size, mimicking AVLayerVideoGravityResizeAspectFill
    let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionVideoTrack)
    let videoTransform = getResizeAspectFillTransform(videoSize, outputSize: outputSize)
    videoLayerInstruction.setTransform(videoTransform, atTime: kCMTimeZero)

    // -- Add layer instruction
    instruction.layerInstructions = [videoLayerInstruction]
    videoComposition.instructions = [instruction]

    // -- Create video layer
    let videoLayer = CALayer()
    videoLayer.frame = parentLayer.frame

    // -- Add sublayers to parent layer
    parentLayer.addSublayer(videoLayer)

    // -- Set animation tool
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

    // Create exporter
    let outputURL = getFilePath(getUniqueFilename(gMP4File))
    let exporter = AVAssetExportSession(asset: mainComposition, presetName: AVAssetExportPresetHighestQuality)!
    exporter.outputURL = NSURL(fileURLWithPath: outputURL)
    exporter.outputFileType = AVFileTypeMPEG4
    exporter.videoComposition = videoComposition
    exporter.shouldOptimizeForNetworkUse = true
    exporter.canPerformMultiplePassesOverSourceMediaData = true

    // Export to video
    exporter.exportAsynchronouslyWithCompletionHandler({
        // Log status
        let asset = AVAsset(URL: exporter.outputURL!)
        print("Exported slice video. Tracks: \(asset.tracks.count). Duration: \(CMTimeGetSeconds(asset.duration)). Size: \(exporter.estimatedOutputFileLength). Status: \(getExportStatus(exporter)). Output URL: \(exporter.outputURL!). Export time: \(NSDate().timeIntervalSince1970 - startTime).")

        // Tell delegate
        //delegate.didEndExport(exporter)
        self.curSlice!.setOutputURL(exporter.outputURL!.lastPathComponent!)
        gUser.save()
    })
}

// Returns transform, mimicking AVLayerVideoGravityResizeAspectFill, that converts video of <inputSize> to one of <outputSize>
private func getResizeAspectFillTransform(videoSize: CGSize, outputSize: CGSize) -> CGAffineTransform {
    // Compute ratios between video & output sizes
    let widthRatio = outputSize.width / videoSize.width
    let heightRatio = outputSize.height / videoSize.height

    // Set scale to larger of two ratios since goal is to fill output bounds
    let scale = widthRatio >= heightRatio ? widthRatio : heightRatio

    // Compute video size after scaling
    let newWidth = videoSize.width * scale
    let newHeight = videoSize.height * scale

    // Compute translation required to center image after scaling
    // -- Assumes CoreAnimationTool places video frame at (0, 0). Because scale transform is applied first, we must adjust
    //    each translation point by scale factor.
    let translateX = (outputSize.width - newWidth) / 2 / scale
    let translateY = (outputSize.height - newHeight) / 2 / scale

    // Set transform to resize video while retaining aspect ratio
    let resizeTransform = CGAffineTransformMakeScale(scale, scale)

    // Apply translation & create final transform
    let finalTransform = CGAffineTransformTranslate(resizeTransform, translateX, translateY)

    // Return final transform
    return finalTransform
}
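To make the aspect-fill math concrete, here is a small standalone check of the same arithmetic used in getResizeAspectFillTransform, with assumed sizes (a landscape 1920x1080 source and the 320x568 render size from the question); the numbers are illustrative only.

import CoreGraphics

// Worked example of the aspect-fill math, with assumed sizes (illustrative only).
let exampleVideoSize  = CGSize(width: 1920, height: 1080)   // assumed source track size
let exampleOutputSize = CGSize(width: 320, height: 568)     // render size used above

let widthRatio  = exampleOutputSize.width / exampleVideoSize.width     // ~0.167
let heightRatio = exampleOutputSize.height / exampleVideoSize.height   // ~0.526
let scale = max(widthRatio, heightRatio)                               // fill, so use the larger ratio

let scaledWidth  = exampleVideoSize.width * scale    // ~1010: overflows the 320-wide render box
let scaledHeight = exampleVideoSize.height * scale   // 568: exactly fills the height

// Translation is expressed in pre-scale units because the scale transform is applied first.
let translateX = (exampleOutputSize.width - scaledWidth) / 2 / scale    // ~-656: centers the excess width
let translateY = (exampleOutputSize.height - scaledHeight) / 2 / scale  // 0

print("scale: \(scale), tx: \(translateX), ty: \(translateY)")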

320x568 video captured with Tim's code:

[screenshot]

640x1136 video captured with Tim's code:

[screenshot]

Try this. Start a new Single View project in Swift, replace ViewController with this code and you should be good to go!

I've set up a previewLayer that is a different size from the output; change it at the top of the file.

I added some basic orientation support. The output ends up with slightly different sizes for landscape vs. portrait. You can specify whatever video size dimensions you like here and it should work fine.
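For example, to aim for the sharper 640x1136 capture shown in the screenshots above, one hypothetical tweak (not part of the original answer) is to derive the portrait capture size from the preview's size in points; the names below mirror the constants in the code further down, and the 320x568 preview size is an assumption:

import UIKit

// Hypothetical tweak: capture at the preview's pixel size instead of 720x1280.
let previewPointSize = CGSize(width: 320, height: 568)    // assumed preview size in points
let screenScale = UIScreen.mainScreen().scale             // 2.0 on an iPhone 5S
let CAPTURE_SIZE_PORTRAIT = CGSizeMake(previewPointSize.width * screenScale,
                                       previewPointSize.height * screenScale)   // 640 x 1136
let CAPTURE_SIZE_LANDSCAPE = CGSizeMake(CAPTURE_SIZE_PORTRAIT.height, CAPTURE_SIZE_PORTRAIT.width)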

Check out the videoSettings dictionary (around line 278) for the codec and size of the output file. You can also add other settings there to deal with keyFrameIntervals etc. to tune the output size.

I've added a recording image that shows when it's recording (tap to start, tap to stop); you'll need to add an asset called "recording" to Assets.xcassets (or comment out the line that loads it, around line 106).

That's pretty much all of it. Good luck!

Oh, and it dumps the video into the project's documents directory; you'll need to go to Window/Devices and download the container to look at the video easily. There's a TODO section where you can hook in and copy the file to the Photo Library (which makes testing easier); a sketch of that is included after the code below.

import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {

    let CAPTURE_SIZE_LANDSCAPE: CGSize = CGSizeMake(1280, 720)
    let CAPTURE_SIZE_PORTRAIT: CGSize = CGSizeMake(720, 1280)

    var recordingImage : UIImageView = UIImageView()

    var previewLayer : AVCaptureVideoPreviewLayer?

    var audioQueue : dispatch_queue_t?
    var videoQueue : dispatch_queue_t?

    let captureSession = AVCaptureSession()
    var assetWriter : AVAssetWriter?
    var assetWriterInputCamera : AVAssetWriterInput?
    var assetWriterInputAudio : AVAssetWriterInput?
    var outputConnection: AVCaptureConnection?

    var captureDeviceBack : AVCaptureDevice?
    var captureDeviceFront : AVCaptureDevice?
    var captureDeviceMic : AVCaptureDevice?

    var sessionSetupDone: Bool = false
    var isRecordingStarted = false
    //var recordingStartedTime = kCMTimeZero

    var videoOutputURL : NSURL?

    var captureSize: CGSize = CGSizeMake(1280, 720)
    var previewFrame: CGRect = CGRectMake(0, 0, 180, 360)

    var captureDeviceTrigger = true
    var captureDevice: AVCaptureDevice? {
        get {
            return captureDeviceTrigger ? captureDeviceFront : captureDeviceBack
        }
    }

    override func supportedInterfaceOrientations() -> UIInterfaceOrientationMask {
        return UIInterfaceOrientationMask.AllButUpsideDown
    }

    override func shouldAutorotate() -> Bool {
        if isRecordingStarted {
            return false
        }

        if UIDevice.currentDevice().orientation == UIDeviceOrientation.PortraitUpsideDown {
            return false
        }

        if let cameraPreview = self.previewLayer {
            if let connection = cameraPreview.connection {
                if connection.supportsVideoOrientation {
                    switch UIDevice.currentDevice().orientation {
                    case .LandscapeLeft: connection.videoOrientation = .LandscapeRight
                    case .LandscapeRight: connection.videoOrientation = .LandscapeLeft
                    case .Portrait: connection.videoOrientation = .Portrait
                    case .FaceUp: return false
                    case .FaceDown: return false
                    default: break
                    }
                }
            }
        }

        return true
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        setupViewControls()

        //self.recordingStartedTime = kCMTimeZero

        // Setup capture session related logic
        videoQueue = dispatch_queue_create("video_write_queue", DISPATCH_QUEUE_SERIAL)
        audioQueue = dispatch_queue_create("audio_write_queue", DISPATCH_QUEUE_SERIAL)

        setupCaptureDevices()
        pre_start()
    }

    //MARK: UI methods
    func setupViewControls() {
        // TODO: I have an image (red circle) in an Assets.xcassets. Replace the following with your own image
        recordingImage.frame = CGRect(x: 0, y: 0, width: 50, height: 50)
        recordingImage.image = UIImage(named: "recording")
        recordingImage.hidden = true
        self.view.addSubview(recordingImage)

        // Setup tap to record and stop
        let tapGesture = UITapGestureRecognizer(target: self, action: "didGetTapped:")
        tapGesture.numberOfTapsRequired = 1
        self.view.addGestureRecognizer(tapGesture)
    }

    func didGetTapped(selector: UITapGestureRecognizer) {
        if self.isRecordingStarted {
            self.view.gestureRecognizers![0].enabled = false
            recordingImage.hidden = true

            self.stopRecording()
        } else {
            recordingImage.hidden = false
            self.startRecording()
        }

        self.isRecordingStarted = !self.isRecordingStarted
    }

    func switchCamera(selector: UIButton) {
        self.captureDeviceTrigger = !self.captureDeviceTrigger

        pre_start()
    }

    //MARK: Video logic
    func setupCaptureDevices() {
        let devices = AVCaptureDevice.devices()

        for device in devices {
            if device.hasMediaType(AVMediaTypeVideo) {
                if device.position == AVCaptureDevicePosition.Front {
                    captureDeviceFront = device as? AVCaptureDevice
                    NSLog("Video Controller: Setup. Front camera is found")
                }

                if device.position == AVCaptureDevicePosition.Back {
                    captureDeviceBack = device as? AVCaptureDevice
                    NSLog("Video Controller: Setup. Back camera is found")
                }
            }

            if device.hasMediaType(AVMediaTypeAudio) {
                captureDeviceMic = device as? AVCaptureDevice
                NSLog("Video Controller: Setup. Audio device is found")
            }
        }
    }

    func alertPermission() {
        let permissionAlert = UIAlertController(title: "No Permission", message: "Please allow access to Camera and Microphone", preferredStyle: UIAlertControllerStyle.Alert)
        permissionAlert.addAction(UIAlertAction(title: "Go to settings", style: .Default, handler: { (action: UIAlertAction!) in
            print("Video Controller: Permission for camera/mic denied. Going to settings")
            UIApplication.sharedApplication().openURL(NSURL(string: UIApplicationOpenSettingsURLString)!)
            print(UIApplicationOpenSettingsURLString)
        }))

        presentViewController(permissionAlert, animated: true, completion: nil)
    }

    func pre_start() {
        NSLog("Video Controller: pre_start")
        let videoPermission = AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeVideo)
        let audioPermission = AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeAudio)
        if (videoPermission == AVAuthorizationStatus.Denied) || (audioPermission == AVAuthorizationStatus.Denied) {
            self.alertPermission()
            pre_start()
            return
        }

        if (videoPermission == AVAuthorizationStatus.Authorized) {
            self.start()
            return
        }

        AVCaptureDevice.requestAccessForMediaType(AVMediaTypeVideo, completionHandler: { (granted :Bool) -> Void in
            self.pre_start()
        })
    }

    func start() {
        NSLog("Video Controller: start")
        if captureSession.running {
            captureSession.beginConfiguration()

            if let currentInput = captureSession.inputs[0] as? AVCaptureInput {
                captureSession.removeInput(currentInput)
            }

            do {
                try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
            } catch {
                print("Video Controller: begin session. Error adding video input device")
            }

            captureSession.commitConfiguration()
            return
        }

        do {
            try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
            try captureSession.addInput(AVCaptureDeviceInput(device: captureDeviceMic))
        } catch {
            print("Video Controller: start. error adding device: \(error)")
        }

        if let layer = AVCaptureVideoPreviewLayer(session: captureSession) {
            self.previewLayer = layer
            layer.videoGravity = AVLayerVideoGravityResizeAspect

            if let layerConnection = layer.connection {
                if UIDevice.currentDevice().orientation == .LandscapeRight {
                    layerConnection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
                } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
                    layerConnection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
                } else if UIDevice.currentDevice().orientation == .Portrait {
                    layerConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
                }
            }

            // TODO: Set the output size of the Preview Layer here
            layer.frame = previewFrame
            self.view.layer.insertSublayer(layer, atIndex: 0)
        }

        let bufferVideoQueue = dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL)
        let videoOutput = AVCaptureVideoDataOutput()
        videoOutput.setSampleBufferDelegate(self, queue: bufferVideoQueue)
        captureSession.addOutput(videoOutput)
        if let connection = videoOutput.connectionWithMediaType(AVMediaTypeVideo) {
            self.outputConnection = connection
        }

        let bufferAudioQueue = dispatch_queue_create("audio buffer delegate", DISPATCH_QUEUE_SERIAL)
        let audioOutput = AVCaptureAudioDataOutput()
        audioOutput.setSampleBufferDelegate(self, queue: bufferAudioQueue)
        captureSession.addOutput(audioOutput)

        captureSession.startRunning()
    }

    func getAssetWriter() -> AVAssetWriter? {
        NSLog("Video Controller: getAssetWriter")
        let fileManager = NSFileManager.defaultManager()
        let urls = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
        guard let documentDirectory: NSURL = urls.first else {
            print("Video Controller: getAssetWriter: documentDir Error")
            return nil
        }

        let local_video_name = NSUUID().UUIDString + ".mp4"
        self.videoOutputURL = documentDirectory.URLByAppendingPathComponent(local_video_name)

        guard let url = self.videoOutputURL else {
            return nil
        }

        self.assetWriter = try? AVAssetWriter(URL: url, fileType: AVFileTypeMPEG4)
        guard let writer = self.assetWriter else {
            return nil
        }

        let videoSettings: [String : AnyObject] = [
            AVVideoCodecKey  : AVVideoCodecH264,
            AVVideoWidthKey  : captureSize.width,
            AVVideoHeightKey : captureSize.height,
        ]

        assetWriterInputCamera = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
        assetWriterInputCamera?.expectsMediaDataInRealTime = true
        writer.addInput(assetWriterInputCamera!)

        let audioSettings : [String : AnyObject] = [
            AVFormatIDKey : NSInteger(kAudioFormatMPEG4AAC),
            AVNumberOfChannelsKey : 2,
            AVSampleRateKey : NSNumber(double: 44100.0)
        ]

        assetWriterInputAudio = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
        assetWriterInputAudio?.expectsMediaDataInRealTime = true
        writer.addInput(assetWriterInputAudio!)

        return writer
    }

    func configurePreset() {
        NSLog("Video Controller: configurePreset")
        if captureSession.canSetSessionPreset(AVCaptureSessionPreset1280x720) {
            captureSession.sessionPreset = AVCaptureSessionPreset1280x720
        } else {
            captureSession.sessionPreset = AVCaptureSessionPreset1920x1080
        }
    }

    func startRecording() {
        NSLog("Video Controller: Start recording")

        captureSize = UIDeviceOrientationIsLandscape(UIDevice.currentDevice().orientation) ? CAPTURE_SIZE_LANDSCAPE : CAPTURE_SIZE_PORTRAIT

        if let connection = self.outputConnection {
            if connection.supportsVideoOrientation {
                if UIDevice.currentDevice().orientation == .LandscapeRight {
                    connection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
                    NSLog("orientation: right")
                } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
                    connection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
                    NSLog("orientation: left")
                } else {
                    connection.videoOrientation = AVCaptureVideoOrientation.Portrait
                    NSLog("orientation: portrait")
                }
            }
        }

        if let writer = getAssetWriter() {
            self.assetWriter = writer

            let recordingClock = self.captureSession.masterClock
            writer.startWriting()
            writer.startSessionAtSourceTime(CMClockGetTime(recordingClock))
        }
    }

    func stopRecording() {
        NSLog("Video Controller: Stop recording")

        if let writer = self.assetWriter {
            writer.finishWritingWithCompletionHandler { Void in
                print("Recording finished")
                // TODO: Handle the video file, copy it from the temp directory etc.
            }
        }
    }

    //MARK: Implementation for AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        if !self.isRecordingStarted {
            return
        }

        if let audio = self.assetWriterInputAudio where connection.audioChannels.count > 0 && audio.readyForMoreMediaData {
            dispatch_async(audioQueue!) {
                audio.appendSampleBuffer(sampleBuffer)
            }
            return
        }

        if let camera = self.assetWriterInputCamera where camera.readyForMoreMediaData {
            dispatch_async(videoQueue!) {
                camera.appendSampleBuffer(sampleBuffer)
            }
        }
    }
}
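For the TODO in stopRecording (and the note above about downloading the container), one possible way to copy the finished file into the Photo Library for easier testing is sketched below. This is my own assumption using the Photos framework, not part of the original answer, and it presumes photo-library permission has already been granted.

import Photos

// Hypothetical helper for the TODO in stopRecording: save the written file to the
// Photo Library so it can be checked without downloading the app container.
func copyToPhotoLibrary(fileURL: NSURL) {
    PHPhotoLibrary.sharedPhotoLibrary().performChanges({
        PHAssetChangeRequest.creationRequestForAssetFromVideoAtFileURL(fileURL)
    }, completionHandler: { success, error in
        print("Copied to Photo Library: \(success), error: \(error)")
    })
}

// e.g. from inside the finishWritingWithCompletionHandler block (if added as a method
// on ViewController): if let url = self.videoOutputURL { copyToPhotoLibrary(url) }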

Additional edit info

Following our further conversation in the comments, it's clear that what you want is to reduce the physical size (file size) of the output video while keeping its dimensions as high as possible (to retain quality). Remember, the size you use to position a layer on screen is in POINTS, not PIXELS. You're writing an output file in pixels, so it's not a 1:1 comparison with the iPhone's screen reference units.
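As a quick illustration of that point/pixel distinction (assuming an iPhone 5S, whose screen scale is 2.0), a layer laid out using the screen bounds in points actually covers twice as many pixels in each dimension, which is why a 320x568-pixel export looks softer than the preview drawn into the same layer:

import UIKit

// Illustration only: points vs. pixels on a 2x device such as the iPhone 5S.
let pointSize = CGSize(width: 320, height: 568)           // e.g. view.bounds.size, in points
let displayScale = UIScreen.mainScreen().scale            // 2.0 on a 5S
let pixelSize = CGSize(width: pointSize.width * displayScale,
                       height: pointSize.height * displayScale)   // 640 x 1136 actual pixels
print("A layer of \(pointSize) points is backed by \(pixelSize) pixels")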

To reduce the output file size, you have two easy options:

  1. Reduce the resolution, but if you go too small you'll lose quality on playback, especially if it gets scaled back up when played. Try 640x360 or 720x480 for the output pixels.
  2. Adjust the compression settings. The iPhone has default settings that typically produce higher-quality (larger output file size) video.

Replace your video settings with these options and see how you get on:

  let videoSettings: [String : AnyObject] = [
      AVVideoCodecKey  : AVVideoCodecH264,
      AVVideoWidthKey  : captureSize.width,
      AVVideoHeightKey : captureSize.height,
      AVVideoCompressionPropertiesKey : [
          AVVideoAverageBitRateKey : 2000000,
          AVVideoProfileLevelKey : AVVideoProfileLevelH264Main41,
          AVVideoMaxKeyFrameIntervalKey : 90,
      ]
  ]

The compression properties (AVVideoCompressionPropertiesKey) tell AVFoundation how to actually compress the video. The lower the bit rate, the higher the compression (so it streams better and uses less disk space, but the quality is lower). MaxKeyFrameInterval is how often a full keyframe is written out; setting it higher (with our roughly 30 frames-per-second video, 90 works out to about one every 3 seconds) also lowers quality but reduces size. You'll find the constants referenced here: https://developer.apple.com/library/prerelease/ios/documentation/AVFoundation/Reference/AVFoundation_Constants/index.html#//apple_ref/doc/constant_group/Video_Settings
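To get a rough feel for what the average bit rate means for file size, here is a back-of-the-envelope estimate (video data only, ignoring audio and container overhead; the clip length is an assumed example):

// Illustrative estimate only: bytes are roughly bitRate (bits/s) * duration (s) / 8.
let averageBitRate = 2_000_000.0   // bits per second, matching the settings above
let clipDuration   = 30.0          // assumed clip length in seconds
let estimatedBytes = averageBitRate * clipDuration / 8.0
print("~\(estimatedBytes / 1_048_576) MB of video data")   // about 7.2 MB for 30 seconds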