Exporting a mirrored video with AVMutableComposition causes resize issues
If I turn mirroring off on the front camera, everything works as expected. But if I turn it on, the exported video ends up with serious resize issues:
This is how I currently manage video mirroring:
if currentDevice == frontCamera {
    if let connection = output.connections.first {
        if connection.isVideoMirroringSupported {
            connection.automaticallyAdjustsVideoMirroring = false
            connection.isVideoMirrored = true // if true, this bug occurs.
        }
    }
} else {
    // Disabling mirroring on the back camera
    if let connection = output.connections.first {
        if connection.isVideoMirroringSupported {
            connection.automaticallyAdjustsVideoMirroring = false
            connection.isVideoMirrored = false
        }
    }
}
And this is how I export the video:
/// Create AVMutableComposition object. This object will hold the AVMutableCompositionTrack instances.
let mainMutableComposition = AVMutableComposition()

/// Creating an empty video track
let videoTrack = mainMutableComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)
let videoAssetTrack = videoAsset.tracks(withMediaType: AVMediaType.video)[0]

do {
    // Adding the video track
    try videoTrack?.insertTimeRange(CMTimeRange(start: kCMTimeZero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: AVMediaType.video).first!, at: kCMTimeZero)
} catch {
    completion(false, nil)
}

/// Adding audio if the user wants to.
if withAudio {
    do {
        // Adding the audio track
        let audio = videoAsset.tracks(withMediaType: AVMediaType.audio).first
        if audio != nil {
            let audioTrack = mainMutableComposition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid)
            try audioTrack?.insertTimeRange(CMTimeRange(start: kCMTimeZero, duration: videoAsset.duration), of: audio!, at: kCMTimeZero)
        }
    } catch {
        completion(false, nil)
    }
}

// MARK: - Composition is ready ----------

// Create AVMutableVideoCompositionInstruction
let compositionInstructions = AVMutableVideoCompositionInstruction()
compositionInstructions.timeRange = CMTimeRange(start: kCMTimeZero, duration: videoAsset.duration)

// Create an AVMutableVideoCompositionLayerInstruction
let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack!)
videoLayerInstruction.setTransform(videoAssetTrack.preferredTransform, at: kCMTimeZero)
compositionInstructions.layerInstructions = [videoLayerInstruction]

// Add instructions
let videoComposition = AVMutableVideoComposition()
let naturalSize: CGSize = videoAssetTrack.naturalSize

/// Rendering image into video
let renderWidth = naturalSize.width
let renderHeight = naturalSize.height

// Assigning instructions and rendering size
videoComposition.renderSize = CGSize(width: renderWidth, height: renderHeight)
videoComposition.instructions = [compositionInstructions]
videoComposition.frameDuration = CMTime(value: 1, timescale: Int32((videoTrack?.nominalFrameRate)!))

// Applying image to instruction
self.applyVideoImage(to: videoComposition, withSize: naturalSize, image: image)

// Getting the output path
let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first
let outputPath = documentsURL?.appendingPathComponent("lastEditedVideo.mp4")
if FileManager.default.fileExists(atPath: (outputPath?.path)!) {
    do {
        try FileManager.default.removeItem(atPath: (outputPath?.path)!)
    } catch {
        completion(false, nil)
    }
}

// Create exporter
let exporter = NextLevelSessionExporter(withAsset: mainMutableComposition)
exporter.outputURL = outputPath
exporter.outputFileType = AVFileType.mp4
exporter.videoComposition = videoComposition

let compressionDict: [String: Any] = [
    AVVideoAverageBitRateKey: NSNumber(integerLiteral: 2300000),
    AVVideoProfileLevelKey: AVVideoProfileLevelH264BaselineAutoLevel as String
]
exporter.videoOutputConfiguration = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: NSNumber(integerLiteral: Int(naturalSize.width)),
    AVVideoHeightKey: NSNumber(integerLiteral: Int(naturalSize.height)),
    AVVideoCompressionPropertiesKey: compressionDict
]
exporter.audioOutputConfiguration = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVEncoderBitRateKey: NSNumber(integerLiteral: 128000),
    AVNumberOfChannelsKey: NSNumber(integerLiteral: 2),
    AVSampleRateKey: NSNumber(value: Float(44100))
]

completion(true, exporter)
}
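applyVideoImage(to:withSize:image:) is my own helper and its body isn't shown above; as a rough sketch, an overlay like this is typically done with AVVideoCompositionCoreAnimationTool (this implementation is an assumption, not the exact code used):

import AVFoundation
import UIKit

// Hypothetical reconstruction of the helper: overlays a still image on top of
// the rendered video frames via Core Animation post-processing.
func applyVideoImage(to videoComposition: AVMutableVideoComposition, withSize size: CGSize, image: UIImage) {
    let parentLayer = CALayer()
    let videoLayer = CALayer()
    parentLayer.frame = CGRect(origin: .zero, size: size)
    videoLayer.frame = CGRect(origin: .zero, size: size)

    // The overlay covers the full render size; shrink the frame for a watermark-style overlay.
    let imageLayer = CALayer()
    imageLayer.contents = image.cgImage
    imageLayer.frame = CGRect(origin: .zero, size: size)

    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(imageLayer)

    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer,
                                                                         in: parentLayer)
}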
I'm exporting the video with NextLevelSessionExporter, but it makes no difference whether I use the default exporter instead; the resize issue still occurs.
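For reference, the "default exporter" path looks like this; a minimal sketch with the stock AVAssetExportSession, reusing the mainMutableComposition, videoComposition and outputPath built above (the preset choice is an assumption):

import AVFoundation

// Sketch: export the same composition with AVAssetExportSession instead of NextLevelSessionExporter.
func exportWithDefaultExporter(composition: AVMutableComposition,
                               videoComposition: AVMutableVideoComposition,
                               outputURL: URL,
                               completion: @escaping (Bool) -> Void) {
    guard let session = AVAssetExportSession(asset: composition,
                                             presetName: AVAssetExportPresetHighestQuality) else {
        completion(false)
        return
    }
    session.outputURL = outputURL
    session.outputFileType = .mp4
    session.videoComposition = videoComposition
    session.exportAsynchronously {
        completion(session.status == .completed)
    }
}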
There is an active bug that prevents you from exporting a mirrored video correctly. You need a workaround:
- Turn off mirroring on the movieOutputFile
- Manually flip the video horizontally when needed:
if needsMirroring == true {
    var transform: CGAffineTransform = CGAffineTransform(scaleX: -1.0, y: 1.0)
    transform = transform.translatedBy(x: -naturalSize.width, y: 0.0)
    transform = transform.rotated(by: CGFloat(Double.pi/2))
    transform = transform.translatedBy(x: 0.0, y: -naturalSize.width)
    videoTransform = transform
}
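The resulting videoTransform then replaces the asset track's preferredTransform when building the layer instruction in the export code above. A short sketch of that hookup, reusing the videoTrack, videoAssetTrack and compositionInstructions variables from the question (where the needsMirroring flag is stored is up to you):

// Use the manual flip for front-camera clips, the recorded transform otherwise.
let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack!)
if needsMirroring {
    videoLayerInstruction.setTransform(videoTransform, at: kCMTimeZero)
} else {
    videoLayerInstruction.setTransform(videoAssetTrack.preferredTransform, at: kCMTimeZero)
}
compositionInstructions.layerInstructions = [videoLayerInstruction]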
It took me a few days to figure this out, so I hope it helps.