Fixing video orientation when stitching (merging) videos with AVMutableComposition

TL;DR – see the EDIT below.

I am building a test app in Swift in which I want to stitch together multiple videos from the app's documents directory using AVMutableComposition.

To some extent I have been successful: all of my videos are stitched together, and everything shows at the correct size, both portrait and landscape.

My problem, however, is that all of the videos are displayed in the orientation of the last video in the compilation.

I know that to fix this I need to add a layer instruction for each track I add, but I can't seem to get it right. With every answer I have found, the whole compilation comes out in portrait orientation, with landscape videos simply scaled to fit the portrait view. So when I turn my phone on its side to watch the landscape videos, they are still small, since they were scaled to portrait size.

This is not the result I am looking for. I want the expected behaviour: if a video is landscape, it shows scaled down while viewed in portrait mode, but when the phone is rotated, that landscape video fills the screen (just as it would when watching a landscape video in Photos). The same goes for portrait: a portrait video is full screen when viewed in portrait, and scales down when the phone is turned to landscape (as when viewing a portrait video in Photos).

In short, the outcome I want is that, when viewing a compilation containing both landscape and portrait videos, I can watch the whole thing with my phone on its side, with the landscape videos full screen and the portrait videos scaled down; or, viewing the same compilation in portrait, the portrait videos are full screen and the landscape videos are scaled to size.

With all the answers I found, this was not the case. They all had very unexpected behaviour when importing a video from Photos to add to the compilation, and the same random behaviour when adding videos shot with the front-facing camera (to be clear: with my current implementation, videos imported from the library and "selfie" videos appear at the correct size without these issues).

I am looking for a way to rotate/scale these videos so that they are always displayed in the correct orientation and scale, depending on which way the user is holding their phone.

EDIT: I now know that I can't have both landscape and portrait orientations in a single video, so the result I am expecting would be to have the final video in landscape orientation. I have figured out how to switch all the orientations and scales so everything comes out the same way up, but my output is a portrait video. If anyone could help me change this so that my output is landscape, it would be greatly appreciated.

Below is the function I use to get the instruction for each video:

 func videoTransformForTrack(asset: AVAsset) -> CGAffineTransform {
     var return_value: CGAffineTransform?
     let assetTrack = asset.tracksWithMediaType(AVMediaTypeVideo)[0]
     let transform = assetTrack.preferredTransform
     let assetInfo = orientationFromTransform(transform)

     var scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.width
     if assetInfo.isPortrait {
         // Portrait clips: the rotated clip's display width is naturalSize.height,
         // so scale by that to fit the screen width
         scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.height
         let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
         return_value = CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor)
     } else {
         // Landscape clips: scale to fit the width and centre vertically
         let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
         var concat = CGAffineTransformConcat(
             CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor),
             CGAffineTransformMakeTranslation(0, UIScreen.mainScreen().bounds.width / 2))
         if assetInfo.orientation == .Down {
             // Upside-down clips need an extra 180° rotation plus a recentring translation
             let fixUpsideDown = CGAffineTransformMakeRotation(CGFloat(M_PI))
             let windowBounds = UIScreen.mainScreen().bounds
             let yFix = assetTrack.naturalSize.height + windowBounds.height
             let centerFix = CGAffineTransformMakeTranslation(assetTrack.naturalSize.width, yFix)
             concat = CGAffineTransformConcat(CGAffineTransformConcat(fixUpsideDown, centerFix), scaleFactor)
         }
         return_value = concat
     }
     return return_value!
 }
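One thing worth checking in the function above is concatenation order: `CGAffineTransformConcat(t1, t2)` applies `t1` first, then `t2`, so the `preferredTransform` rotation must come before the scale. A tiny sketch of the arithmetic (plain CoreGraphics, no AVFoundation; the sample point is an assumption for illustration):

```swift
import CoreGraphics

// A 90° rotation — what preferredTransform looks like for a portrait clip
let rotate = CGAffineTransformMake(0, 1, -1, 0, 0, 0)
let scale = CGAffineTransformMakeScale(0.5, 0.5)

// Rotation first, then scale: take a point on the clip's right edge
let p = CGPointApplyAffineTransform(CGPointMake(100, 0),
                                    CGAffineTransformConcat(rotate, scale))
// p is (0, 50): rotated onto the y-axis, then halved.
// Concatenating in the other order would scale first and rotate the
// already-scaled point — same here, but not once translations are involved.
```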

And the exporter:

  // Create AVMutableComposition to contain all video tracks
  let mix_composition = AVMutableComposition()
  var total_time = kCMTimeZero

  // Loop over videos and insert them into a single track, incrementing the total duration
  let video_track = mix_composition.addMutableTrackWithMediaType(AVMediaTypeVideo,
      preferredTrackID: CMPersistentTrackID())
  let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: video_track)

  for video in videos {
      let shortened_duration = CMTimeSubtract(video.duration, CMTimeMake(1, 10))
      let videoAssetTrack = video.tracksWithMediaType(AVMediaTypeVideo)[0]
      do {
          try video_track.insertTimeRange(CMTimeRangeMake(kCMTimeZero, shortened_duration),
              ofTrack: videoAssetTrack, atTime: total_time)
          video_track.preferredTransform = videoAssetTrack.preferredTransform
      } catch _ {
      }
      instruction.setTransform(videoTransformForTrack(video), atTime: total_time)

      // Add the video's duration to the running total
      total_time = CMTimeAdd(total_time, shortened_duration)
  }

  // Create the main instruction for the video composition
  let main_instruction = AVMutableVideoCompositionInstruction()
  main_instruction.timeRange = CMTimeRangeMake(kCMTimeZero, total_time)
  main_instruction.layerInstructions = [instruction]

  let main_composition = AVMutableVideoComposition() // declaration missing from the original snippet
  main_composition.instructions = [main_instruction]
  main_composition.frameDuration = CMTimeMake(1, 30)
  main_composition.renderSize = CGSize(width: UIScreen.mainScreen().bounds.width,
      height: UIScreen.mainScreen().bounds.height)

  let exporter = AVAssetExportSession(asset: mix_composition, presetName: AVAssetExportPreset640x480)
  exporter!.outputURL = final_url
  exporter!.outputFileType = AVFileTypeMPEG4
  exporter!.shouldOptimizeForNetworkUse = true
  exporter!.videoComposition = main_composition

  // 6 - Perform the export
  exporter!.exportAsynchronouslyWithCompletionHandler() {
      // Hand the result back on the main queue
      dispatch_async(dispatch_get_main_queue(), { () -> Void in
          self.exportDidFinish(exporter!)
      })
  }
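Regarding the EDIT (getting a landscape output): one possible approach is to fix `renderSize` to a landscape resolution instead of the (portrait) screen bounds, then scale each clip's `preferredTransform` into that frame. This is only a sketch; the 640×360 size and the helper name are assumptions, not part of the question's code:

```swift
import AVFoundation
import CoreGraphics

// Sketch only: a per-track transform aimed at a fixed landscape render size.
// The 640x360 size is an assumption — any landscape resolution would do.
let landscapeRenderSize = CGSize(width: 640, height: 360)

func landscapeTransformForTrack(assetTrack: AVAssetTrack) -> CGAffineTransform {
    let t = assetTrack.preferredTransform
    // A 90°-rotated (portrait) clip has a == 0, |b| == 1, d == 0
    let isPortrait = (t.a == 0 && abs(t.b) == 1.0 && t.d == 0)
    if isPortrait {
        // A rotated clip displays naturalSize.width as its height, so scale it
        // to the render height and centre it horizontally.
        let scale = landscapeRenderSize.height / assetTrack.naturalSize.width
        let scaled = CGAffineTransformConcat(t, CGAffineTransformMakeScale(scale, scale))
        let xOffset = (landscapeRenderSize.width - assetTrack.naturalSize.height * scale) / 2
        return CGAffineTransformConcat(scaled, CGAffineTransformMakeTranslation(xOffset, 0))
    } else {
        // A landscape clip can simply be scaled to fill the render width.
        let scale = landscapeRenderSize.width / assetTrack.naturalSize.width
        return CGAffineTransformConcat(t, CGAffineTransformMakeScale(scale, scale))
    }
}
```

With something like this, `main_composition.renderSize` would be set to `landscapeRenderSize` rather than the screen bounds, so the exported file itself is landscape.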

Sorry if I have gone on a bit; I just want to make sure I am being very clear about my problem, since other answers have not helped me.

I am not sure that your orientationFromTransform() is giving you the correct orientation.

I think you should try to modify it, or try something like this:

 extension AVAsset {
     func videoOrientation() -> (orientation: UIInterfaceOrientation, device: AVCaptureDevicePosition) {
         var orientation: UIInterfaceOrientation = .Unknown
         var device: AVCaptureDevicePosition = .Unspecified

         let tracks: [AVAssetTrack] = self.tracksWithMediaType(AVMediaTypeVideo)
         if let videoTrack = tracks.first {
             let t = videoTrack.preferredTransform
             if (t.a == 0 && t.b == 1.0 && t.d == 0) {
                 orientation = .Portrait
                 if t.c == 1.0 {
                     device = .Front
                 } else if t.c == -1.0 {
                     device = .Back
                 }
             } else if (t.a == 0 && t.b == -1.0 && t.d == 0) {
                 orientation = .PortraitUpsideDown
                 if t.c == -1.0 {
                     device = .Front
                 } else if t.c == 1.0 {
                     device = .Back
                 }
             } else if (t.a == 1.0 && t.b == 0 && t.c == 0) {
                 orientation = .LandscapeRight
                 if t.d == -1.0 {
                     device = .Front
                 } else if t.d == 1.0 {
                     device = .Back
                 }
             } else if (t.a == -1.0 && t.b == 0 && t.c == 0) {
                 orientation = .LandscapeLeft
                 if t.d == 1.0 {
                     device = .Front
                 } else if t.d == -1.0 {
                     device = .Back
                 }
             }
         }
         return (orientation, device)
     }
 }
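As a hypothetical usage sketch (names like `video`, `instruction`, and `total_time` refer to the question's exporter loop), the extension could drive which transform branch each clip gets before its layer instruction is set:

```swift
// Hypothetical: inspect each clip's orientation inside the exporter loop.
let (orientation, device) = video.videoOrientation()

// Portrait and upside-down clips need the 90°/270° rotation path;
// landscape clips only need scaling into the render size.
let needsRotation = (orientation == .Portrait || orientation == .PortraitUpsideDown)

// needsRotation would then select the portrait branch of the
// transform-building code before calling:
// instruction.setTransform(videoTransformForTrack(video), atTime: total_time)
```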