Fixing orientation when stitching (merging) videos using AVMutableComposition
TL;DR - see the edit below.
I'm creating a test app in Swift in which I want to stitch multiple videos together from my app's documents directory using AVMutableComposition.
I've had some success with this: all of my videos get stitched together, and everything shows the correct portrait and landscape sizes.
My problem, however, is that every video is displayed in the orientation of the last video in the composition.
I know that to fix this I need to add a layer instruction for each track I add, but I can't seem to get it right. With the answers I've found, the whole composition ends up in portrait orientation, with landscape videos simply scaled down to fit the portrait view, so when I turn my phone sideways to watch the landscape videos they are still small, because they've been scaled to portrait dimensions.
That's not the result I'm after. I want the expected behaviour: if a video is landscape, it appears scaled down while in portrait mode, but if the phone is rotated I want that landscape video to fill the screen (just as it does when simply viewing a landscape video in Photos), and the same for portrait, so that a portrait video is full screen when viewed in portrait and scaled down to landscape size when viewed in landscape (just as happens when viewing a portrait video in Photos).
In summary, the desired result is that when watching a composition containing both landscape and portrait videos, I can hold my phone sideways and the landscape videos are full screen while the portrait ones are scaled down, or, when watching the same composition in portrait, the portrait videos are full screen and the landscape ones are scaled down.
With all the answers I've found this was not the case; they all showed very unexpected behaviour when videos imported from Photos were added to the composition, and the same issues appeared when adding videos shot with the front-facing camera (to be clear, with my current implementation, videos imported from the library and "selfie" videos appear at the correct size without these issues).
I'm looking for a way to rotate/scale these videos so that they are always shown in the correct orientation and scale, depending on how the user is holding their phone.
EDIT: I now know that I can't have both landscape and portrait orientations in a single video, so the expected result I'm looking for is a final video in landscape orientation. I've figured out how to switch all the orientations and scales so that everything ends up the same way up, but my output is a portrait video. If someone could help me change it so that my output is landscape instead, it would be much appreciated.
Below is the function I use to get each video's instruction (transform):
func videoTransformForTrack(asset: AVAsset) -> CGAffineTransform
{
    var return_value: CGAffineTransform?
    let assetTrack = asset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let transform = assetTrack.preferredTransform
    let assetInfo = orientationFromTransform(transform)
    // Scale factor that fits the clip's natural width to the screen width
    var scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.width
    if assetInfo.isPortrait
    {
        // Portrait clips are rotated by preferredTransform, so their on-screen width
        // is naturalSize.height; fit that to the screen width instead
        scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.height
        let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
        return_value = CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor)
    }
    else
    {
        let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
        var concat = CGAffineTransformConcat(CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor), CGAffineTransformMakeTranslation(0, UIScreen.mainScreen().bounds.width / 2))
        if assetInfo.orientation == .Down
        {
            // Clip reported as upside-down: rotate it 180 degrees and translate it back into frame
            let fixUpsideDown = CGAffineTransformMakeRotation(CGFloat(M_PI))
            let windowBounds = UIScreen.mainScreen().bounds
            let yFix = assetTrack.naturalSize.height + windowBounds.height
            let centerFix = CGAffineTransformMakeTranslation(assetTrack.naturalSize.width, yFix)
            concat = CGAffineTransformConcat(CGAffineTransformConcat(fixUpsideDown, centerFix), scaleFactor)
        }
        return_value = concat
    }
    return return_value!
}
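Since the edit asks for a landscape final video, here is a rough sketch, under a few assumptions and not part of the code above, of how a variant of this function could aspect-fit every clip into a fixed landscape render size. renderSize is a hypothetical parameter that must match the composition's renderSize, orientationFromTransform() is the same helper used above, the sketch assumes preferredTransform already maps each clip into positive coordinates (true for typical camera footage), and the upside-down case is omitted for brevity:
func landscapeTransformForTrack(asset: AVAsset, renderSize: CGSize) -> CGAffineTransform
{
    // Sketch only: aspect-fit a clip into a fixed landscape canvas (renderSize is assumed)
    let assetTrack = asset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let transform = assetTrack.preferredTransform
    let assetInfo = orientationFromTransform(transform)

    // Size the clip occupies after its preferredTransform is applied
    let displaySize = assetInfo.isPortrait
        ? CGSize(width: assetTrack.naturalSize.height, height: assetTrack.naturalSize.width)
        : assetTrack.naturalSize

    // Aspect-fit that size into the landscape canvas
    let scale = min(renderSize.width / displaySize.width, renderSize.height / displaySize.height)
    let scaleFactor = CGAffineTransformMakeScale(scale, scale)

    // Centre the result (portrait clips end up pillar-boxed)
    let tx = (renderSize.width - displaySize.width * scale) / 2
    let ty = (renderSize.height - displaySize.height * scale) / 2
    let center = CGAffineTransformMakeTranslation(tx, ty)

    // Apply the clip's own transform first, then the scale, then the centring translation
    return CGAffineTransformConcat(CGAffineTransformConcat(transform, scaleFactor), center)
}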
The exporter:
// Create AVMutableComposition to contain all AVMutableComposition tracks
let mix_composition = AVMutableComposition()
var total_time = kCMTimeZero
// Loop over videos and create tracks, keep incrementing total duration
let video_track = mix_composition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
var instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: video_track)
for video in videos
{
    // Trim a fraction of a second off the end of each clip
    let shortened_duration = CMTimeSubtract(video.duration, CMTimeMake(1, 10))
    let videoAssetTrack = video.tracksWithMediaType(AVMediaTypeVideo)[0]
    do
    {
        try video_track.insertTimeRange(CMTimeRangeMake(kCMTimeZero, shortened_duration),
                                        ofTrack: videoAssetTrack,
                                        atTime: total_time)
        video_track.preferredTransform = videoAssetTrack.preferredTransform
    }
    catch _
    {
    }
    instruction.setTransform(videoTransformForTrack(video), atTime: total_time)
    // Add video duration to total time
    total_time = CMTimeAdd(total_time, shortened_duration)
}
// Create the video composition and its main instruction
let main_composition = AVMutableVideoComposition()
let main_instruction = AVMutableVideoCompositionInstruction()
main_instruction.timeRange = CMTimeRangeMake(kCMTimeZero, total_time)
main_instruction.layerInstructions = [instruction]
main_composition.instructions = [main_instruction]
main_composition.frameDuration = CMTimeMake(1, 30)
main_composition.renderSize = CGSize(width: UIScreen.mainScreen().bounds.width, height: UIScreen.mainScreen().bounds.height)
let exporter = AVAssetExportSession(asset: mix_composition, presetName: AVAssetExportPreset640x480)
exporter!.outputURL = final_url
exporter!.outputFileType = AVFileTypeMPEG4
exporter!.shouldOptimizeForNetworkUse = true
exporter!.videoComposition = main_composition
// Perform the export
exporter!.exportAsynchronouslyWithCompletionHandler()
{
    // Assign return values based on success of export
    dispatch_async(dispatch_get_main_queue(), { () -> Void in
        self.exportDidFinish(exporter!)
    })
}
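For the landscape output asked about in the edit, one plausible change (a sketch under assumptions, not something verified here) is to stop deriving the render size from the portrait screen bounds and give the composition a fixed landscape canvas, with an export preset to match:
// Sketch: a fixed landscape canvas; 1280x720 is an arbitrary example size and must match
// whatever size the layer-instruction transforms scale the clips to
main_composition.renderSize = CGSize(width: 1280, height: 720)
// A preset close to that size avoids the exporter squeezing the result into 640x480
let exporter = AVAssetExportSession(asset: mix_composition, presetName: AVAssetExportPreset1280x720)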
Sorry for the long explanation, I just wanted to make sure I was being very clear about my problem, since the other answers haven't worked for me.
I'm not sure your orientationFromTransform() is getting the orientation right. I suggest you try modifying it, or try something like this:
extension AVAsset {

    func videoOrientation() -> (orientation: UIInterfaceOrientation, device: AVCaptureDevicePosition) {
        var orientation: UIInterfaceOrientation = .Unknown
        var device: AVCaptureDevicePosition = .Unspecified

        let tracks: [AVAssetTrack] = self.tracksWithMediaType(AVMediaTypeVideo)
        if let videoTrack = tracks.first {
            let t = videoTrack.preferredTransform

            if (t.a == 0 && t.b == 1.0 && t.d == 0) {
                orientation = .Portrait
                if t.c == 1.0 {
                    device = .Front
                } else if t.c == -1.0 {
                    device = .Back
                }
            }
            else if (t.a == 0 && t.b == -1.0 && t.d == 0) {
                orientation = .PortraitUpsideDown
                if t.c == -1.0 {
                    device = .Front
                } else if t.c == 1.0 {
                    device = .Back
                }
            }
            else if (t.a == 1.0 && t.b == 0 && t.c == 0) {
                orientation = .LandscapeRight
                if t.d == -1.0 {
                    device = .Front
                } else if t.d == 1.0 {
                    device = .Back
                }
            }
            else if (t.a == -1.0 && t.b == 0 && t.c == 0) {
                orientation = .LandscapeLeft
                if t.d == 1.0 {
                    device = .Front
                } else if t.d == -1.0 {
                    device = .Back
                }
            }
        }

        return (orientation, device)
    }
}
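A quick usage sketch, reusing the videos array from the exporter code above, that logs what the extension reports for each clip so it can be compared against what orientationFromTransform() returns:
for video in videos
{
    // videoOrientation() is the extension above; rawValue keeps the log output compact
    let info = video.videoOrientation()
    print("orientation: \(info.orientation.rawValue), camera: \(info.device.rawValue)")
}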