AVFoundation: overlay text on different videos, then combine and export
I'm stuck on what seems like a simple AVFoundation problem, and I can't find an answer on Stack or anywhere else online. I'm trying to take 8 videos, overlay a different piece of text on each one, and then combine them into a single video. I've got the combining part working, but for some reason I can't figure out how to add the text layer on top of each video first.

I've been following Ray Wenderlich's tutorial, which is great, but I can't map it onto my exact situation. Below is the code I have so far for combining the videos. Any help is appreciated!
// 1 - Create a composition with one video track and one audio track
var mainComposition = AVMutableComposition()
var videoCompositionTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeVideo,
    preferredTrackID: CMPersistentTrackID())
var audioCompositionTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeAudio,
    preferredTrackID: CMPersistentTrackID())
var insertTime = kCMTimeZero
var videoCompositionLocal = AVMutableVideoComposition() // created but never handed to the exporter

// 2 - Append each clip's video and audio back to back on the composition tracks
for (index, playerItem) in enumerate(flipsArray) {
    var videoAsset = playerItem.asset
    var word = self.words![index] // the word that should be overlaid on this clip (currently unused)

    let videoTimeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
    let videoTrack: AnyObject = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
    videoCompositionTrack.insertTimeRange(videoTimeRange,
        ofTrack: videoTrack as AVAssetTrack,
        atTime: insertTime,
        error: nil)

    let audioTimeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
    let audioTrack: AnyObject = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0]
    audioCompositionTrack.insertTimeRange(audioTimeRange,
        ofTrack: audioTrack as AVAssetTrack,
        atTime: insertTime,
        error: nil)

    // 3 - Advance the insertion point by the length of the clip just added
    insertTime = CMTimeAdd(insertTime, videoAsset.duration)
}

// 4 - Get path
let paths = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)
let documentsDirectory = paths[0] as NSString
let myPathDocs = documentsDirectory.stringByAppendingPathComponent("flip-\(arc4random() % 1000).mov")
let url = NSURL.fileURLWithPath(myPathDocs)

// 5 - Create exporter
var exporter = AVAssetExportSession(asset: mainComposition,
    presetName: AVAssetExportPresetMediumQuality)
println("-------------")
println(url)
println("-------------")
exporter.outputURL = url
exporter.outputFileType = AVFileTypeQuickTimeMovie
exporter.shouldOptimizeForNetworkUse = true
exporter.exportAsynchronouslyWithCompletionHandler({
    switch exporter.status {
    case AVAssetExportSessionStatus.Failed:
        println("Merge/export failed: \(exporter.error)")
    case AVAssetExportSessionStatus.Cancelled:
        println("Merge/export cancelled: \(exporter.error)")
    default:
        println("Merge/export complete.")
        self.exportDidFinish(exporter)
    }
})
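For completeness, the piece missing from the snippet above is an AVMutableVideoComposition that carries a layer tree and is attached to the exporter; without it, nothing drawn with Core Animation reaches the output file. Below is a rough sketch of that wiring, filled in after the fact. It reuses the variables from the code above, and the word, font size and render size are placeholders rather than my real values:

// Size of the output frame; in practice this would come from the first clip's video track.
let renderSize = CGSizeMake(640, 480)

// Layer tree: the video is rendered into videoLayer, and the text sits on top of it.
let textLayer = CATextLayer()
textLayer.string = "placeholder word"
textLayer.fontSize = 48
textLayer.foregroundColor = UIColor.whiteColor().CGColor
textLayer.alignmentMode = kCAAlignmentCenter
textLayer.frame = CGRectMake(0, 40, renderSize.width, 60)

let videoLayer = CALayer()
videoLayer.frame = CGRectMake(0, 0, renderSize.width, renderSize.height)

let parentLayer = CALayer()
parentLayer.frame = videoLayer.frame
parentLayer.addSublayer(videoLayer)
parentLayer.addSublayer(textLayer)

// One instruction spanning the whole merged composition.
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoCompositionTrack)
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, mainComposition.duration)
instruction.layerInstructions = [layerInstruction]

videoCompositionLocal.renderSize = renderSize
videoCompositionLocal.frameDuration = CMTimeMake(1, 30)
videoCompositionLocal.instructions = [instruction]
videoCompositionLocal.animationTool = AVVideoCompositionCoreAnimationTool(
    postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

// This must be set before exportAsynchronouslyWithCompletionHandler is called,
// otherwise the exporter ignores the layer tree entirely.
exporter.videoComposition = videoCompositionLocal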
EDIT:
I've got the text overlaying on the video now. The problem is that the text doesn't animate at all (the word never changes). My goal is to change the text value every X seconds, where X is the length of the current video clip. Help!
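For anyone who wants to stay with this single-pass approach: the two things that most often stop layer animations from showing up in an export are a beginTime of exactly 0 (Core Animation treats that as "now", so time zero has to be expressed as AVCoreAnimationBeginTimeAtZero) and animations being removed on completion. Here is a rough sketch of how the word swap could be timed to the clip boundaries, building on the layer tree from the sketch above. It is illustrative only and not the route I ended up taking (see below):

// One CATextLayer per word, added to parentLayer and hidden until its clip starts.
var startSeconds = 0.0
for (index, playerItem) in enumerate(flipsArray) {
    let wordLayer = CATextLayer()
    wordLayer.string = self.words![index]
    wordLayer.fontSize = 48
    wordLayer.foregroundColor = UIColor.whiteColor().CGColor
    wordLayer.alignmentMode = kCAAlignmentCenter
    wordLayer.frame = CGRectMake(0, 40, renderSize.width, 60)
    wordLayer.opacity = 0.0
    parentLayer.addSublayer(wordLayer)

    // Fade this word in exactly when its clip starts on the composition timeline.
    let show = CABasicAnimation(keyPath: "opacity")
    show.fromValue = 0.0
    show.toValue = 1.0
    show.duration = 0.1
    show.beginTime = startSeconds == 0.0 ? AVCoreAnimationBeginTimeAtZero : startSeconds
    show.removedOnCompletion = false      // keep the final value after the animation ends
    show.fillMode = kCAFillModeForwards
    wordLayer.addAnimation(show, forKey: "show")

    // Fade it back out when the next clip starts; the last word just stays up.
    let clipSeconds = CMTimeGetSeconds(playerItem.asset.duration)
    if index < flipsArray.count - 1 {
        let hide = CABasicAnimation(keyPath: "opacity")
        hide.fromValue = 1.0
        hide.toValue = 0.0
        hide.duration = 0.1
        hide.beginTime = startSeconds + clipSeconds
        hide.removedOnCompletion = false
        hide.fillMode = kCAFillModeForwards
        wordLayer.addAnimation(hide, forKey: "hide")
    }
    startSeconds += clipSeconds
}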
I found a solution to my problem, in case anyone finds it useful. Instead of stitching all 8 videos together and then applying an animated word layer on top, timed to switch words, right before exporting, I exported each video individually with its own text overlaid on it, and then called a separate method to stitch those 8 newly exported videos into one. Since each word change lines up exactly with the duration of its asset, this worked great for me.
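The shape of that driver is roughly the following. exportFlip and stitchFlips are hypothetical stand-ins for my real methods: exportFlip is assumed to build a one-clip composition, apply the text overlay from the first sketch using that clip's word, and export it to its own file, and stitchFlips is assumed to run the merge code at the top of this post over the resulting assets:

// Export each clip with its own word, keeping the original order, then stitch.
var overlaidURLs = [NSURL?](count: flipsArray.count, repeatedValue: nil)
var finished = 0

for (index, playerItem) in enumerate(flipsArray) {
    // exportFlip is a hypothetical helper: one-clip composition + CATextLayer overlay + export.
    exportFlip(playerItem.asset, word: self.words![index]) { outputURL in
        // Export completion handlers arrive on an arbitrary queue, so hop to main before mutating state.
        dispatch_async(dispatch_get_main_queue()) {
            overlaidURLs[index] = outputURL   // slot the file back into its original position
            finished++
            if finished == flipsArray.count {
                // All 8 single-word clips are on disk; merge them with the code shown at the top.
                let assets = overlaidURLs.map { AVURLAsset(URL: $0!, options: nil) as AVAsset }
                self.stitchFlips(assets)      // hypothetical entry point into the merge code
            }
        }
    }
}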
Hope this helps someone!