AVMutableAudioMix multiple volume changes to single track

I'm working on an app that merges multiple video clips into one final video. I want to give users the ability to mute individual clips if desired (so that only parts of the final merged video are muted). I have wrapped the AVAssets in a class called "Video" that has a "shouldMute" property.

My problem is that when I set the volume of one of the AVAssetTracks to zero, it stays muted for the remainder of the final video. Here is my code:

    var completeDuration : CMTime = CMTimeMake(0, 1)
    var insertTime = kCMTimeZero
    var layerInstructions = [AVVideoCompositionLayerInstruction]()
    let mixComposition = AVMutableComposition()
    let audioMix = AVMutableAudioMix()

    let videoTrack =
        mixComposition.addMutableTrack(withMediaType: AVMediaType.video,
                                       preferredTrackID: kCMPersistentTrackID_Invalid)
    let audioTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)


    // iterate through video assets and merge together
    for (i, video) in clips.enumerated() {

        let videoAsset = video.asset
        var clipDuration = videoAsset.duration

        do {
            if video == clips.first {
                insertTime = kCMTimeZero
            } else {
                insertTime = completeDuration
            }


            if let videoAssetTrack = videoAsset.tracks(withMediaType: .video).first {
                try videoTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clipDuration), of: videoAssetTrack, at: insertTime)
                completeDuration = CMTimeAdd(completeDuration, clipDuration)
            }

            if let audioAssetTrack = videoAsset.tracks(withMediaType: .audio).first {
                try audioTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clipDuration), of: audioAssetTrack, at: insertTime)

                if video.shouldMute {
                    let audioMixInputParams = AVMutableAudioMixInputParameters()
                    audioMixInputParams.trackID = audioTrack!.trackID
                    audioMixInputParams.setVolume(0.0, at: insertTime)
                    audioMix.inputParameters.append(audioMixInputParams)
                }
            }

        } catch let error as NSError {
            print("error: \(error)")
        }

        let videoInstruction = videoCompositionInstructionForTrack(track: videoTrack!, video: video)
        if video != clips.last{
            videoInstruction.setOpacity(0.0, at: completeDuration)
        }

        layerInstructions.append(videoInstruction)
    } // end of video asset iteration

If I add another setVolume:atTime instruction at the end of the clip to bring the volume back up to 1.0, the first volume instruction is ignored entirely and the whole video plays at full volume.

In other words, this doesn't work:

    if video.shouldMute {
        let audioMixInputParams = AVMutableAudioMixInputParameters()
        audioMixInputParams.trackID = audioTrack!.trackID
        audioMixInputParams.setVolume(0.0, at: insertTime)
        audioMixInputParams.setVolume(1.0, at: completeDuration)
        audioMix.inputParameters.append(audioMixInputParams)
    }

I have set the audioMix on both my AVPlayerItem and my AVAssetExportSession. What am I doing wrong? What can I do to let the user mute the time ranges of individual clips before merging them into the final video?

It turns out I was going about this the wrong way. As you can see above, my composition has two AVMutableCompositionTracks: one video track and one audio track. Even though I insert the time ranges of a series of other tracks into those two tracks, there are still ultimately only two tracks. So I only needed a single AVMutableAudioMixInputParameters object associated with my one audio track.
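Concretely, that single parameters object only needs to be created once, before the clip loop, and tied to the composition's audio track. A minimal sketch, using the same names as the code above:

    // Created once, before iterating over the clips: one parameters
    // object, keyed to the composition's single mutable audio track.
    let audioMixInputParams = AVMutableAudioMixInputParameters()
    audioMixInputParams.trackID = audioTrack!.trackID
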

I initialized one AVMutableAudioMixInputParameters object, and then, after inserting each clip's time range, I check whether the clip should be muted and set a volume ramp for that clip's time range (the time range expressed relative to the entire audio track). Here is what that looks like inside my clip iteration:

    if let audioAssetTrack = videoAsset.tracks(withMediaType: .audio).first {
        try audioTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clipDuration), of: audioAssetTrack, at: insertTime)

        if video.shouldMute {
            audioMixInputParams.setVolumeRamp(fromStartVolume: 0.0, toEndVolume: 0.0, timeRange: CMTimeRangeMake(insertTime, clipDuration))
        } else {
            audioMixInputParams.setVolumeRamp(fromStartVolume: 1.0, toEndVolume: 1.0, timeRange: CMTimeRangeMake(insertTime, clipDuration))
        }
    }
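After the loop, the single parameters object is handed to the mix, and the mix is attached wherever the composition is consumed. A hedged sketch of that wiring, where `outputURL` and the preset choice are assumptions rather than part of the original code:

    // audioMixInputParams is the object populated in the loop above.
    audioMix.inputParameters = [audioMixInputParams]

    // For playback:
    let playerItem = AVPlayerItem(asset: mixComposition)
    playerItem.audioMix = audioMix

    // For export (outputURL is an assumed destination URL):
    if let exporter = AVAssetExportSession(asset: mixComposition,
                                           presetName: AVAssetExportPresetHighestQuality) {
        exporter.audioMix = audioMix
        exporter.outputURL = outputURL
        exporter.outputFileType = .mp4
        exporter.exportAsynchronously {
            // handle completion / errors here
        }
    }
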