How to sync AVPlayer and MTKView

I have a project where users can record a video and then add filters to it or change basic settings like brightness and contrast. For this I use BBMetalImage, which basically renders the video in an MTKView (named BBMetalView in that project).

Everything works fine - I can play the video and add the filters and effects I want, but there is no audio. I asked the author about this, and he suggested using an AVPlayer (or AVAudioPlayer) for that. So I did. However, the video and audio are out of sync. Probably because of different bitrates in the first place, and the library author also mentioned that the frame rate can differ because of the filtering process (the time this takes is variable):

The render view FPS is not exactly the same to the actual rate. Because the video source output frame is processed by filters and the filter process time is variable.

First of all, I crop my video to the desired aspect ratio (4:5). I save this file (480x600) locally, using AVVideoProfileLevelH264HighAutoLevel as the AVVideoProfileLevelKey. The audio configuration, using NextLevelSessionExporter, has the following settings: AVEncoderBitRateKey: 128000, AVNumberOfChannelsKey: 2, AVSampleRateKey: 44100.
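
Roughly, those export settings look like this (just a sketch of the dictionaries I hand to NextLevelSessionExporter's video/audio output configuration; the AAC format key is my assumption for completeness, the rest matches the values above):

let videoCompressionSettings: [String: Any] = [
    AVVideoProfileLevelKey: AVVideoProfileLevelH264HighAutoLevel
]

let videoOutputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 480,
    AVVideoHeightKey: 600,
    AVVideoCompressionPropertiesKey: videoCompressionSettings
]

let audioOutputSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC, // assumption; bit rate, channels and sample rate are as listed above
    AVEncoderBitRateKey: 128000,
    AVNumberOfChannelsKey: 2,
    AVSampleRateKey: 44100
]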

The BBMetalImage library then takes this saved file and provides an MTKView (BBMetalView) to display the video, allowing me to add filters and effects in real time. The setup looks something like this:

self.metalView = BBMetalView(frame: CGRect(x: 0, y: self.view.center.y - ((UIScreen.main.bounds.width * 1.25) / 2), width: UIScreen.main.bounds.width, height: UIScreen.main.bounds.width * 1.25))
self.view.addSubview(self.metalView)
self.videoSource = BBMetalVideoSource(url: outputURL)
self.videoSource.playWithVideoRate = true
self.videoSource.audioConsumer = self.metalAudio
self.videoSource.add(consumer: self.metalView)
self.videoSource.add(consumer: self.videoWriter)
// AVPlayer provides the audio only; its layer is kept at zero size since BBMetalView shows the video
self.audioItem = AVPlayerItem(url: outputURL)
self.audioPlayer = AVPlayer(playerItem: self.audioItem)
self.playerLayer = AVPlayerLayer(player: self.audioPlayer)
self.videoPreview.layer.addSublayer(self.playerLayer!)
self.playerLayer?.frame = CGRect(x: 0, y: 0, width: 0, height: 0)
self.playerLayer?.backgroundColor = UIColor.black.cgColor
self.startVideo()

startVideo() goes like this:

audioPlayer.seek(to: .zero)
audioPlayer.play()
videoSource.start(progress: { (frameTime) in
    print(frameTime)
}) { [weak self] (finish) in
    guard let self = self else { return }
    self.startVideo()
}

This may all be pretty vague because of the external library/libraries. However, my question is quite simple: is there any way I can sync the MTKView with my AVPlayer? It would help me a lot, and I'm sure Silence-GitHub would implement this feature into the library as well to help lots of other users. Any ideas on how to approach this are welcome!

Given your situation, it seems you need to try one of the following two approaches:

1) Try to apply some kind of overlay that has the desired effect on your video. I could attempt something like this, but I have personally not done it.

2) This takes a bit more time up front - in the sense that the program has to spend some time (how long depends on your filtering) re-creating a new video with the desired effects. You can try this and see if it works for you.

I made my own VideoCreator using some source code from somewhere on SO.

    //Recreates a new video with the filter applied
    public static func createFilteredVideo(asset: AVAsset, completionHandler: @escaping (_ asset: AVAsset) -> Void) {
        guard let url = (asset as? AVURLAsset)?.url else { return }
        guard let image = url.videoSnapshot() else { return }
        let fps = Int32(asset.tracks(withMediaType: .video)[0].nominalFrameRate)
        let writer = VideoCreator(fps: fps, width: image.size.width, height: image.size.height, audioSettings: nil)

        let timeScale = asset.duration.timescale
        let timeValue = asset.duration.value
        let frameTime = 1/Double(fps) * Double(timeScale)
        let numberOfImages = Int(Double(timeValue)/Double(frameTime))
        let queue = DispatchQueue(label: "com.queue.queue", qos: .utility)
        let composition = AVVideoComposition(asset: asset) { (request) in
            let source = request.sourceImage.clampedToExtent()
            //This is where you create your filter and get your filtered result.
            //Here is an example (maskImage is whatever CIImage you want to blend with):
            let filter = CIFilter(name: "CIBlendWithMask")!
            filter.setValue(maskImage, forKey: "inputMaskImage")
            filter.setValue(source, forKey: "inputImage")
            let filteredImage = filter.outputImage!.cropped(to: request.sourceImage.extent)
            request.finish(with: filteredImage, context: nil)
        }

        var i = 0
        getAudioFromURL(url: url) { (buffer) in
            writer.addAudio(audio: buffer, time: .zero)
            if i == 0 {
                // Start the writer session with the first audio buffer so timestamps line up
                writer.startCreatingVideo(initialBuffer: buffer, completion: {})
            }
            i += 1
        }

        let group = DispatchGroup()
        for i in 0..<numberOfImages {
            group.enter()
            autoreleasepool {
                let time = CMTime(seconds: Double(Double(i) * frameTime / Double(timeScale)), preferredTimescale: timeScale)
                let image = url.videoSnapshot(time: time, composition: composition)
                queue.async {

                    writer.addImageAndAudio(image: image!, audio: nil, time: time.seconds)
                    group.leave()
                }
            }
        }
        group.notify(queue: queue) {
            writer.finishWriting()
            let url = writer.getURL()

            //Now create exporter to add audio then do completion handler
            completionHandler(AVAsset(url: url))

        }
    }

    static func getAudioFromURL(url: URL, completionHandlerPerBuffer: @escaping ((_ buffer:CMSampleBuffer) -> Void)) {
        let asset = AVURLAsset(url: url, options: [AVURLAssetPreferPreciseDurationAndTimingKey: NSNumber(value: true as Bool)])

        guard let assetTrack = asset.tracks(withMediaType: AVMediaType.audio).first else {
            fatalError("Couldn't load AVAssetTrack")
        }


        guard let reader = try? AVAssetReader(asset: asset)
            else {
                fatalError("Couldn't initialize the AVAssetReader")
        }
        reader.timeRange = CMTimeRange(start: .zero, duration: asset.duration)

        let outputSettingsDict: [String : Any] = [
            AVFormatIDKey: Int(kAudioFormatLinearPCM),
            AVLinearPCMBitDepthKey: 16,
            AVLinearPCMIsBigEndianKey: false,
            AVLinearPCMIsFloatKey: false,
            AVLinearPCMIsNonInterleaved: false
        ]
        let readerOutput = AVAssetReaderTrackOutput(track: assetTrack,
                                                    outputSettings: outputSettingsDict)
        readerOutput.alwaysCopiesSampleData = false
        reader.add(readerOutput)

        guard reader.startReading() else {
            fatalError("Couldn't start the AVAssetReader")
        }

        while reader.status == .reading {
            guard let readSampleBuffer = readerOutput.copyNextSampleBuffer() else { break }
            completionHandlerPerBuffer(readSampleBuffer)

        }
    }

extension URL {
    func videoSnapshot(time:CMTime? = nil, composition:AVVideoComposition? = nil) -> UIImage? {
        let asset = AVURLAsset(url: self)
        let generator = AVAssetImageGenerator(asset: asset)
        generator.appliesPreferredTrackTransform = true
        generator.requestedTimeToleranceBefore = .zero
        generator.requestedTimeToleranceAfter = .zero
        generator.videoComposition = composition

        let timestamp = time ?? CMTime(seconds: 1, preferredTimescale: 60)

        do {
            let imageRef = try generator.copyCGImage(at: timestamp, actualTime: nil)
            return UIImage(cgImage: imageRef)
        }
        catch let error as NSError
        {
            print("Image generation failed with error \(error)")
            return nil
        }
    }
}
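
As the comment inside createFilteredVideo hints, the rendered file still needs the original audio muxed back in. One way to do that - a rough sketch, the mergeAudio helper name is mine and error handling is minimal - is to combine the filtered video track with the original audio track in an AVMutableComposition and export it:

import AVFoundation

// Hypothetical helper: copies the video track of `filteredAsset` and the audio track of
// `originalAsset` into one composition and exports the result to `outputURL`.
func mergeAudio(from originalAsset: AVAsset,
                into filteredAsset: AVAsset,
                outputURL: URL,
                completion: @escaping (URL?) -> Void) {
    let composition = AVMutableComposition()
    guard
        let videoTrack = filteredAsset.tracks(withMediaType: .video).first,
        let audioTrack = originalAsset.tracks(withMediaType: .audio).first,
        let compVideo = composition.addMutableTrack(withMediaType: .video,
                                                    preferredTrackID: kCMPersistentTrackID_Invalid),
        let compAudio = composition.addMutableTrack(withMediaType: .audio,
                                                    preferredTrackID: kCMPersistentTrackID_Invalid)
    else { completion(nil); return }

    do {
        let range = CMTimeRange(start: .zero, duration: filteredAsset.duration)
        try compVideo.insertTimeRange(range, of: videoTrack, at: .zero)
        try compAudio.insertTimeRange(range, of: audioTrack, at: .zero)
    } catch {
        completion(nil); return
    }

    guard let exporter = AVAssetExportSession(asset: composition,
                                              presetName: AVAssetExportPresetHighestQuality) else {
        completion(nil); return
    }
    exporter.outputURL = outputURL
    exporter.outputFileType = .mov
    exporter.exportAsynchronously {
        completion(exporter.status == .completed ? outputURL : nil)
    }
}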

Below is the VideoCreator:

//
//  VideoCreator.swift
//  AKPickerView-Swift
//
//  Created by Impression7vx on 7/16/19.
//

import AVFoundation
import Photos
import UIKit

@available(iOS 11.0, *)
public class VideoCreator: NSObject {

    private var settings:RenderSettings!
    private var imageAnimator:ImageAnimator!

    public override init() {
        self.settings = RenderSettings()
        self.imageAnimator = ImageAnimator(renderSettings: self.settings)
    }

    public convenience init(fps: Int32, width: CGFloat, height: CGFloat, audioSettings: [String:Any]?) {
        self.init()
        self.settings = RenderSettings(fps: fps, width: width, height: height)
        self.imageAnimator = ImageAnimator(renderSettings: self.settings, audioSettings: audioSettings)
    }

    public convenience init(width: CGFloat, height: CGFloat) {
        self.init()
        self.settings = RenderSettings(width: width, height: height)
        self.imageAnimator = ImageAnimator(renderSettings: self.settings)
    }

    func startCreatingVideo(initialBuffer: CMSampleBuffer?, completion: @escaping (() -> Void)) {
        self.imageAnimator.render(initialBuffer: initialBuffer) {
            completion()
        }
    }

    func finishWriting() {
        self.imageAnimator.isDone = true
    }

    func addImageAndAudio(image:UIImage, audio:CMSampleBuffer?, time:CFAbsoluteTime) {
        self.imageAnimator.addImageAndAudio(image: image, audio: audio, time: time)
    }

    func getURL() -> URL {
        return settings!.outputURL
    }

    func addAudio(audio: CMSampleBuffer, time: CMTime) {
        self.imageAnimator.videoWriter.addAudio(buffer: audio, time: time)
    }
}


@available(iOS 11.0, *)
public struct RenderSettings {

    var width: CGFloat = 1280
    var height: CGFloat = 720
    var fps: Int32 = 2   // 2 frames per second
    var avCodecKey = AVVideoCodecType.h264
    var videoFilename = "video"
    var videoFilenameExt = "mov"

    init() { }

    init(width: CGFloat, height: CGFloat) {
        self.width = width
        self.height = height
    }

    init(fps: Int32) {
        self.fps = fps
    }

    init(fps: Int32, width: CGFloat, height: CGFloat) {
        self.fps = fps
        self.width = width
        self.height = height
    }

    var size: CGSize {
        return CGSize(width: width, height: height)
    }

    var outputURL: URL {
        // Use the CachesDirectory so the rendered video file sticks around as long as we need it to.
        // Using the CachesDirectory ensures the file won't be included in a backup of the app.
        let fileManager = FileManager.default
        if let tmpDirURL = try? fileManager.url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: true) {
            return tmpDirURL.appendingPathComponent(videoFilename).appendingPathExtension(videoFilenameExt)
        }
        fatalError("URLForDirectory() failed")
    }
}

@available(iOS 11.0, *)
public class ImageAnimator {

    // Apple suggests a timescale of 600 because it's a multiple of standard video rates 24, 25, 30, 60 fps etc.
    static let kTimescale: Int32 = 600

    let settings: RenderSettings
    let videoWriter: VideoWriter
    var imagesAndAudio:SynchronizedArray<(UIImage, CMSampleBuffer?, CFAbsoluteTime)> = SynchronizedArray<(UIImage, CMSampleBuffer?, CFAbsoluteTime)>()
    var isDone:Bool = false
    let semaphore = DispatchSemaphore(value: 1)

    var frameNum = 0

    class func removeFileAtURL(fileURL: URL) {
        do {
            try FileManager.default.removeItem(atPath: fileURL.path)
        }
        catch _ as NSError {
            // Assume file doesn't exist.
        }
    }

    init(renderSettings: RenderSettings, audioSettings:[String:Any]? = nil) {
        settings = renderSettings
        videoWriter = VideoWriter(renderSettings: settings, audioSettings: audioSettings)
    }

    func addImageAndAudio(image: UIImage, audio: CMSampleBuffer?, time:CFAbsoluteTime) {
        self.imagesAndAudio.append((image, audio, time))
//        print("Adding to array -- \(self.imagesAndAudio.count)")
    }

    func render(initialBuffer: CMSampleBuffer?, completion: @escaping ()->Void) {

        // The VideoWriter will fail if a file exists at the URL, so clear it out first.
        ImageAnimator.removeFileAtURL(fileURL: settings.outputURL)

        videoWriter.start(initialBuffer: initialBuffer)
        videoWriter.render(appendPixelBuffers: appendPixelBuffers) {
            //ImageAnimator.saveToLibrary(self.settings.outputURL)
            completion()
        }

    }

    // This is the callback function for VideoWriter.render()
    func appendPixelBuffers(writer: VideoWriter) -> Bool {

        //Don't stop while images are NOT empty
        while !imagesAndAudio.isEmpty || !isDone {

            if(!imagesAndAudio.isEmpty) {
                let date = Date()

                if writer.isReadyForVideoData == false {
                    // Inform writer we have more buffers to write.
//                    print("Writer is not ready for more data")
                    return false
                }

                autoreleasepool {
                    //This should help but truly doesn't suffice - still need a mutex/lock
                    if(!imagesAndAudio.isEmpty) {
                        semaphore.wait() // requesting resource
                        let imageAndAudio = imagesAndAudio.first()!
                        let image = imageAndAudio.0
//                        let audio = imageAndAudio.1
                        let time = imageAndAudio.2
                        self.imagesAndAudio.removeAtIndex(index: 0)
                        semaphore.signal() // releasing resource
                        let presentationTime = CMTime(seconds: time, preferredTimescale: 600)

//                        if(audio != nil) { videoWriter.addAudio(buffer: audio!) }
                        let success = videoWriter.addImage(image: image, withPresentationTime: presentationTime)
                        if success == false {
                            fatalError("addImage() failed")
                        }
                        else {
//                            print("Added image @ frame \(frameNum) with presTime: \(presentationTime)")
                        }

                        frameNum += 1
                        let final = Date()
                        let timeDiff = final.timeIntervalSince(date)
//                        print("Time: \(timeDiff)")
                    }
                    else {
//                        print("Images was empty")
                    }
                }
            }
        }

        print("Done writing")
        // Inform writer all buffers have been written.
        return true
    }

}

@available(iOS 11.0, *)
public class VideoWriter {

    let renderSettings: RenderSettings
    var audioSettings: [String:Any]?
    var videoWriter: AVAssetWriter!
    var videoWriterInput: AVAssetWriterInput!
    var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!
    var audioWriterInput: AVAssetWriterInput!
    static var ci:Int = 0
    var initialTime:CMTime!

    var isReadyForVideoData: Bool {
        return (videoWriterInput == nil ? false : videoWriterInput!.isReadyForMoreMediaData )
    }

    var isReadyForAudioData: Bool {
        return (audioWriterInput == nil ? false : audioWriterInput!.isReadyForMoreMediaData)
    }

    class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize, alpha:CGImageAlphaInfo) -> CVPixelBuffer? {

        var pixelBufferOut: CVPixelBuffer?

        let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
        if status != kCVReturnSuccess {
            fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
        }

        let pixelBuffer = pixelBufferOut!

        CVPixelBufferLockBaseAddress(pixelBuffer, [])

        let data = CVPixelBufferGetBaseAddress(pixelBuffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: data, width: Int(size.width), height: Int(size.height),
                                bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer), space: rgbColorSpace, bitmapInfo: alpha.rawValue)

        context!.clear(CGRect(x: 0, y: 0, width: size.width, height: size.height))

        let horizontalRatio = size.width / image.size.width
        let verticalRatio = size.height / image.size.height
        //aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
        let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit

        let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)

        let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
        let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0

        let cgImage = image.cgImage != nil ? image.cgImage! : image.ciImage!.convertCIImageToCGImage()

        context!.draw(cgImage!, in: CGRect(x: x, y: y, width: newSize.width, height: newSize.height))

        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])
        return pixelBuffer
    }

    @available(iOS 11.0, *)
    init(renderSettings: RenderSettings, audioSettings:[String:Any]? = nil) {
        self.renderSettings = renderSettings
        self.audioSettings = audioSettings
    }

    func start(initialBuffer: CMSampleBuffer?) {

        let avOutputSettings: [String: AnyObject] = [
            AVVideoCodecKey: renderSettings.avCodecKey as AnyObject,
            AVVideoWidthKey: NSNumber(value: Float(renderSettings.width)),
            AVVideoHeightKey: NSNumber(value: Float(renderSettings.height))
        ]

        let avAudioSettings = audioSettings

        func createPixelBufferAdaptor() {
            let sourcePixelBufferAttributesDictionary = [
                kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB),
                kCVPixelBufferWidthKey as String: NSNumber(value: Float(renderSettings.width)),
                kCVPixelBufferHeightKey as String: NSNumber(value: Float(renderSettings.height))
            ]
            pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
                                                                      sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
        }

        func createAssetWriter(outputURL: URL) -> AVAssetWriter {
            guard let assetWriter = try? AVAssetWriter(outputURL: outputURL, fileType: AVFileType.mov) else {
                fatalError("AVAssetWriter() failed")
            }

            guard assetWriter.canApply(outputSettings: avOutputSettings, forMediaType: AVMediaType.video) else {
                fatalError("canApplyOutputSettings() failed")
            }

            return assetWriter
        }

        videoWriter = createAssetWriter(outputURL: renderSettings.outputURL)
        videoWriterInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: avOutputSettings)
//        if(audioSettings != nil) {
        audioWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
        audioWriterInput.expectsMediaDataInRealTime = true
//        }

        if videoWriter.canAdd(videoWriterInput) {
            videoWriter.add(videoWriterInput)
        }
        else {
            fatalError("canAddInput() returned false")
        }

//        if(audioSettings != nil) {
            if videoWriter.canAdd(audioWriterInput) {
                videoWriter.add(audioWriterInput)
            }
            else {
                fatalError("canAddInput() returned false")
            }
//        }

        // The pixel buffer adaptor must be created before we start writing.
        createPixelBufferAdaptor()

        if videoWriter.startWriting() == false {
            fatalError("startWriting() failed")
        }


        self.initialTime = initialBuffer != nil ? CMSampleBufferGetPresentationTimeStamp(initialBuffer!) : CMTime.zero
        videoWriter.startSession(atSourceTime: self.initialTime)

        precondition(pixelBufferAdaptor.pixelBufferPool != nil, "nil pixelBufferPool")
    }

    func render(appendPixelBuffers: @escaping (VideoWriter)->Bool, completion: @escaping ()->Void) {

        precondition(videoWriter != nil, "Call start() to initialize the writer")

        let queue = DispatchQueue(label: "mediaInputQueue")
        videoWriterInput.requestMediaDataWhenReady(on: queue) {
            let isFinished = appendPixelBuffers(self)
            if isFinished {
                self.videoWriterInput.markAsFinished()
                self.videoWriter.finishWriting() {
                    DispatchQueue.main.async {
                        print("Done Creating Video")
                        completion()
                    }
                }
            }
            else {
                // Fall through. The closure will be called again when the writer is ready.
            }
        }
    }

    func addAudio(buffer: CMSampleBuffer, time: CMTime) {
        if(isReadyForAudioData) {
            print("Writing audio \(VideoWriter.ci) of a time of \(CMSampleBufferGetPresentationTimeStamp(buffer))")
            let duration = CMSampleBufferGetDuration(buffer)
            let offsetBuffer = CMSampleBuffer.createSampleBuffer(fromSampleBuffer: buffer, withTimeOffset: time, duration: duration)
            if(offsetBuffer != nil) {
                print("Added audio")
                self.audioWriterInput.append(offsetBuffer!)
            }
            else {
                print("Not adding audio")
            }
        }

        VideoWriter.ci += 1
    }

    func addImage(image: UIImage, withPresentationTime presentationTime: CMTime) -> Bool {

        precondition(pixelBufferAdaptor != nil, "Call start() to initialize the writer")
        //1
        let pixelBuffer = VideoWriter.pixelBufferFromImage(image: image, pixelBufferPool: pixelBufferAdaptor.pixelBufferPool!, size: renderSettings.size, alpha: CGImageAlphaInfo.premultipliedFirst)!

        return pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime + self.initialTime)
    }
}
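
Note that VideoCreator references a couple of helpers that are not shown above: SynchronizedArray (a thread-safe array wrapper you can find on SO/GitHub), a CIImage.convertCIImageToCGImage() extension, and CMSampleBuffer.createSampleBuffer(fromSampleBuffer:withTimeOffset:duration:). Rough sketches of the last two, assuming a plain CIContext render and CMSampleBufferCreateCopyWithNewTiming for the retiming:

import AVFoundation
import CoreImage

extension CIImage {
    // Assumed helper used by VideoWriter.pixelBufferFromImage: render the CIImage into a CGImage.
    func convertCIImageToCGImage() -> CGImage? {
        let context = CIContext(options: nil)
        return context.createCGImage(self, from: extent)
    }
}

extension CMSampleBuffer {
    // Assumed helper used by VideoWriter.addAudio: copy a sample buffer with its timestamps
    // shifted by `offset` (and an optional new duration).
    static func createSampleBuffer(fromSampleBuffer buffer: CMSampleBuffer,
                                   withTimeOffset offset: CMTime,
                                   duration: CMTime?) -> CMSampleBuffer? {
        var count: CMItemCount = 0
        CMSampleBufferGetSampleTimingInfoArray(buffer, entryCount: 0,
                                               arrayToFill: nil, entriesNeededOut: &count)
        var timing = [CMSampleTimingInfo](repeating: CMSampleTimingInfo(), count: count)
        CMSampleBufferGetSampleTimingInfoArray(buffer, entryCount: count,
                                               arrayToFill: &timing, entriesNeededOut: &count)

        for i in 0..<count {
            timing[i].presentationTimeStamp = timing[i].presentationTimeStamp + offset
            if timing[i].decodeTimeStamp.isValid {
                timing[i].decodeTimeStamp = timing[i].decodeTimeStamp + offset
            }
            if let duration = duration {
                timing[i].duration = duration
            }
        }

        var result: CMSampleBuffer?
        CMSampleBufferCreateCopyWithNewTiming(allocator: kCFAllocatorDefault,
                                              sampleBuffer: buffer,
                                              sampleTimingEntryCount: count,
                                              sampleTimingArray: &timing,
                                              sampleBufferOut: &result)
        return result
    }
}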

I dug into this a bit deeper - and while I could just update my answer, I'd rather open this tangent in a new area to keep the ideas separate. Apple states that we can use an AVVideoComposition: "To use the created video composition for playback, create an AVPlayerItem object from the same asset used as the composition’s source, then assign the composition to the player item’s videoComposition property. To export the composition to a new movie file, create an AVAssetExportSession object from the same source asset, then assign the composition to the export session’s videoComposition property."

https://developer.apple.com/documentation/avfoundation/avasynchronousciimagefilteringrequest

So, you could try using an AVPlayer with the original URL and then apply your filter:

let asset = AVAsset(url: originalURL)
let filter = CIFilter(name: "CIGaussianBlur")!
let composition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in

    // Clamp to avoid blurring transparent pixels at the image edges
    let source = request.sourceImage.clampedToExtent()
    filter.setValue(source, forKey: kCIInputImageKey)

    // Vary filter parameters based on video timing
    let seconds = CMTimeGetSeconds(request.compositionTime)
    filter.setValue(seconds * 10.0, forKey: kCIInputRadiusKey)

    // Crop the blurred output to the bounds of the original image
    let output = filter.outputImage!.cropped(to: request.sourceImage.extent)

    // Provide the filter output to the composition
    request.finish(with: output, context: nil)
})

let item = AVPlayerItem(asset: asset)
item.videoComposition = composition
let player = AVPlayer(playerItem: item)

I'm sure you know what to do from here. This may allow you to do "real-time" filtering. One potential problem I see is that it runs into the same issue as your original question, in that it still takes a certain amount of time to process each frame and could cause a delay between audio and video. However, this may not happen. If you do get it working, once the user has selected their filter you can use AVAssetExportSession to export that specific videoComposition.
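
Something along these lines for the export step (a sketch; asset and composition come from the snippet above, and exportURL is just a placeholder in the temporary directory):

// Export the filtered result once the user commits to a filter.
let exportURL = FileManager.default.temporaryDirectory.appendingPathComponent("filtered.mov")
if let exporter = AVAssetExportSession(asset: asset,
                                       presetName: AVAssetExportPresetHighestQuality) {
    exporter.videoComposition = composition
    exporter.outputURL = exportURL
    exporter.outputFileType = .mov
    exporter.exportAsynchronously {
        switch exporter.status {
        case .completed:
            print("Export finished: \(exportURL)")
        default:
            print("Export failed: \(String(describing: exporter.error))")
        }
    }
}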

More here if you need help!

I customized BBMetalVideoSource as follows, and then it worked:

  1. Create a delegate in BBMetalVideoSource to get the current time of the audio player we want to sync with (see the sketch after the code below)
  2. In private func processAsset(progress:, completion:), I replace the block if useVideoRate { //... } with:

    if useVideoRate {
        if let playerTime = delegate.getAudioPlayerCurrentTime() {
            // How far ahead of the audio player this video frame is (in seconds)
            let diff = CMTimeGetSeconds(sampleFrameTime) - playerTime
            if diff > 0.0 {
                sleepTime = diff
                if sleepTime > 1.0 {
                    // Ignore implausibly large gaps (e.g. right after seeking)
                    sleepTime = 0.0
                }
                // Hold this frame back until the audio catches up
                usleep(UInt32(1000000 * sleepTime))
            } else {
                sleepTime = 0
            }
        }
    }
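
For reference, the delegate from step 1 could look something like this (a sketch - the protocol name and CameraViewController are placeholders, since BBMetalVideoSource does not ship with such a delegate; audioPlayer is the AVPlayer from the question):

import AVFoundation

// Assumed delegate added to BBMetalVideoSource (step 1); names are illustrative.
protocol BBMetalVideoSourceAudioSyncDelegate: AnyObject {
    // Returns the audio player's current time in seconds, or nil if playback hasn't started.
    func getAudioPlayerCurrentTime() -> Double?
}

// On the view controller that owns the AVPlayer:
extension CameraViewController: BBMetalVideoSourceAudioSyncDelegate {
    func getAudioPlayerCurrentTime() -> Double? {
        guard audioPlayer.rate != 0 else { return nil }
        return CMTimeGetSeconds(audioPlayer.currentTime())
    }
}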
    

This code helped us solve two problems: 1. having no audio when previewing the video with effects, and 2. keeping the audio in sync with the video.