AVAssetWriter to Multiple Files

I have an AVCaptureSession composed of an AVCaptureScreenInput and an AVCaptureDeviceInput. Both are connected as data output delegates, and I'm using an AVAssetWriter to write to a single MP4 file.

Everything works fine when writing to a single MP4 file. When I try to switch between multiple AVAssetWriters so that a new file is saved every 5 seconds, concatenating all the files together with FFMPEG leaves slight audio dropouts at the joins.

Example of the joined video (note the small audio dropout every 5 seconds):

https://youtu.be/lrqD5dcbUXg

After a lot of investigation, I determined that this is probably because the audio and video samples are split at / don't start from the same timestamp.

I now believe my algorithm should work, but I don't know how to split the audio CMSampleBuffer. It looks like CMSampleBufferCopySampleBufferForRange could be useful, but I'm not sure how to split at a given time (I want one buffer containing all the samples before that time and another containing all the samples after it).

func getBufferUpToTime(sample: CMSampleBuffer, to: CMTime) -> CMSampleBuffer {
  let numSamples = CMSampleBufferGetNumSamples(sample)
  var sout: CMSampleBuffer?

  let endSampleIndex = // how do I get this? (somewhere in 0..<numSamples)

  // Copy only the samples before `to` into a new buffer.
  CMSampleBufferCopySampleBufferForRange(nil, sample, CFRangeMake(0, endSampleIndex), &sout)

  return sout!
}
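
(A rough sketch of one way endSampleIndex might be derived, assuming uniformly timed audio samples such as linear PCM, so that the index at a given time follows from the buffer's start timestamp and the stream's sample rate. sampleIndex(in:at:) is a hypothetical helper, not a known API:)

import CoreMedia

// Hypothetical helper: index of the sample at time `to`, assuming the
// buffer's samples are uniformly spaced at the stream's sample rate.
func sampleIndex(in sample: CMSampleBuffer, at to: CMTime) -> CMItemCount? {
  guard let format = CMSampleBufferGetFormatDescription(sample),
        let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(format)?.pointee
  else { return nil }

  let start = CMSampleBufferGetPresentationTimeStamp(sample)
  let elapsed = CMTimeGetSeconds(CMTimeSubtract(to, start))
  guard elapsed >= 0 else { return nil }

  // Clamp to the buffer's sample count in case `to` lies past its end.
  return min(CMItemCount(elapsed * asbd.mSampleRate),
             CMSampleBufferGetNumSamples(sample))
}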

If you're using AVCaptureScreenInput, then you're not on iOS, right? So I was going to write about splitting sample buffers, but then I remembered that on OS X, AVCaptureFileOutput.startRecording (not AVAssetWriter) has this tantalizing comment:

On Mac OS X, if this method is called within the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, the first samples written to the new file are guaranteed to be those contained in the sample buffer passed to that method.

Not dropping samples sounds promising, so if you can live with mov files instead of mp4, you should be able to get gapless audio by using AVCaptureMovieFileOutput, implementing AVCaptureFileOutputDelegate, and calling startRecording from within didOutputSampleBuffer, like this:

import Cocoa
import AVFoundation

@NSApplicationMain
class AppDelegate: NSObject, NSApplicationDelegate {

    @IBOutlet weak var window: NSWindow!

    let session = AVCaptureSession()
    let movieFileOutput = AVCaptureMovieFileOutput()

    var movieChunkNumber = 0
    var chunkDuration = kCMTimeZero // TODO: synchronize access? probably fine.

    // Begin recording the next chunk file. Calling startRecording while a
    // recording is in progress switches to the new file without stopping the session.
    func startRecordingChunkFile() {
        let filename = String(format: "capture-%.2i.mov", movieChunkNumber)
        let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!.appendingPathComponent(filename)
        movieFileOutput.startRecording(to: url, recordingDelegate: self)

        movieChunkNumber += 1
    }

    func applicationDidFinishLaunching(_ aNotification: Notification) {
        let displayInput = AVCaptureScreenInput(displayID: CGMainDisplayID())

        let micInput = try! AVCaptureDeviceInput(device: AVCaptureDevice.default(for: .audio)!)

        session.addInput(displayInput)
        session.addInput(micInput)

        movieFileOutput.delegate = self

        session.addOutput(movieFileOutput)

        session.startRunning()

        self.startRecordingChunkFile()
    }
}

extension AppDelegate: AVCaptureFileOutputRecordingDelegate {
    func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
        // NSLog("error \(error)")
    }
}

extension AppDelegate: AVCaptureFileOutputDelegate {
    // Opt in to sample-accurate recording starts (macOS only), so a new chunk
    // can begin exactly at the buffer passed to didOutputSampleBuffer.
    func fileOutputShouldProvideSampleAccurateRecordingStart(_ output: AVCaptureFileOutput) -> Bool {
        return true
    }

    func fileOutput(_ output: AVCaptureFileOutput, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        if let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) {
            if CMFormatDescriptionGetMediaType(formatDescription) == kCMMediaType_Audio {
                // Tally the duration of audio received so far in this chunk.
                let duration = CMSampleBufferGetDuration(sampleBuffer)
                chunkDuration = CMTimeAdd(chunkDuration, duration)

                // After 5 seconds, switch to the next file. Because this is
                // called from the delegate method, the first samples written
                // to the new file are guaranteed to include this buffer.
                if CMTimeGetSeconds(chunkDuration) >= 5 {
                    startRecordingChunkFile()
                    chunkDuration = kCMTimeZero
                }
            }
        }
    }
}
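
If you still need a single joined file afterwards, the chunks could presumably also be stitched together without FFMPEG. Here's a rough sketch using AVMutableComposition with a passthrough export; concatenate(chunkURLs:to:) is a hypothetical helper, and chunkURLs is assumed to be the ordered list of capture-NN.mov files written above:

import AVFoundation

// Rough sketch: append each chunk end-to-end into a composition, then
// export without re-encoding.
func concatenate(chunkURLs: [URL], to outputURL: URL) throws {
    let composition = AVMutableComposition()
    var cursor = kCMTimeZero

    for url in chunkURLs {
        let asset = AVURLAsset(url: url)
        let range = CMTimeRange(start: kCMTimeZero, duration: asset.duration)
        try composition.insertTimeRange(range, of: asset, at: cursor)
        cursor = CMTimeAdd(cursor, asset.duration)
    }

    // Passthrough avoids re-encoding, so the output should match the chunks.
    let export = AVAssetExportSession(asset: composition,
                                      presetName: AVAssetExportPresetPassthrough)
    export?.outputURL = outputURL
    export?.outputFileType = .mov
    export?.exportAsynchronously {
        // Inspect export?.status / export?.error here.
    }
}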