Can you play audio directly from a CMSampleBuffer?

I'm capturing microphone audio during an ARSession, and I'd like to pass it to another VC and play it back after the capture has happened, while the app is still running (and the audio is still in memory).

The audio is currently captured as individual CMSampleBuffers, accessed via the didOutputAudioSampleBuffer ARSessionDelegate method.
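For context, this is roughly how the buffers arrive (a minimal sketch; collecting them into an array is just how I'm holding on to them, the ARKit-specific parts are providesAudioData and the delegate callback):

import UIKit
import ARKit

class CaptureViewController: UIViewController, ARSessionDelegate {

    let session = ARSession()
    var capturedAudioBuffers: [CMSampleBuffer] = []   // kept in memory for later playback

    override func viewDidLoad() {
        super.viewDidLoad()
        session.delegate = self

        let configuration = ARWorldTrackingConfiguration()
        configuration.providesAudioData = true         // enables the audio callback below
        session.run(configuration)
    }

    // ARSessionDelegate: called once per captured audio sample buffer
    func session(_ session: ARSession, didOutputAudioSampleBuffer audioSampleBuffer: CMSampleBuffer) {
        capturedAudioBuffers.append(audioSampleBuffer)
    }
}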

I've worked with audio files and AVAudioPlayer before, but I'm new to CMSampleBuffer.

Is there a way to play the raw buffers back as-is? If so, which classes enable this? Or do they need to be rendered/converted into some other format or file first?

This is the format description of the data in the buffers:

mediaType:'soun' 
    mediaSubType:'lpcm' 
    mediaSpecific: {
        ASBD: {
            mSampleRate: 44100.000000 
            mFormatID: 'lpcm' 
            mFormatFlags: 0xc 
            mBytesPerPacket: 2 
            mFramesPerPacket: 1 
            mBytesPerFrame: 2 
            mChannelsPerFrame: 1 
            mBitsPerChannel: 16     } 
        cookie: {(null)} 
        ACL: {Mono}
        FormatList Array: {
            Index: 0 
            ChannelLayoutTag: 0x640001 
            ASBD: {
            mSampleRate: 44100.000000 
            mFormatID: 'lpcm' 
            mFormatFlags: 0xc 
            mBytesPerPacket: 2 
            mFramesPerPacket: 1 
            mBytesPerFrame: 2 
            mChannelsPerFrame: 1 
            mBitsPerChannel: 16     }} 
    } 
    extensions: {(null)}
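(For reference, that dump is just the printed CMFormatDescription of one of the buffers, along these lines; logFormat is a throwaway helper of mine:)

import CoreMedia

// Prints a dump like the one above for a captured audio buffer.
func logFormat(of sampleBuffer: CMSampleBuffer) {
    if let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) {
        print(formatDescription)
    }
}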

Any guidance is appreciated, since Apple's documentation on this isn't clear, and related questions on SO deal more with streaming live audio than with capturing it and playing it back later.

It seems the answer is no: you can't simply hold on to the raw buffered audio and play it; it needs to be converted into something more durable first.

It looks like the main approach is to use AVAssetWriter to save the buffer data as an audio file, which can then be played back later with AVAudioPlayer.
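A rough sketch of that route, assuming AAC output settings matching the 44.1 kHz mono input and an outputURL you choose yourself (the AudioSampleWriter name and the settings are mine, not anything ARKit or AVFoundation prescribes):

import AVFoundation

// Minimal sketch: appends captured CMSampleBuffers to an .m4a file so it can
// be played later with AVAudioPlayer. Error handling is kept to a minimum.
final class AudioSampleWriter {

    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private var sessionStarted = false

    init(outputURL: URL) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .m4a)

        // Encode the incoming 44.1 kHz mono LPCM to AAC.
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: 44_100,
            AVNumberOfChannelsKey: 1
        ]
        input = AVAssetWriterInput(mediaType: .audio, outputSettings: settings)
        input.expectsMediaDataInRealTime = true
        writer.add(input)

        guard writer.startWriting() else {
            throw writer.error ?? CocoaError(.fileWriteUnknown)
        }
    }

    // Call from session(_:didOutputAudioSampleBuffer:) for every buffer.
    func append(_ sampleBuffer: CMSampleBuffer) {
        if !sessionStarted {
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            sessionStarted = true
        }
        if input.isReadyForMoreMediaData {
            _ = input.append(sampleBuffer)
        }
    }

    // Finishes the file; the completion fires once it is safe to play.
    func finish(completion: @escaping () -> Void) {
        input.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }
}

Once finishWriting completes, the file at outputURL can be handed to AVAudioPlayer(contentsOf:) as usual.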

It's possible to pipe the microphone into the audio engine while recording, with minimal latency:

let audioEngine = AVAudioEngine()
...
self.audioEngine.connect(self.audioEngine.inputNode,
    to: self.audioEngine.mainMixerNode, format: nil)
self.audioEngine.start()

If working with the sample buffers themselves matters, then roughly speaking it can be done by converting them into PCM buffers:

import AVFoundation

extension AVAudioPCMBuffer {

    /// Converts a 16-bit linear PCM CMSampleBuffer (as delivered by ARKit)
    /// into a mono Float32 AVAudioPCMBuffer that AVAudioPlayerNode can schedule.
    static func create(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {

        guard let description = CMSampleBufferGetFormatDescription(sampleBuffer),
              let asbd = description.audioStreamBasicDescription,
              let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer)
        else { return nil }

        let sampleRate = asbd.mSampleRate
        let channelsPerFrame = Int(asbd.mChannelsPerFrame)
        let samplesCount = CMSampleBufferGetNumSamples(sampleBuffer)

        guard samplesCount > 0,
              let audioFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                              sampleRate: sampleRate,
                                              channels: 1,
                                              interleaved: false),
              let buffer = AVAudioPCMBuffer(pcmFormat: audioFormat,
                                            frameCapacity: AVAudioFrameCount(samplesCount))
        else { return nil }

        buffer.frameLength = buffer.frameCapacity

        // Get a pointer to the raw Int16 samples inside the block buffer.
        var dataPointer: UnsafeMutablePointer<Int8>?
        let status = CMBlockBufferGetDataPointer(blockBuffer,
                                                 atOffset: 0,
                                                 lengthAtOffsetOut: nil,
                                                 totalLengthOut: nil,
                                                 dataPointerOut: &dataPointer)

        guard status == kCMBlockBufferNoErr,
              var channel = buffer.floatChannelData?[0],
              let data = dataPointer else { return nil }

        var data16 = UnsafeRawPointer(data).assumingMemoryBound(to: Int16.self)

        // Convert each Int16 sample to Float32 in [-1, 1], taking only the
        // first channel if the source happens to be interleaved multi-channel.
        for _ in 0..<samplesCount {
            channel.pointee = Float32(data16.pointee) / Float32(Int16.max)
            channel += 1
            data16 += channelsPerFrame
        }

        return buffer
    }
}


class BufferPlayer {

    let audioEngine = AVAudioEngine()
    let player = AVAudioPlayerNode()

    deinit {
        self.audioEngine.stop()
    }

    init(withBuffer: CMSampleBuffer) {
        self.audioEngine.attach(self.player)

        // Wire the player into the mixer using the format of the first
        // converted buffer; fall back to nil (the mixer decides) if conversion fails.
        let format = AVAudioPCMBuffer.create(from: withBuffer)?.format
        self.audioEngine.connect(self.player,
                                 to: self.audioEngine.mainMixerNode,
                                 format: format)

        _ = try? self.audioEngine.start()
    }

    // Converts the sample buffer to PCM and queues it on the player node.
    func playEnqueue(buffer: CMSampleBuffer) {
        guard let bufferPCM = AVAudioPCMBuffer.create(from: buffer) else { return }

        self.player.scheduleBuffer(bufferPCM, completionHandler: nil)
        if !self.player.isPlaying { self.player.play() }
    }
}
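And playback in the other VC could look something like this, assuming the buffers were collected into an array as they arrived (capturedAudioBuffers is just an example name):

// Assumes capturedAudioBuffers: [CMSampleBuffer] was filled during the ARSession.
if let first = capturedAudioBuffers.first {
    let bufferPlayer = BufferPlayer(withBuffer: first)
    for sampleBuffer in capturedAudioBuffers {
        bufferPlayer.playEnqueue(buffer: sampleBuffer)
    }
}

Note that bufferPlayer needs to be kept alive (for example as a property on the VC), since its deinit stops the engine and playback with it.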