Connecting AVAudioSourceNode to AVAudioSinkNode does not work

Context

I am writing a signal interpreter using AVAudioEngine that will analyze microphone input. During development I want to use a default input buffer so I don't have to make noise into the microphone to test my changes. I am developing with Catalyst.

Problem

I am using AVAudioSinkNode to get the sound buffer (the performance is allegedly better than using .installTap). I am using (a subclass of) AVAudioSourceNode to generate a sine wave. When I connect the two together, I expect the sink node's callback to be called, but it is not. The source node's render block is not called either.

let engine = AVAudioEngine()

let output = engine.outputNode
let outputFormat = output.inputFormat(forBus: 0)
let sampleRate = Float(outputFormat.sampleRate)

let sineNode440 = AVSineWaveSourceNode(
    frequency: 440,
    amplitude: 1,
    sampleRate: sampleRate
)

let sink = AVAudioSinkNode { _, frameCount, audioBufferList -> OSStatus in
    print("[SINK] + \(frameCount) \(Date().timeIntervalSince1970)")
    return noErr
}

engine.attach(sineNode440)
engine.attach(sink)
engine.connect(sineNode440, to: sink, format: nil)

try engine.start()

Additional tests

If I connect engine.inputNode to the sink (i.e. engine.connect(engine.inputNode, to: sink, format: nil)), the sink callback is called as expected.

When I connect sineNode440 to engine.outputNode, I can hear the sound and the render block is called as expected. So both the source and the sink work individually when connected to the device input/output, but they do not work together.
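For reference, the two working configurations described above look like this (a sketch reusing the `engine`, `sineNode440`, and `sink` declarations from the snippet earlier in the question; each configuration was tested separately, not both at once):

```swift
import AVFoundation

// Configuration 1: hardware input -> sink. The sink callback fires.
engine.attach(sink)
engine.connect(engine.inputNode, to: sink, format: nil)

// Configuration 2: source -> hardware output. The render block fires
// and the 440 Hz tone is audible.
engine.attach(sineNode440)
engine.connect(sineNode440, to: engine.outputNode, format: nil)

try engine.start()
```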

AVSineWaveSourceNode

Not essential to the question, but related: AVSineWaveSourceNode is based on Apple sample code. This node produces the correct sound when connected to engine.outputNode.
class AVSineWaveSourceNode: AVAudioSourceNode {

    /// We need this separate class to be able to inject the state in the render block.
    class State {
        let amplitude: Float
        let phaseIncrement: Float
        var phase: Float = 0

        init(frequency: Float, amplitude: Float, sampleRate: Float) {
            self.amplitude = amplitude
            phaseIncrement = (2 * .pi / sampleRate) * frequency
        }
    }

    let state: State

    init(frequency: Float, amplitude: Float, sampleRate: Float) {
        let state = State(
            frequency: frequency,
            amplitude: amplitude,
            sampleRate: sampleRate
        )
        self.state = state

        let format = AVAudioFormat(standardFormatWithSampleRate: Double(sampleRate), channels: 1)!

        super.init(format: format, renderBlock: { isSilence, _, frameCount, audioBufferList -> OSStatus in
            print("[SINE GENERATION \(frequency) - \(frameCount)]")
            let tau = 2 * Float.pi
            let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
            for frame in 0..<Int(frameCount) {
                // Get signal value for this frame at time.
                let value = sin(state.phase) * state.amplitude
                // Advance the phase for the next frame.
                state.phase += state.phaseIncrement
                if state.phase >= tau {
                    state.phase -= tau
                }
                if state.phase < 0.0 {
                    state.phase += tau
                }
                // Set the same value on all channels (due to the inputFormat we have only 1 channel though).
                for buffer in ablPointer {
                    let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
                    buf[frame] = value
                }
            }

            return noErr
        })

        for i in 0..<self.numberOfInputs {
            print("[SINEWAVE \(frequency)] BUS \(i) input format: \(self.inputFormat(forBus: i))")
        }

        for i in 0..<self.numberOfOutputs {
            print("[SINEWAVE \(frequency)] BUS \(i) output format: \(self.outputFormat(forBus: i))")
        }
    }
}

The outputNode drives the audio processing graph when AVAudioEngine is configured normally ("online"). The outputNode pulls audio from its input node, which pulls audio from its input nodes, and so on. When you connect sineNode and sink to each other without connecting either one to outputNode, nothing is attached to sink's output bus or to outputNode's input bus, so when the hardware asks outputNode for audio, it has nowhere to get it from.
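A consequence of this pull model is that a node's callbacks only fire when the node sits on a chain that ends at a driven output. As an illustration (a sketch of my own, not from the original answer), the .installTap approach mentioned in the question works precisely because the tapped node stays connected to outputNode:

```swift
import AVFoundation

// Sketch: keep sineNode440 connected to outputNode so the hardware keeps
// pulling the graph, and observe the rendered buffers with a tap instead
// of a sink. This is the approach the question wanted to avoid for
// performance reasons, but it shows why the pull must reach outputNode.
engine.attach(sineNode440)
engine.connect(sineNode440, to: engine.outputNode, format: nil)

sineNode440.installTap(onBus: 0, bufferSize: 1024, format: nil) { buffer, time in
    print("[TAP] \(buffer.frameLength) frames at sample time \(time.sampleTime)")
}

try engine.start()
```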

If I understand correctly, I think you can do what you want by getting rid of sink, connecting sineNode to outputNode, and running AVAudioEngine in manual rendering mode. In manual rendering mode you pass a manual render block to receive audio (similar to AVAudioSinkNode) and drive the graph manually by calling renderOffline(_:to:).
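A minimal sketch of that setup might look like the following (hedged: the frame count and format are placeholder values, error handling is omitted, and it reuses the AVSineWaveSourceNode subclass from the question):

```swift
import AVFoundation

let engine = AVAudioEngine()
let sampleRate: Double = 44100
let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)!

// Reuses the AVSineWaveSourceNode subclass from the question.
let sineNode440 = AVSineWaveSourceNode(frequency: 440, amplitude: 1, sampleRate: Float(sampleRate))
engine.attach(sineNode440)
engine.connect(sineNode440, to: engine.outputNode, format: format)

// Manual rendering mode must be enabled while the engine is stopped.
let maxFrames: AVAudioFrameCount = 4096
try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: maxFrames)
try engine.start()

// Pull the graph yourself instead of letting the hardware drive it.
let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                              frameCapacity: engine.manualRenderingMaximumFrameCount)!
let status = try engine.renderOffline(maxFrames, to: buffer)
if status == .success {
    // buffer.floatChannelData now contains the rendered sine samples.
    print("Rendered \(buffer.frameLength) frames")
}
```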