AudioKit - How to get Real Time floatChannelData from Microphone?

I'm new to AudioKit, and I'm trying to do some real-time digital signal processing on the input audio from the microphone.

I know the data I want is in AKAudioFile's floatChannelData, but what if I want to get it in real time? I'm currently using AKMicrophone, AKFrequencyTracker, AKNodeOutputPlot, and AKBooster, and I'm plotting the tracker's amplitude data. However, that data is not the same as the audio signal (it's RMS, as you know). Is there any way to get the signal's float data from the microphone? Or even from the AKNodeOutputPlot? I only need read access.

AKSettings.audioInputEnabled = true
mic = AKMicrophone()
plot = AKNodeOutputPlot(mic, frame: audioInputPlot.bounds)
tracker = AKFrequencyTracker(mic)
silence = AKBooster(tracker, gain: 0)
AudioKit.output = silence
try AudioKit.start()

As recommended by the framework's creator here:

AKNodeOutputPlot works, it's one short file. You're basically just tapping the node and grabbing the data.

How would this work in my viewController if I have a plot instance (AKNodeOutputPlot) and a microphone (AKMicrophone), and want to output those values to a label?

Tap whichever node you want to get data from. I used AKNodeOutputPlot in the quote above because it's pretty simple: it just uses that data as input to a plot, but you can grab the data and do whatever you like with it. In this code (from AKNodeOutputPlot):

internal func setupNode(_ input: AKNode?) {
    if !isConnected {
        input?.avAudioNode.installTap(
            onBus: 0,
            bufferSize: bufferSize,
            format: nil) { [weak self] (buffer, _) in

                guard let strongSelf = self else {
                    AKLog("Unable to create strong reference to self")
                    return
                }
                // Make sure the buffer is full, then locate its most recent samples
                buffer.frameLength = strongSelf.bufferSize
                let offset = Int(buffer.frameCapacity - buffer.frameLength)
                if let tail = buffer.floatChannelData?[0] {
                    strongSelf.updateBuffer(&tail[offset], withBufferSize: strongSelf.bufferSize)
                }
        }
    }
    isConnected = true
}

You get the buffer data in real time. Here we just send it off to "updateBuffer", where it gets plotted, but instead of plotting you would do something else with it.
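For instance, here is a minimal sketch of that "something else" (the buffer size of 1024 and the peak computation are my own illustrative choices, assuming the same AVAudioEngine tap API used above): tap the microphone node directly and compute each buffer's peak amplitude instead of plotting it.

```swift
// Sketch only: taps the microphone's underlying AVAudioNode and computes
// the peak amplitude of each incoming buffer as it arrives.
mic?.avAudioNode.installTap(onBus: 0, bufferSize: 1024, format: nil) { buffer, _ in
    guard let channelData = buffer.floatChannelData?[0] else { return }
    let samples = UnsafeBufferPointer(start: channelData,
                                      count: Int(buffer.frameLength))
    // Replace this line with your own DSP
    let peak = samples.map { abs($0) }.max() ?? 0
    print("peak amplitude:", peak)
}
```

Note that the closure runs on a real-time audio thread, so keep the work inside it light and hop to another queue for anything heavy.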

To complete Aurelius Prochazka's answer:

To record the audio flowing through a node, you need to attach a tap to it. A tap is just a closure that gets called every time a buffer is available.

Here is some sample code you can reuse in your own class:

let bufferSize: UInt32 = 4096 // I choose a buffer size of 4096

var mic = AKMicrophone()

func initMicrophone() {

  // Optional: set the sampling rate of the microphone
  AKSettings.sampleRate = 44100

  // Connect the microphone node to AudioKit's output with a gain of 0,
  // so the signal is processed but not heard
  AudioKit.output = AKBooster(mic, gain: 0)

  // Start the AudioKit engine
  try! AudioKit.start()

  // Attach a tap to the microphone node
  mic?.avAudioNode.installTap(
      onBus: 0, bufferSize: bufferSize, format: nil
  ) { [weak self] (buffer, _) in // self is now a weak reference, to prevent retain cycles

      // We try to create a strong reference to self, and name it strongSelf
      guard let strongSelf = self else {
        print("Recorder: Unable to create strong reference to self #1")
        return
      }

      // Make sure the buffer is full, then locate its most recent samples
      buffer.frameLength = strongSelf.bufferSize
      let offset = Int(buffer.frameCapacity - buffer.frameLength)
      if let tail = buffer.floatChannelData?[0] {
        // Convert the contents of the buffer to a Swift array
        let samples = Array(UnsafeBufferPointer(start: tail + offset,
                                                count: Int(strongSelf.bufferSize)))
        strongSelf.myFunctionHandlingData(samples)
      }
  }
}

func myFunctionHandlingData(_ data: [Float]) {
  // ...
}

Be careful to use a DispatchQueue or another synchronization mechanism if you need to share this data across threads. In my case, I use:

DispatchQueue.main.async { [weak self]  in
  guard let strongSelf = self else {
    print("Recorder: Unable to create strong reference to self #2")
    return
  }
  strongSelf.myFunctionHandlingData(samples)
}

so that my function runs on the main thread.
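As an illustration of what the handler might actually compute (this body is my own sketch, not part of the original answer), here is a plain-Swift version that turns one buffer of samples into an RMS level and expresses it in decibels:

```swift
import Foundation

// Hypothetical handler helpers: compute the RMS level of one buffer of
// samples and convert it to decibels. Pure Swift, no AudioKit needed.
func rmsLevel(of data: [Float]) -> Float {
    guard !data.isEmpty else { return 0 }
    let sumOfSquares = data.reduce(Float(0)) { $0 + $1 * $1 }
    return sqrt(sumOfSquares / Float(data.count))
}

func decibels(fromRMS rms: Float) -> Float {
    // Clamp to avoid log10(0) for silent buffers
    return 20 * log10(max(rms, 1e-9))
}
```

Inside `myFunctionHandlingData` you could then call `decibels(fromRMS: rmsLevel(of: data))` and push the result to a label. For reference, a full-scale sine wave has an RMS of about 0.707, i.e. roughly -3 dB.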