How to create a TargetDataLine using a binary array WebSocket?

I created a byte-array WebSocket that receives audio chunks in real time from the client's microphone (navigator.getUserMedia). After the WebSocket stops receiving new byte arrays for a while, I already record the stream to a WAV file on the server. The following code shows the current situation.

WebSocket

@OnMessage
public void message(byte[] b) throws IOException {
    // lazily create the buffer, then append every incoming chunk
    if (byteOutputStream == null) {
        byteOutputStream = new ByteArrayOutputStream();
    }
    byteOutputStream.write(b);
}

Thread that stores the WAV file

public void store(){
    byte[] b = byteOutputStream.toByteArray();
    try {
        // 44.1 kHz, 16-bit, mono, signed, big-endian PCM
        AudioFormat audioFormat = new AudioFormat(44100, 16, 1, true, true);
        ByteArrayInputStream byteStream = new ByteArrayInputStream(b);
        // the AudioInputStream length is in sample frames, not bytes
        AudioInputStream audioStream = new AudioInputStream(byteStream, audioFormat, b.length / audioFormat.getFrameSize());
        DateTime date = new DateTime();
        File file = new File("/tmp/" + date.getMillis() + ".wav");
        AudioSystem.write(audioStream, AudioFileFormat.Type.WAVE, file);
        audioStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

But my goal with this WebSocket is not to record a WAV file; it is to process the audio with the YIN pitch detection algorithm implemented in the TarsosDSP library. In other words, basically to run the PitchDetectorExample, but using the data from the WebSocket instead of the default audio device (the OS microphone). The following code shows how PitchDetectorExample currently initializes real-time audio processing using a microphone line provided by the OS.

private void setNewMixer(Mixer mixer) throws LineUnavailableException, UnsupportedAudioFileException {
    if (dispatcher != null) {
        dispatcher.stop();
    }
    currentMixer = mixer;
    float sampleRate = 44100;
    int bufferSize = 1024;
    int overlap = 0;
    final AudioFormat format = new AudioFormat(sampleRate, 16, 1, true, true);
    final DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, format);
    TargetDataLine line = (TargetDataLine) mixer.getLine(dataLineInfo);
    final int numberOfSamples = bufferSize;
    line.open(format, numberOfSamples);
    line.start();
    final AudioInputStream stream = new AudioInputStream(line);
    JVMAudioInputStream audioStream = new JVMAudioInputStream(stream);
    // create a new dispatcher
    dispatcher = new AudioDispatcher(audioStream, bufferSize, overlap);
    // add a processor
    dispatcher.addAudioProcessor(new PitchProcessor(algo, sampleRate, bufferSize, this));
    new Thread(dispatcher, "Audio dispatching").start();
}

Is there a way to treat the WebSocket data as a TargetDataLine, so that it can be hooked up to the AudioDispatcher and PitchProcessor? Somehow I need to send the byte arrays received from the WebSocket to the audio processing thread.

Other ideas on how to achieve this objective are welcome. Thanks!
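For what it's worth, one direction that might work is to keep the AudioDispatcher chain but replace the TargetDataLine with a pipe: the WebSocket writes its byte arrays into a PipedOutputStream, and the dispatcher reads them back as an AudioInputStream. Below is a minimal, untested sketch assuming the same 44.1 kHz / 16-bit / mono / big-endian format as above; WebSocketAudioBridge and feed are hypothetical names.

import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

import be.tarsos.dsp.AudioDispatcher;
import be.tarsos.dsp.io.jvm.JVMAudioInputStream;

public class WebSocketAudioBridge {

    private final PipedOutputStream socketSide = new PipedOutputStream();
    private final AudioDispatcher dispatcher;

    public WebSocketAudioBridge() throws IOException {
        AudioFormat format = new AudioFormat(44100, 16, 1, true, true);
        // the dispatcher thread blocks on this pipe until the WebSocket writes bytes
        PipedInputStream dispatcherSide = new PipedInputStream(socketSide, 64 * 1024);
        AudioInputStream audioStream =
                new AudioInputStream(dispatcherSide, format, AudioSystem.NOT_SPECIFIED);
        dispatcher = new AudioDispatcher(new JVMAudioInputStream(audioStream), 1024, 0);
        // add a PitchProcessor here exactly as in setNewMixer(), then:
        new Thread(dispatcher, "Audio dispatching").start();
    }

    // called from the WebSocket's @OnMessage method
    public void feed(byte[] b) throws IOException {
        socketSide.write(b);
    }
}

The pipe decouples the WebSocket thread (the writer) from the dispatcher thread (the reader); if the dispatcher falls behind, feed() will eventually block, so the pipe size may need tuning.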

I'm not sure you need the AudioDispatcher. If you know how the bytes are encoded (PCM, 16 bits, little-endian, mono?), you can convert them to floats on the fly and feed them to the pitch detector algorithm. In your WebSocket you could do something like this (and forget about the input streams and the AudioDispatcher):

int index;
byte[] buffer = new byte[2048];
float[] floatBuffer = new float[1024];
FastYin detector = new FastYin(44100, 1024);
// build the converter once, for 16-bit, little-endian, signed, mono PCM
AudioFloatConverter converter = AudioFloatConverter.getConverter(
        new TarsosDSPAudioFormat(44100, 16, 1, true, false));

public void message(byte[] b) {
    for (int i = 0; i < b.length; i++) {
        buffer[index] = b[i];
        index++;
        if (index == 2048) {
            // converts the byte buffer to floats in [-1.0, 1.0]
            converter.toFloatArray(buffer, floatBuffer);
            float pitch = detector.getPitch(floatBuffer).getPitch();
            // here you have your pitch info that you can use
            index = 0;
        }
    }
}

You do need to watch the number of bytes that have passed: since two bytes represent one float (with 16-bit PCM encoding), you need to start at an even byte. Byte order and sample rate are important as well.
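If I'm reading the TarsosDSP API right, getPitch actually returns a PitchDetectionResult rather than a bare float, so the loop above can also check how confident the detector is before trusting the value. A minimal sketch of consuming the result:

PitchDetectionResult result = detector.getPitch(floatBuffer);
if (result.isPitched()) {
    // pitch in Hz, plus the detector's confidence in that estimate
    System.out.printf("pitch: %.2f Hz (probability %.2f)%n",
            result.getPitch(), result.getProbability());
}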

Regards

Joren