How do I decode MP3 with js-mp3 and play in AudioContext?

A bug in Safari 15 sometimes causes AudioContext.decodeAudioData to fail for normal MP3 files (see "Safari 15 fails to decode audio data that previous versions decoded without problems"), so I'm trying a workaround: decode the files with the library https://github.com/soundbus-technologies/js-mp3, then create an AudioBuffer from that data and play it.

The problem is that js-mp3 returns a single ArrayBuffer of PCM data, while creating an AudioBuffer requires a separate array per channel, plus the sampleRate and the length in sample frames. This is what I have so far:


function concatTypedArrays(a, b) { // a, b TypedArray of same type
    var c = new (a.constructor)(a.length + b.length);
    c.set(a, 0);
    c.set(b, a.length);
    return c;
}

// responseData is an ArrayBuffer with the MP3 file...
let decoder = Mp3.newDecoder(responseData);
let pcmArrayBuffer = decoder.decode();

//Trying to read the frames to get the two channels. Maybe get it correctly from
//the pcmArrayBuffer instead?
    
decoder.source.pos = 0;
let left = new Float32Array(), right = new Float32Array();
console.log('Frame count: ' + decoder.frameStarts.length);
let i = 0;
let samplesDecoded = 0;
                    
while (true) {

    let result = decoder.readFrame();
    if (result.err) {
        break;
    } else {
        console.log('READ FRAME ' + (++i));
        samplesDecoded += 1152; // samples per frame for MPEG1 files
        left = concatTypedArrays(left, decoder.frame.v_vec[0]);
        right = concatTypedArrays(right, decoder.frame.v_vec[1]);
    }
}

let audioContext = new AudioContext();
let buffer = audioContext.createBuffer(2, samplesDecoded, decoder.sampleRate);
// Copy the accumulated per-channel samples into the buffer
buffer.copyToChannel(left, 0);
buffer.copyToChannel(right, 1);
let source = audioContext.createBufferSource();
source.buffer = buffer;
source.connect(audioContext.destination);
source.start(0);
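As a side note, the concatTypedArrays helper can be sanity-checked on its own; the values below are made up purely for illustration:

```javascript
// Same helper as in the snippet above, repeated so this block runs standalone.
function concatTypedArrays(a, b) { // a, b TypedArray of same type
    var c = new (a.constructor)(a.length + b.length);
    c.set(a, 0);
    c.set(b, a.length);
    return c;
}

// Concatenating two Float32Arrays preserves the element type and order.
const merged = concatTypedArrays(new Float32Array([1, 2]), new Float32Array([3, 4]));
console.log(merged); // Float32Array [ 1, 2, 3, 4 ]
```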

Now, this approach sort of works, in that I do hear sound, and I can tell it is the correct sound, but it is strangely distorted. The example sound file I'm trying to play is https://cardgames.io/mahjong/sounds/selecttile.mp3

Any idea what's going wrong here? Or how do I correctly convert the single PCM ArrayBuffer returned by the .decode() function into the format needed for proper playback?

The example linked by fdcpp above indicates that the ArrayBuffer returned by decoder.decode() can be written to a WAV file without any further modification. That means the data must be interleaved PCM data.

Therefore it should work when converting the data back to floating-point values. Additionally, the samples must be de-interleaved into the planar arrays that the Web Audio API expects:
const interleavedPcmData = new DataView(pcmArrayBuffer);
const numberOfChannels = decoder.frame.header.numberOfChannels();
const audioBuffer = new AudioBuffer({
    length: pcmArrayBuffer.byteLength / 2 / numberOfChannels,
    numberOfChannels,
    sampleRate: decoder.sampleRate
});
const planarChannelDatas = [];

for (let i = 0; i < numberOfChannels; i += 1) {
    planarChannelDatas.push(audioBuffer.getChannelData(i));
}

for (let i = 0; i < interleavedPcmData.byteLength; i += 2) {
    const channelNumber = i / 2 % numberOfChannels;
    const value = interleavedPcmData.getInt16(i, true);

    planarChannelDatas[channelNumber][Math.floor(i / 2 / numberOfChannels)]
        = value < 0
            ? value / 32768
            : value / 32767;
}