Split a stereo audio file into AudioNodes for each channel
How do I split a stereo audio file (I'm currently working with WAV, but I'd also like to know how to do this for MP3, if it's different) into left and right channels to feed into two separate fast Fourier transforms (FFTs) from the p5.sound.js library?
I've written out what I think I need to do in the code below, but I haven't been able to find any examples of anyone doing this via Google searches, and all of my layman's attempts have turned up nothing.
I'll share what I have below, but honestly, it isn't much. Everything in question happens in the setup function, where I've left notes:
// variables for the p5 sound object, the FFT, and playback state
var sound = null;
var fft = null;
var playing = false;
function preload(){
sound = loadSound('assets/leftRight.wav');
}
function setup(){
createCanvas(windowWidth, windowHeight);
background(0);
// I need to do something here to split the audio and return an AudioNode for just
// the left stereo channel. I have a feeling it's something like
// feeding sound.getBlob() to a FileReader(), doing some manipulation, then converting
// the FileReader() result to a Web Audio API source node and feeding that into
// fft.setInput() like justTheLeftChannel below, but I don't understand how to work
// with the JavaScript audio methods and createChannelSplitter(), and the attempts
// I've made have turned up nothing.
fft = new p5.FFT();
fft.setInput(justTheLeftChannel);
}
function draw(){
sound.pan(-1);
background(0);
push();
noFill();
stroke(255, 0, 0);
strokeWeight(2);
beginShape();
//calculate the waveform from the fft.
var wave = fft.waveform();
for (var i = 0; i < wave.length; i++){
//for each element of the waveform map it to screen
//coordinates and make a new vertex at the point.
var x = map(i, 0, wave.length, 0, width);
var y = map(wave[i], -1, 1, 0, height);
vertex(x, y);
}
endShape();
pop();
}
function mouseClicked(){
if (!playing){
sound.loop();
playing = true;
} else {
sound.stop();
playing = false;
}
}
Solution:
I'm no p5.js expert, but I've used it enough that I figured there had to be a way to do this without the whole blob/file-reading process. The docs aren't very helpful for complex processing, so I dug around a bit in the p5.Sound source code, and here's what I came up with:
// left channel
sound.setBuffer([sound.buffer.getChannelData(0)]);
// right channel
sound.setBuffer([sound.buffer.getChannelData(1)]);
Here's a working example: click the canvas to toggle between L / stereo / R audio playback and the matching FFT visuals.
Explanation:
p5.SoundFile has a setBuffer method that can be used to modify the audio content of a sound file object in place. The function signature specifies that it accepts an array of buffer objects, and if that array has only one item, it produces a mono source, which is already in the right format to feed to the FFT! So how do we produce a buffer containing only one channel's data?
Throughout the source code there are examples of individual channel manipulation via sound.buffer.getChannelData(). I was wary of accessing undocumented properties at first, but it turns out that since p5.Sound uses the Web Audio API under the hood, this buffer is really just a plain old Web Audio AudioBuffer, and the getChannelData method is well-documented.
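As an illustration of that shape requirement (this is a sketch, not part of p5; `splitChannels` is a made-up helper name, and it assumes `sound.buffer` behaves like a standard AudioBuffer):

```javascript
// Hypothetical helper, not part of p5: given any AudioBuffer-like
// object exposing numberOfChannels and getChannelData(), return one
// single-element buffer array per channel, which is exactly the shape
// that p5.SoundFile.setBuffer() treats as a mono source.
function splitChannels(audioBuffer) {
  var channels = [];
  for (var c = 0; c < audioBuffer.numberOfChannels; c++) {
    channels.push([audioBuffer.getChannelData(c)]);
  }
  return channels;
}
```

With a helper like this, `sound.setBuffer(splitChannels(sound.buffer)[0])` would select the left channel and `[1]` the right.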
The only downside to this approach is that setBuffer acts directly on the SoundFile, so I have to load the file again for each channel I want to split out, but I'm sure there's a workaround for that.
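One way to keep that duplication tidy, sketched under the assumption that two copies of the file are loaded in preload() (the helper name setupChannelFFT is made up, not part of p5):

```javascript
// Hypothetical helper, not part of p5: point an analyser at a single
// channel of a loaded sound file by overwriting the file's buffer with
// just that channel's data. `file` needs a loaded .buffer plus a
// .setBuffer() method, and `fft` needs .setInput(), as p5 provides.
function setupChannelFFT(file, fft, channelIndex) {
  // A one-element buffer array makes setBuffer() produce a mono source.
  file.setBuffer([file.buffer.getChannelData(channelIndex)]);
  fft.setInput(file);
}

// In a p5 sketch this might look like:
// function preload() {
//   leftSound  = loadSound('assets/leftRight.wav');
//   rightSound = loadSound('assets/leftRight.wav'); // second copy
// }
// function setup() {
//   leftFFT  = new p5.FFT();
//   rightFFT = new p5.FFT();
//   setupChannelFFT(leftSound,  leftFFT,  0); // left channel
//   setupChannelFFT(rightSound, rightFFT, 1); // right channel
// }
```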
Happy splitting!