Getting L/R data with AnalyserNode and ChannelSplitter
I've been stuck on this all day. I'm trying to split a getUserMedia source and visualize the left and right channels separately, but no matter what I do, each visualizer stays mono. The source I'm using is stereo (if I listen to it in Windows it is clearly stereo). The minimum needed to reproduce:
var audioContext = new AudioContext();

navigator.getUserMedia({audio: true}, analyse, function(e) {
    alert('Error getting audio');
    console.log(e);
});

function analyse(stream){
    // Keep a global reference so the stream isn't garbage-collected.
    window.stream = stream;

    var input = audioContext.createMediaStreamSource(stream),
        splitter = audioContext.createChannelSplitter(2),
        lAnalyser = audioContext.createAnalyser(),
        rAnalyser = audioContext.createAnalyser();

    input.connect(splitter);
    splitter.connect(lAnalyser, 0, 0);   // splitter output 0 (left)  -> left analyser
    splitter.connect(rAnalyser, 1, 0);   // splitter output 1 (right) -> right analyser

    var lArray = new Uint8Array(lAnalyser.frequencyBinCount),
        rArray = new Uint8Array(rAnalyser.frequencyBinCount);

    updateAnalyser();

    function updateAnalyser(){
        requestAnimationFrame(updateAnalyser);
        lAnalyser.getByteFrequencyData(lArray);
        rAnalyser.getByteFrequencyData(rArray);
    }
}
lArray and rArray end up identical, even if I mute the left or right channel. Am I doing something wrong? I also tried input->splitter->leftmerger/rightmerger->leftanalyser/rightanalyser, roughly as sketched below.
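For reference, a minimal sketch of that alternative routing (my own node names; it assumes mono mergers created with createChannelMerger(1)):

// input -> splitter -> per-channel merger -> per-channel analyser
var lMerger = audioContext.createChannelMerger(1),   // mono merger for the left channel
    rMerger = audioContext.createChannelMerger(1);   // mono merger for the right channel
input.connect(splitter);
splitter.connect(lMerger, 0, 0);   // splitter output 0 (left)  -> merger input 0
splitter.connect(rMerger, 1, 0);   // splitter output 1 (right) -> merger input 0
lMerger.connect(lAnalyser);
rMerger.connect(rAnalyser);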
http://www.smartjava.org/content/exploring-html5-web-audio-visualizing-sound
is the closest thing I could find, but it doesn't use user input and works from an audio buffer instead.
According to https://code.google.com/p/chromium/issues/detail?id=387737:
The behaviour is expected. In M37, we moved the audio processing from peer connection to getUserMedia, and the audio processing is turned on by default if you do not specify "echoCancellation: false" in the getUserMedia constraints. Since the audio processing only supports mono, we have to downsample the audio to mono before passing the data for processing.
If you want to avoid the downsampling, pass a constraint to getUserMedia, for example:
var constraints = {audio: { mandatory: { echoCancellation : false, googAudioMirroring: true } }};
getUserMedia(constraints, gotStream, gotStreamFailed);
Setting the constraints to {audio: { mandatory: { echoCancellation: false } } } stopped the input from being downmixed to mono.
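A minimal sketch of the fix, assuming the legacy Chrome mandatory-constraint syntax quoted above (only the getUserMedia call changes):

// Disable Chrome's mono audio processing so the captured stream stays stereo.
var constraints = { audio: { mandatory: { echoCancellation: false } } };
navigator.getUserMedia(constraints, analyse, function(e) {
    console.log('Error getting audio', e);
});
// analyse() can stay exactly as in the question; with the processing disabled,
// the MediaStreamAudioSourceNode receives the original stereo stream, so the
// splitter's two outputs (and therefore lArray/rArray) are no longer identical.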