Record audio, sync to loop, offset latency and export portion
I'm building a web app that lets users listen to an instrumental loop and then record vocals over it. This all works using Recorder.js, but there are a few problems:
- The recording has latency, which the user needs to set before pressing record.
- The exported loops aren't always exactly the same length, because the sample rate may not divide the desired duration evenly.
However, since then I've gone back to the drawing board and asked: what's best for the user? That gave me a new set of requirements:
- The backing loop plays continuously in the background
- Recording starts and stops whenever the user chooses
- The recording then plays back in sync with the loop (dead time between loops is automatically filled with blank audio)
- The user can drag an offset slider to correct small latency timing issues
- The user can select which portion of the recording to save (the same length as the original backing loop)
Here is a diagram:
My logic so far:
// backing loop
a.startTime = 5
a.duration = 10
a.loop = true
// recording
b.startTime = 22.5
b.duration = 15
b.loop = false
// fill blank space + loop
fill = a.duration - (b.duration % a.duration) // 5
c = b.buffers + (fill * blankBuffers)
c.startTime = (context.currentTime - a.startTime) % a.duration
c.duration = 20
c.loop = true
// user corrects timing offset
c.startTime = ((context.currentTime - a.startTime) % a.duration) - offset
// user chooses their favourite loop
? this is where I start to lose the plot!
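To make the padding and sync steps above concrete, here is a minimal sketch of the same arithmetic as plain functions (the function names are my own, not part of the app):

```javascript
// How much blank audio (in seconds) to append so the recording
// becomes a whole multiple of the backing loop's duration.
function fillDuration(loopDuration, recordingDuration) {
    var remainder = recordingDuration % loopDuration;
    return remainder === 0 ? 0 : loopDuration - remainder;
}

// Where playback of the padded clip should begin relative to the
// backing loop, including the user's latency offset correction.
function syncedOffset(currentTime, loopStartTime, loopDuration, offset) {
    return ((currentTime - loopStartTime) % loopDuration) - offset;
}

fillDuration(10, 15);          // 5, matching the pseudocode above
syncedOffset(22.5, 5, 10, 0);  // 7.5 seconds into the loop cycle
```

With the numbers from the pseudocode (a 10 s loop and a 15 s recording), the fill comes out as 5 s, giving the 20 s combined duration for `c`.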
Here is an example of truncating the buffers sent from Recorder.js:
// shorten the length of buffers
start = context.sampleRate * 2; // start at 2 seconds
end = context.sampleRate * 3; // end at 3 seconds
buffers.push(buffers.subarray(start, end));
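The key detail in the snippet above is that `subarray` works in samples, so seconds have to be converted via the sample rate. A self-contained sketch of that slicing on a plain `Float32Array` (no audio context needed; the 44100 Hz rate is just an assumption):

```javascript
// Cut a portion out of one channel of recorded samples.
// samples: Float32Array for one channel; start/end are in seconds.
function slicePortion(samples, sampleRate, startSeconds, endSeconds) {
    var start = Math.round(sampleRate * startSeconds);
    var end = Math.round(sampleRate * endSeconds);
    // subarray returns a view on the same memory; use .slice() if a copy is needed
    return samples.subarray(start, end);
}

var sampleRate = 44100;
var recorded = new Float32Array(sampleRate * 4); // 4 seconds of silence
var portion = slicePortion(recorded, sampleRate, 2, 3);
// portion.length === 44100, i.e. exactly one second of samples
```

Note that because `subarray` is a view, mutating `portion` also mutates `recorded`.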
And here's some more example code from a previous version I was working on:
https://github.com/mattdiamond/Recorderjs/issues/105
Any help with how to split the buffers for the exported loop, or with improving this logic, would be hugely appreciated!
Update
Using this example, I found out how to insert blank space into the recording:
http://mdn.github.io/audio-buffer/
I've now nearly managed to replicate the functionality I need, but something seems off: the audio comes back as white noise. Is there a calculation error somewhere?
I managed to solve this by writing the following logic:
diff = track2.startTime - track1.startTime
before = Math.round((diff % track1.duration) * 44100)
after = Math.round((track1.duration - ((diff + track2.duration) % track1.duration)) * 44100)
newAudio = [before data] + [recording data] + [after data]
In JavaScript code, it looks like this:
var i = 0,
    channel = 0,
    channelTotal = 2,
    num = 0,
    vocalsRecording = this.createBuffer(vocalsBuffers, channelTotal),
    diff = this.recorder.startTime - backingInstance.startTime + (offset / 1000),
    before = Math.round((diff % backingInstance.buffer.duration) * this.context.sampleRate),
    after = Math.round((backingInstance.buffer.duration - ((diff + vocalsRecording.duration) % backingInstance.buffer.duration)) * this.context.sampleRate),
    audioBuffer = this.context.createBuffer(channelTotal, before + vocalsBuffers[0].length + after, this.context.sampleRate),
    buffer = null;
// loop through the left and right audio channels
for (channel = 0; channel < channelTotal; channel += 1) {
    buffer = audioBuffer.getChannelData(channel);
    num = 0; // restart the write position for each channel
    // fill the empty space before the recording
    for (i = 0; i < before; i += 1) {
        buffer[num] = 0;
        num += 1;
    }
    // add the recording data
    for (i = 0; i < vocalsBuffers[channel].length; i += 1) {
        buffer[num] = vocalsBuffers[channel][i];
        num += 1;
    }
    // fill the empty space at the end of the recording
    for (i = 0; i < after; i += 1) {
        buffer[num] = 0;
        num += 1;
    }
}
// now return the new audio, which should be exactly the same length
return audioBuffer;
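As a sanity check on the padding maths above: with `before` and `after` computed this way, the total sample count should come out as an exact whole multiple of the backing loop's length. A quick standalone check using the numbers from the earlier pseudocode (10 s loop, recording starting at 22.5 s against a loop started at 5 s, lasting 15 s; 44100 Hz assumed):

```javascript
// Verify that before + recording + after is an exact multiple
// of the backing loop's length in samples.
var sampleRate = 44100;
var loopDuration = 10;   // backing loop length, seconds
var diff = 22.5 - 5;     // recording start relative to loop start
var recDuration = 15;    // recording length, seconds

var before = Math.round((diff % loopDuration) * sampleRate);
var after = Math.round((loopDuration - ((diff + recDuration) % loopDuration)) * sampleRate);
var total = before + Math.round(recDuration * sampleRate) + after;

console.log(total / (loopDuration * sampleRate)); // 3 — exactly three full loops
```

One caveat: when `sampleRate * duration` is not an integer, `Math.round` can leave the total a sample or two off a perfect multiple, which is the same length-drift issue mentioned at the top of the post.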
You can see the full working example here: