How can I create bezier like curves instead of lines between multiple points?
This is a somewhat complex task for me that I couldn't fully capture in the title, but the problem is the following:
- I created an audio visualizer that turns raw audio data into a Vec<f32>, where the elements of the vector are ordered by ascending frequency, starting at 0 Hz and going up to 20_000 Hz.
- But now I have to normalize the vector so that the frequencies are spaced logarithmically instead of linearly, which is closer to how human hearing works. This is the function that does it:
fn normalize(buffer: Vec<f32>, volume: f32) -> Vec<f32> {
    let mut output_buffer: Vec<f32> = vec![0.0; buffer.len()];
    let mut start_pos: usize = 0;
    let mut end_pos: usize = 0;

    for i in 0..buffer.len() {
        // FIRST HALF
        let offset: f32 = (buffer.len() as f32 / (i + 1) as f32).sqrt();
        if ((i as f32 * offset) as usize) < output_buffer.len() {
            // normalized position
            let pos: usize = (i as f32 * offset) as usize;

            // stores positions needed for filling
            start_pos = end_pos;
            end_pos = pos;

            let y = buffer[i];
            // prevent volume loss that could occur because of the 'crunching' of higher freqs
            // by only setting the value of the buffer if y is bigger
            if output_buffer[pos] < y {
                output_buffer[pos] = y;
            }
        }

        // SECOND HALF
        // linear filling of the values in between
        if end_pos - start_pos > 1 && (end_pos - 1) < output_buffer.len() {
            for s_p in (start_pos + 1)..end_pos {
                let percentage: f32 = (s_p - start_pos) as f32 / ((end_pos - 1) - start_pos) as f32;
                //(output_buffer[s_p] * (1.0 - percentage)) + (output_buffer[end_pos] * percentage);
                let mut y: f32 = 0.0;
                y += output_buffer[start_pos] * (1.0 - percentage);
                y += output_buffer[end_pos] * percentage;
                output_buffer[s_p] = y;
            }
        }
    }
    output_buffer
}
In the first half I redistribute the values of the buffer logarithmically, but with this approach a lot of values get skipped, especially in the low frequency range, and it then looks like this (unfilled):
|
| |
| |
| | | |
| | | |||
| | | | |||
+----+---+--+-+++
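To make the skipping concrete, here is a minimal sketch (it assumes a toy 16-element buffer, which is not from the actual code) that just prints the index mapping used above, showing which output positions are never hit:

fn main() {
    let len: usize = 16;
    for i in 0..len {
        // same mapping as in normalize(): pos = i * sqrt(len / (i + 1))
        let offset: f32 = (len as f32 / (i + 1) as f32).sqrt();
        let pos: usize = (i as f32 * offset) as usize;
        println!("input {:2} -> output {:2}", i, pos);
    }
}

With this size, output positions 1, 3 and 5 are never written, while inputs 6 and 7 both land on position 9, which is exactly the gap/crunch behaviour shown above.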
So in the second half of the loop I came up with a way to fill in those gaps. Now it looks like this (filled):
|
:|: |
::|:: :|:
:::|::: ::|:| |
::::|:::|::|:|||
|::::|:::|::|:|||
+----+---+--+-+++
For the visualization I reduced the number of bars; the actual implementation has roughly 10x as many 'bars', so the linearity is much more noticeable there.
So my final problem is that I want to create curves instead of straight lines between the points, which would represent the sound much better.
I need to be able to access the 'y' coordinate of the curve at any point.
Is there any way to do this, or am I going about this completely wrong?
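Just to illustrate what "accessing the y coordinate at any point" involves (this is not taken from audioviz), a quadratic Bézier between two bars with one control point can be evaluated at any parameter t in [0, 1] like this; the catch is that t is not the same thing as the x coordinate, so sampling the curve at a given x needs extra work, which is what makes a ready-made spline library attractive:

fn quadratic_bezier(p0: (f32, f32), p1: (f32, f32), p2: (f32, f32), t: f32) -> (f32, f32) {
    // standard quadratic Bézier: B(t) = (1-t)^2 * p0 + 2(1-t)t * p1 + t^2 * p2
    let u = 1.0 - t;
    (
        u * u * p0.0 + 2.0 * u * t * p1.0 + t * t * p2.0,
        u * u * p0.1 + 2.0 * u * t * p1.1 + t * t * p2.1,
    )
}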
I created audioviz, the library that does all of this processing and where this code is from, and audiolizer, an application that uses this library together with a GUI.
Splines did indeed solve my exact problem. This is my implementation, with added resolution control and volume normalization, which may not be necessary:
use splines::{Interpolation, Key, Spline};

fn normalize(buffer: Vec<f32>, volume: f32, resolution: f32) -> Vec<f32> {
    let mut output_buffer: Vec<f32> = vec![0.0; (buffer.len() as f32 * resolution) as usize];
    let mut pos_index: Vec<(usize, f32)> = Vec::new();

    for i in 0..buffer.len() {
        let offset: f32 = (output_buffer.len() as f32 / (i + 1) as f32 * resolution).sqrt();
        if ((i as f32 * offset) as usize) < output_buffer.len() {
            // space normalisation
            let pos: usize = (i as f32 * offset) as usize;

            // volume normalisation
            let volume_offset: f32 = (output_buffer.len() as f32 / (pos + 1) as f32).sqrt();
            let y = buffer[i] / volume_offset.powi(3) * 0.01;

            pos_index.push((pos, y));
        }
    }

    // Interpolation
    let mut points: Vec<Key<f32, f32>> = Vec::new();
    for val in pos_index.iter() {
        let x = val.0 as f32;
        let y = val.1 * volume;
        points.push(Key::new(x, y, Interpolation::Bezier(0.5)));
    }
    let spline = Spline::from_vec(points);

    for i in 0..output_buffer.len() {
        output_buffer[i] = match spline.sample(i as f32) {
            Some(v) => v,
            None => 0.0,
        };
    }
    output_buffer
}
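A rough usage sketch (the input data and parameters here are made up, and it assumes the splines crate is added as a dependency): the function is called on the frequency-ordered buffer and returns the resampled, interpolated bars:

fn main() {
    // made-up frequency-domain magnitudes, ordered from lowest to highest frequency
    let buffer: Vec<f32> = (0..512).map(|i| (i as f32 * 0.05).sin().abs()).collect();

    // volume scaling of 1.0, output buffer twice as long as the input
    let bars = normalize(buffer, 1.0, 2.0);
    println!("{} bars, first bar = {:.3}", bars.len(), bars[0]);
}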