Getting Multiple Audio Inputs in Processing
I'm currently writing a Processing sketch that needs to access multiple audio inputs, but Processing only allows access to the default line in. I have tried getting lines directly from the Java Mixer (accessed within Processing), but I still only get a signal from whichever line is currently set as the default on my machine.
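For reference, this is roughly the kind of thing I tried when going straight to the Java Sound API from a sketch (a minimal throwaway sketch; the mixer index 5 and the audio format below are just placeholders, not my real settings):

import javax.sound.sampled.*;

void setup() {
  // Print every mixer Java Sound can see, so I can find the index of each mic.
  Mixer.Info[] mixerInfos = AudioSystem.getMixerInfo();
  for (int i = 0; i < mixerInfos.length; i++) {
    println(i + ": " + mixerInfos[i].getName() + " -- " + mixerInfos[i].getDescription());
  }

  try {
    // Placeholder values: index 5 and this format are only examples.
    AudioFormat format = new AudioFormat(44100, 16, 1, true, false);
    Mixer mixer = AudioSystem.getMixer(mixerInfos[5]);
    TargetDataLine lineIn = (TargetDataLine) mixer.getLine(new DataLine.Info(TargetDataLine.class, format));
    lineIn.open(format);
    lineIn.start();

    byte[] raw = new byte[1024];
    int read = lineIn.read(raw, 0, raw.length); // raw PCM bytes from that one device
    println("read " + read + " bytes");
  } catch (LineUnavailableException e) {
    e.printStackTrace();
  }
}

All of the devices show up in the printed list, but whichever line I open, I still only seem to get signal from the system default.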
I have started looking into sending audio over OSC from SuperCollider, as suggested here. However, since I'm very new to SuperCollider and its documentation and support are more focused on generating sound than on accessing inputs, my next step will probably be to play around with Beads and JACK, as suggested here.
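For what it's worth, I assume the Processing end of the OSC route would look something like the sketch below, using the oscP5 library. The port number and the "/mic/<n>" address layout are pure guesses on my part, since I haven't written the SuperCollider side yet:

import oscP5.*;
import netP5.*;

OscP5 oscP5;
float[] levels = new float[9]; // one level per mic, sent from SuperCollider

void setup() {
  size(512, 400);
  oscP5 = new OscP5(this, 12000); // listen on port 12000 (placeholder)
}

void oscEvent(OscMessage msg) {
  // Assumed message layout: address "/mic/<index>" carrying a single float.
  if (msg.addrPattern().startsWith("/mic/")) {
    int index = Integer.parseInt(msg.addrPattern().substring(5));
    if (index >= 0 && index < levels.length) {
      levels[index] = msg.get(0).floatValue();
    }
  }
}

void draw() {
  background(0);
  float w = width / (float) levels.length;
  for (int i = 0; i < levels.length; i++) {
    rect(i * w, height - levels[i] * height, w - 2, levels[i] * height);
  }
}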
Does anyone have (1) other suggestions, or (2) concrete examples of getting multiple inputs into Processing from either SuperCollider or Beads/JACK?
Thanks in advance!
Edit: The sound will be used to drive custom music visualizations (think the iTunes visualizer, but much more song-specific). We can work with multiple mp3s; what I need now is the ability to get a float[] buffer from each microphone. Ideally there would be 9 different mics, but we'll settle for 4 if that's more feasible.
As for hardware, at this point we're just using microphones with XLR-to-USB cables. (We've considered a preamp, but so far this has been sufficient.) I'm currently on Windows, but I think we'll eventually switch to a Mac.
Here is my attempt using just Beads (it works fine for the laptop mic, since I set that one up first, but the headset buffer is all 0s; if I swap them and set up the headset first, the headset buffer is correct but the laptop buffer is all 0s):
import beads.*; // Beads library for Processing (JavaSoundAudioIO, AudioContext, UGen)

// Contexts, inputs, and sample buffers for the two devices
AudioContext headsetAudioCon, laptopAudioCon;
UGen headsetMic, laptopMic;
float[] headsetBuffer, laptopBuffer;

void setup() {
  size(512, 400);

  JavaSoundAudioIO headsetAudioIO = new JavaSoundAudioIO();
  JavaSoundAudioIO laptopAudioIO = new JavaSoundAudioIO();

  headsetAudioIO.selectMixer(5);
  headsetAudioCon = new AudioContext(headsetAudioIO);

  laptopAudioIO.selectMixer(4);
  laptopAudioCon = new AudioContext(laptopAudioIO);

  headsetMic = headsetAudioCon.getAudioInput();
  laptopMic = headsetAudioCon.getAudioInput();
} // setup()

void draw() {
  background(100, 0, 75);

  laptopMic.start();
  laptopMic.calculateBuffer();
  laptopBuffer = laptopMic.getOutBuffer(0);
  for (int j = 0; j < laptopBuffer.length - 1; j++) {
    println("laptop; " + j + ": " + laptopBuffer[j]);
    line(j, 200 + laptopBuffer[j] * 50, j + 1, 200 + laptopBuffer[j + 1] * 50);
  }
  laptopMic.kill();

  headsetMic.start();
  headsetMic.calculateBuffer();
  headsetBuffer = headsetMic.getOutBuffer(0);
  for (int j = 0; j < headsetBuffer.length - 1; j++) {
    println("headset; " + j + ": " + headsetBuffer[j]);
    line(j, 50 + headsetBuffer[j] * 50, j + 1, 50 + headsetBuffer[j + 1] * 50);
  }
  headsetMic.kill();
} // draw()
When I tried adding JACK, I included this line:
ac = new AudioContext(new AudioServerIO.Jack(), 44100, new IOAudioFormat(44100, 16, 4, 4));
but I get this error message:
Jun 22, 2016 9:17:24 PM org.jaudiolibs.beads.AudioServerIO run
SEVERE: null
org.jaudiolibs.jnajack.JackException: Can't find native library
at org.jaudiolibs.jnajack.Jack.getInstance(Jack.java:428)
at org.jaudiolibs.audioservers.jack.JackAudioServer.initialise(JackAudioServer.java:102)
at org.jaudiolibs.audioservers.jack.JackAudioServer.run(JackAudioServer.java:86)
at org.jaudiolibs.beads.AudioServerIO.run(Unknown Source)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsatisfiedLinkError: Unable to load library 'jack': Native library (win32-x86-64/jack.dll) not found in resource path ([file:/C:/Users/...etc...)
Also, when I'm in JACK I can't see my microphones (which seems like a huge red flag to me, though I'm brand new to JACK). Should this AudioContext show up as an input in JACK? Or is it the other way around: should I first find my mics there and then route them from JACK into Processing?
(Please forgive my inexperience, and thanks again! My shaky understanding of JACK makes me wonder whether I should revisit SuperCollider after all...)
I ran into the same problem a few years back, and I used a combination of JACK, JNAJack and Beads. You can follow this Beads Google Group thread for more details.
At the time I had to use this version of Beads (2012-04-23), but hopefully those changes have made it into the main project by now.
For reference, here is the basic class I used:
import java.util.Arrays;

import org.jaudiolibs.beads.AudioServerIO;

import net.beadsproject.beads.analysis.featureextractors.FFT;
import net.beadsproject.beads.analysis.featureextractors.PowerSpectrum;
import net.beadsproject.beads.analysis.segmenters.ShortFrameSegmenter;
import net.beadsproject.beads.core.AudioContext;
import net.beadsproject.beads.core.AudioIO;
import net.beadsproject.beads.core.UGen;
import net.beadsproject.beads.ugens.Gain;

import processing.core.PApplet;

public class BeadsJNA extends PApplet {

  AudioContext ac;
  ShortFrameSegmenter sfs;
  PowerSpectrum ps;

  public void setup() {
    // defining audio context with 6 inputs and 6 outputs - adjust this based on your sound card / JACK setup
    ac = new AudioContext(new AudioServerIO.Jack(), 512, AudioContext.defaultAudioFormat(6, 6));

    // getting 4 audio inputs (channels 1, 2, 3, 4)
    UGen microphoneIn = ac.getAudioInput(new int[]{1, 2, 3, 4});

    Gain g = new Gain(ac, 1, 0.5f);
    g.addInput(microphoneIn);
    ac.out.addInput(g);

    println("no. of inputs: " + ac.getAudioInput().getOuts());

    // test: get some FFT power spectrum data from the output
    sfs = new ShortFrameSegmenter(ac);
    sfs.addInput(ac.out);

    FFT fft = new FFT();
    sfs.addListener(fft);

    ps = new PowerSpectrum();
    fft.addListener(ps);

    ac.out.addDependent(sfs);

    ac.start();
  }

  public void draw() {
    background(255);

    float[] features = ps.getFeatures();
    if (features != null) {
      for (int x = 0; x < width; x++) {
        int featureIndex = (x * features.length) / width;
        int barHeight = Math.min((int) (features[featureIndex] * height), height - 1);
        line(x, height, x, height - barHeight);
      }
    }
  }

  public static void main(String[] args) {
    PApplet.main(BeadsJNA.class.getSimpleName());
  }
}
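If what you ultimately need is a separate float[] buffer per microphone (as in your edit), one approach that should work with the same setup is to request each channel as its own UGen and read its output buffer directly. A rough, untested fragment to fold into the setup() and draw() of the class above (the channel numbering and the getOutBuffer() call are the same ones used above and in your sketch):

// One UGen per input channel, so each mic ends up with its own float[] buffer.
// Assumes the same 6-in JACK AudioContext (ac) created in setup() above.
int numMics = 4;
UGen[] mics = new UGen[numMics];
Gain[] gains = new Gain[numMics];

for (int i = 0; i < numMics; i++) {
  mics[i] = ac.getAudioInput(new int[]{i + 1}); // channels are numbered from 1, as above
  gains[i] = new Gain(ac, 1, 0.5f);
  gains[i].addInput(mics[i]);
  ac.out.addInput(gains[i]); // keep each input in the signal chain so it gets updated
}

// Later, e.g. once per draw() call, grab the latest audio block for each mic:
for (int i = 0; i < numMics; i++) {
  float[] buffer = mics[i].getOutBuffer(0); // same call as in your sketch
  // ...feed buffer into the visualisation for mic i...
}

That way each mic lives in its own UGen, so you can visualise the channels independently instead of analysing the summed ac.out.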