What happens to data in buffer on AudioTrack.write()
Here is some code I use in Android Studio to generate a continuous sine wave. The whole thing runs in a thread. My question is: when I call audio.write(), what happens to any data that may still be in the buffer? Does it dump the old samples and write the new set, or does it append the new array of samples to the remaining ones?
int sr = 44100; // sample rate in Hz (sliderVal and isRunning are set elsewhere)
int buffSize = AudioTrack.getMinBufferSize(sr, AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT);

// create the AudioTrack object
AudioTrack audio = new AudioTrack(AudioManager.STREAM_MUSIC,
        sr,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        buffSize,
        AudioTrack.MODE_STREAM);

// initialise values for synthesis
short[] samples = new short[buffSize]; // array the same size as the buffer
int amp = 10000;                       // amplitude of the waveform
double twopi = 8. * Math.atan(1.);     // 2*pi (atan(1) = pi/4)
double fr = 440;                       // the frequency to create
double ph = 0;                         // running phase

// start audio
audio.play();

// synthesis loop
while (isRunning) {
    fr = 440 + 4.4 * sliderVal;
    for (int i = 0; i < buffSize; i++) {
        samples[i] = (short) (amp * Math.sin(ph));
        ph += twopi * fr / sr;
    }
    audio.write(samples, 0, buffSize);
}

// stop the audio track
audio.stop();
audio.release();
You're setting the buffer size correctly based on the device's capability - that's important for minimizing latency.
Then you're building buffers and chunking them out to the hardware so they can be heard. Nothing lingers "in there": you build a buffer, then write the whole thing to the track each time with track.write().
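To make the append-vs-overwrite question concrete: in MODE_STREAM, each write() queues its samples behind whatever is already buffered; queued data is never discarded (the blocking write simply waits for space). Here is a minimal plain-Java sketch of that queueing behaviour - a conceptual model only, not the real AudioTrack internals, and all the names are mine:

```java
import java.util.ArrayDeque;

// Conceptual model of MODE_STREAM writes: write() appends samples behind
// whatever is already queued, and playback drains from the front (FIFO).
// Old data is never dumped.
public class StreamModel {
    private final ArrayDeque<Short> queued = new ArrayDeque<>();
    private final int capacity;

    public StreamModel(int capacity) {
        this.capacity = capacity;
    }

    // Like AudioTrack.write(samples, offset, n): appends after existing data.
    // The real blocking write waits for space; this sketch just reports how
    // many samples fit.
    public int write(short[] samples, int offset, int n) {
        int written = 0;
        while (written < n && queued.size() < capacity) {
            queued.addLast(samples[offset + written]);
            written++;
        }
        return written;
    }

    // Like the hardware pulling the oldest sample for playback.
    public short read() {
        return queued.removeFirst();
    }

    public int available() {
        return queued.size();
    }
}
```

Reading back the oldest sample first is the point: a second write lands behind the first one instead of replacing it.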
Here's my generateTone routine, very similar to yours. It takes a frequency in Hz and a duration in ms, and is called like this:
AudioTrack sound = generateTone(440, 250);
And the generateTone method:
private AudioTrack generateTone(double freqHz, int durationMs) {
    // count = frames * 2 channels, forced even with "& ~1" so stereo pairs stay complete
    int count = (int) (44100.0 * 2.0 * (durationMs / 1000.0)) & ~1;
    short[] samples = new short[count];
    for (int i = 0; i < count; i += 2) {
        // i / 2 is the frame index; using i directly would double the pitch
        short sample = (short) (Math.sin(2 * Math.PI * (i / 2) / (44100.0 / freqHz)) * 0x7FFF);
        samples[i + 0] = sample; // left
        samples[i + 1] = sample; // right
    }
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
            count * (Short.SIZE / 8), AudioTrack.MODE_STATIC);
    track.write(samples, 0, count);
    return track;
}
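The sizing arithmetic in generateTone can be sanity-checked off-device. Below is a plain-Java helper (class and method names are mine) reproducing the count expression: 44100 frames per second, times 2 samples per stereo frame, times the duration, with & ~1 rounding down to an even count so every left/right pair stays complete:

```java
public class ToneMath {
    // Same sample-count expression generateTone uses: frames * 2 channels,
    // forced even with "& ~1" so stereo pairs stay aligned.
    public static int stereoSampleCount(int sampleRate, int durationMs) {
        return (int) (sampleRate * 2.0 * (durationMs / 1000.0)) & ~1;
    }
}
```

For generateTone(440, 250) this gives 22050 samples, i.e. 22050 * 2 = 44100 bytes - exactly the bufferSizeInBytes passed to the MODE_STATIC constructor.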
AudioTrack is cool because you can create any kind of sound if you have the right algorithm. Puredata and Csound make it easier on Android, though.
(I wrote a big chapter on audio in my book, Android Software Development - Collection of Practical Projects.)