How to convert a .pcm file to .wav or .mp3?
I am currently developing an Android application with recording and playback features. I am new to working with audio and am having some trouble with encodings and formats.
I can record and play back audio inside my app, but when I export it I cannot reproduce the audio elsewhere. The only way I have found is to export my .pcm file and convert it with Audacity.
This is my code for recording audio:
private Thread recordingThread;
private AudioRecord mRecorder;
private boolean isRecording = false;
private void startRecording() {
mRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
Constants.RECORDER_SAMPLERATE, Constants.RECORDER_CHANNELS,
Constants.RECORDER_AUDIO_ENCODING, Constants.BufferElements2Rec * Constants.BytesPerElement);
mRecorder.startRecording();
isRecording = true;
recordingThread = new Thread(new Runnable() {
public void run() {
writeAudioDataToFile();
}
}, "AudioRecorder Thread");
recordingThread.start();
}
private void writeAudioDataToFile() {
// Write the output audio in byte
FileOutputStream os = null;
try {
os = new FileOutputStream(mFileName);
} catch (FileNotFoundException e) {
e.printStackTrace();
}
while (isRecording) {
// gets the voice output from microphone to byte format
mRecorder.read(sData, 0, Constants.BufferElements2Rec);
try {
// write the buffered audio data from sData to the output file
byte bData[] = short2byte(sData);
os.write(bData, 0, Constants.BufferElements2Rec * Constants.BytesPerElement);
} catch (IOException e) {
e.printStackTrace();
}
}
try {
os.close();
} catch (IOException e) {
e.printStackTrace();
}
}
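(short2byte() is not shown above; for reference, a minimal sketch of the little-endian packing it is assumed to do, with 16-bit samples as in the Constants class, would be:)
private byte[] short2byte(short[] sData) {
    // Pack each 16-bit sample as two bytes, low byte first (little-endian),
    // which is the order a WAV "data" chunk expects. Illustrative sketch only.
    byte[] bytes = new byte[sData.length * 2];
    for (int i = 0; i < sData.length; i++) {
        bytes[i * 2] = (byte) (sData[i] & 0x00FF);
        bytes[i * 2 + 1] = (byte) (sData[i] >> 8);
    }
    return bytes;
}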
And the code to play the recorded audio:
private void startPlaying() {
new Thread(new Runnable() {
public void run() {
try {
File file = new File(mFileName);
byte[] audioData = null;
InputStream inputStream = new FileInputStream(mFileName);
audioData = new byte[Constants.BufferElements2Rec];
mPlayer = new AudioTrack(AudioManager.STREAM_MUSIC, Constants.RECORDER_SAMPLERATE,
AudioFormat.CHANNEL_OUT_MONO, Constants.RECORDER_AUDIO_ENCODING,
Constants.BufferElements2Rec * Constants.BytesPerElement, AudioTrack.MODE_STREAM);
final float duration = (float) file.length() / Constants.RECORDER_SAMPLERATE / 2;
Log.i(TAG, "PLAYBACK AUDIO");
Log.i(TAG, String.valueOf(duration));
mPlayer.setPositionNotificationPeriod(Constants.RECORDER_SAMPLERATE / 10);
mPlayer.setNotificationMarkerPosition(Math.round(duration * Constants.RECORDER_SAMPLERATE));
mPlayer.play();
int i = 0;
while ((i = inputStream.read(audioData)) != -1) {
try {
mPlayer.write(audioData, 0, i);
} catch (Exception e) {
Log.e(TAG, "Exception: " + e.getLocalizedMessage());
}
}
} catch (FileNotFoundException fe) {
Log.e(TAG, "File not found: " + fe.getLocalizedMessage());
} catch (IOException io) {
Log.e(TAG, "IO Exception: " + io.getLocalizedMessage());
}
}
}).start();
}
The constants defined in the Constants class are:
public class Constants {
final static public int RECORDER_SAMPLERATE = 44100;
final static public int RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_MONO;
final static public int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
final static public int BufferElements2Rec = 1024; // want to play 2048 (2K) since 2 bytes we use only 1024
final static public int BytesPerElement = 2; // 2 bytes in 16bit format
}
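(As an aside, hard-coding the buffer size can be fragile across devices. A hedged sketch, using the same constants, of sizing the buffer with AudioRecord.getMinBufferSize() instead:)
// Sketch only: ask the platform for the minimum workable buffer size
// and never allocate less than that when constructing the AudioRecord.
int minBufferSize = AudioRecord.getMinBufferSize(
        Constants.RECORDER_SAMPLERATE,
        Constants.RECORDER_CHANNELS,
        Constants.RECORDER_AUDIO_ENCODING);
int bufferSize = Math.max(minBufferSize,
        Constants.BufferElements2Rec * Constants.BytesPerElement);
mRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
        Constants.RECORDER_SAMPLERATE, Constants.RECORDER_CHANNELS,
        Constants.RECORDER_AUDIO_ENCODING, bufferSize);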
If I export the file as-is, I can convert it with Audacity and then play it. However, I really need to export it in a format that can be played back directly.
I have seen answers that implement LAME and am currently looking into that. I also found an answer that converts the file with the following method:
private File rawToWave(final File rawFile, final String filePath) throws IOException {
File waveFile = new File(filePath);
byte[] rawData = new byte[(int) rawFile.length()];
DataInputStream input = null;
try {
input = new DataInputStream(new FileInputStream(rawFile));
input.read(rawData);
} finally {
if (input != null) {
input.close();
}
}
DataOutputStream output = null;
try {
output = new DataOutputStream(new FileOutputStream(waveFile));
// WAVE header
// see http://ccrma.stanford.edu/courses/422/projects/WaveFormat/
writeString(output, "RIFF"); // chunk id
writeInt(output, 36 + rawData.length); // chunk size
writeString(output, "WAVE"); // format
writeString(output, "fmt "); // subchunk 1 id
writeInt(output, 16); // subchunk 1 size
writeShort(output, (short) 1); // audio format (1 = PCM)
writeShort(output, (short) 1); // number of channels
writeInt(output, Constants.RECORDER_SAMPLERATE); // sample rate
writeInt(output, Constants.RECORDER_SAMPLERATE * 2); // byte rate
writeShort(output, (short) 2); // block align
writeShort(output, (short) 16); // bits per sample
writeString(output, "data"); // subchunk 2 id
writeInt(output, rawData.length); // subchunk 2 size
// Audio data (conversion big endian -> little endian)
short[] shorts = new short[rawData.length / 2];
ByteBuffer.wrap(rawData).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts);
ByteBuffer bytes = ByteBuffer.allocate(shorts.length * 2);
for (short s : shorts) {
bytes.putShort(s);
}
output.write(bytes.array());
} finally {
if (output != null) {
output.close();
}
}
return waveFile;
}
private void writeInt(final DataOutputStream output, final int value) throws IOException {
output.write(value >> 0);
output.write(value >> 8);
output.write(value >> 16);
output.write(value >> 24);
}
private void writeShort(final DataOutputStream output, final short value) throws IOException {
output.write(value >> 0);
output.write(value >> 8);
}
private void writeString(final DataOutputStream output, final String value) throws IOException {
for (int i = 0; i < value.length(); i++) {
output.write(value.charAt(i));
}
}
But this one, when exported, plays for the right duration but is nothing but white noise.
Some answers I have tried without success:
- How to convert PCM raw data to mp3 file?
Can anyone point out the best solution? Is it really to implement LAME, or can this be done in a more straightforward way? And if so, why does the code sample above turn the file into white noise?
Just for the record, I solved my need to record audio that plays in common players by using MediaRecorder instead of AudioRecord.
To start recording:
MediaRecorder mRecorder = new MediaRecorder();
mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
mRecorder.setOutputFile(Environment.getExternalStorageDirectory()
.getAbsolutePath() + "/recording.3gp");
mRecorder.prepare();
mRecorder.start();
And to play the recording:
mPlayer = new MediaPlayer();
mPlayer.setDataSource(Environment.getExternalStorageDirectory()
.getAbsolutePath() + "/recording.3gp");
mPlayer.prepare();
mPlayer.start();
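(Not shown above: the recorder and player should also be stopped and released when you are done with them; a minimal sketch, assuming the same mRecorder/mPlayer fields:)
// Stop and free the MediaRecorder once recording is finished.
mRecorder.stop();
mRecorder.release();
mRecorder = null;
// Release the MediaPlayer once playback is no longer needed.
mPlayer.release();
mPlayer = null;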
Most of your code is correct. The only problem I can see is the part where you write the PCM data out as a WAV file. This should be simple, because WAV = metadata + PCM (in that order). This should work:
private void rawToWave(final File rawFile, final File waveFile) throws IOException {
byte[] rawData = new byte[(int) rawFile.length()];
DataInputStream input = null;
try {
input = new DataInputStream(new FileInputStream(rawFile));
input.read(rawData);
} finally {
if (input != null) {
input.close();
}
}
DataOutputStream output = null;
try {
output = new DataOutputStream(new FileOutputStream(waveFile));
// WAVE header
// see http://ccrma.stanford.edu/courses/422/projects/WaveFormat/
writeString(output, "RIFF"); // chunk id
writeInt(output, 36 + rawData.length); // chunk size
writeString(output, "WAVE"); // format
writeString(output, "fmt "); // subchunk 1 id
writeInt(output, 16); // subchunk 1 size
writeShort(output, (short) 1); // audio format (1 = PCM)
writeShort(output, (short) 1); // number of channels
writeInt(output, 44100); // sample rate
writeInt(output, 44100 * 2); // byte rate (sample rate * 2 bytes per mono 16-bit sample)
writeShort(output, (short) 2); // block align
writeShort(output, (short) 16); // bits per sample
writeString(output, "data"); // subchunk 2 id
writeInt(output, rawData.length); // subchunk 2 size
// Audio data (conversion big endian -> little endian)
short[] shorts = new short[rawData.length / 2];
ByteBuffer.wrap(rawData).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts);
ByteBuffer bytes = ByteBuffer.allocate(shorts.length * 2);
for (short s : shorts) {
bytes.putShort(s);
}
output.write(fullyReadFileToBytes(rawFile));
} finally {
if (output != null) {
output.close();
}
}
}
byte[] fullyReadFileToBytes(File f) throws IOException {
int size = (int) f.length();
byte bytes[] = new byte[size];
byte tmpBuff[] = new byte[size];
FileInputStream fis= new FileInputStream(f);
try {
int read = fis.read(bytes, 0, size);
if (read < size) {
int remain = size - read;
while (remain > 0) {
read = fis.read(tmpBuff, 0, remain);
System.arraycopy(tmpBuff, 0, bytes, size - remain, read);
remain -= read;
}
}
} catch (IOException e){
throw e;
} finally {
fis.close();
}
return bytes;
}
private void writeInt(final DataOutputStream output, final int value) throws IOException {
output.write(value >> 0);
output.write(value >> 8);
output.write(value >> 16);
output.write(value >> 24);
}
private void writeShort(final DataOutputStream output, final short value) throws IOException {
output.write(value >> 0);
output.write(value >> 8);
}
private void writeString(final DataOutputStream output, final String value) throws IOException {
for (int i = 0; i < value.length(); i++) {
output.write(value.charAt(i));
}
}
How to use it
It is really simple to use. Just call it like this:
File f1 = new File("/sdcard/44100Sampling-16bit-mono-mic.pcm"); // The location of your PCM file
File f2 = new File("/sdcard/44100Sampling-16bit-mono-mic.wav"); // The location where you want your WAV file
try {
rawToWave(f1, f2);
} catch (IOException e) {
e.printStackTrace();
}
How it all works
As you can see, the WAV header is the only difference between the WAV and PCM file formats. The assumption here is that you are recording 16-bit PCM mono audio (which, according to your code, you are). The rawToWave function just neatly prepends the header so that music players know what to expect when they open the file, and after the header it simply writes the PCM data.
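If you want to convince yourself that the output really is just those 44 header bytes followed by the PCM data, a quick hedged check (sketch only, assuming waveFile is the File produced by rawToWave() and the calling method throws IOException) is to read the header back and look at the chunk IDs:
DataInputStream in = new DataInputStream(new FileInputStream(waveFile));
try {
    byte[] header = new byte[44];
    in.readFully(header);                                     // the fixed-size WAV header
    Log.d("WavCheck", new String(header, 0, 4, "US-ASCII")    // "RIFF"
            + " / " + new String(header, 8, 4, "US-ASCII")    // "WAVE"
            + " / " + new String(header, 36, 4, "US-ASCII")); // "data"
} finally {
    in.close();
}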
Cool tip
If you want to shift the pitch of your voice, or build a voice-changer app, all you have to do is increase/decrease the value written by writeInt(output, 44100); // sample rate in the code. Lowering it tells the player to play the data back at a different rate, which changes the output pitch. Just some extra 'good to know' stuff. :)
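For instance, a hypothetical helper (not part of the answer above, and the method name is made up) that patches those two header fields in an already-written mono 16-bit WAV file could look like this:
private void changeWavSampleRate(final File waveFile, final int newRate) throws IOException {
    // Sketch only: overwrite the little-endian sample-rate field (offset 24)
    // and byte-rate field (offset 28) of a mono, 16-bit PCM WAV header.
    RandomAccessFile raf = new RandomAccessFile(waveFile, "rw");
    try {
        raf.seek(24);
        raf.write(newRate & 0xFF);
        raf.write((newRate >> 8) & 0xFF);
        raf.write((newRate >> 16) & 0xFF);
        raf.write((newRate >> 24) & 0xFF);
        int byteRate = newRate * 2; // 2 bytes per sample, 1 channel
        raf.write(byteRate & 0xFF);
        raf.write((byteRate >> 8) & 0xFF);
        raf.write((byteRate >> 16) & 0xFF);
        raf.write((byteRate >> 24) & 0xFF);
    } finally {
        raf.close();
    }
}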
I know it's late and you have already got things working with MediaRecorder, but I wanted to share my answer, since it took me some time to figure it out. :)
When you record the audio, the data is read as shorts from your AudioRecord object and then converted to bytes before being stored in the .pcm file.
Now, when you write the .wav file, you are doing that short conversion again. This is not needed. So in your code, if you remove the following block and write rawData directly at the end of the .wav file, it will work just fine.
short[] shorts = new short[rawData.length / 2];
ByteBuffer.wrap(rawData).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts);
ByteBuffer bytes = ByteBuffer.allocate(shorts.length * 2);
for (short s : shorts) {
bytes.putShort(s);
}
This is the code you end up with after removing the duplicated conversion block:
writeInt(output, rawData.length); // subchunk 2 size
// removed the duplicate short conversion
output.write(rawData);
I tried the recording code writeAudioDataToFile() above. It records the audio and converts it to .wav format perfectly, but when I played the recording back it was too fast: 5 seconds of audio finished in 2.5 seconds. I then noticed that this was caused by the short2byte() function.
For anyone with the same problem: don't use short2byte(); instead write sData directly with os.write(sData, 0, Constants.BufferElements2Rec * Constants.BytesPerElement);, where sData is declared as a byte[].
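In other words, a minimal sketch of that variant (variable names are illustrative), reading from the AudioRecord straight into a byte array and writing it unchanged:
// Sketch only: use the byte[] overload of AudioRecord.read() so no
// short-to-byte conversion is needed before writing the .pcm file.
byte[] bData = new byte[Constants.BufferElements2Rec * Constants.BytesPerElement];
while (isRecording) {
    int read = mRecorder.read(bData, 0, bData.length);
    if (read > 0) {
        os.write(bData, 0, read);
    }
}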