Writing an audio file to websocket using IBM Watson Android sdk
After going through posts on the IBM developer forums, I realized that the Android SDK reads bytes from the microphone recording and writes them to the websocket. I am now trying to read bytes from an audio file stored on the device and write them to the websocket instead. How should I go about this? So far I have:
public class AudioCaptureThread extends Thread {
    private static final String TAG = "AudioCaptureThread";
    private boolean mStop = false;
    private boolean mStopped = false;
    private int mSamplingRate = -1;
    private IAudioConsumer mIAudioConsumer = null;

    // the thread receives high priority because it needs to do real time audio capture
    // THREAD_PRIORITY_URGENT_AUDIO = "Standard priority of the most important audio threads"
    public AudioCaptureThread(int iSamplingRate, IAudioConsumer IAudioConsumer) {
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
        mSamplingRate = iSamplingRate;
        mIAudioConsumer = IAudioConsumer;
    }

    // once the thread is started it runs nonstop until it is stopped from the outside
    @Override
    public void run() {
        File path = Activity.getContext.getExternalFilesDir(null);
        File file = new File(path, "whatstheweatherlike.wav");
        int length = (int) file.length();
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] b = new byte[length];
        FileInputStream in = null;
        try {
            in = new FileInputStream(file);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        try {
            for (int readNum; (readNum = in.read(b)) != -1; ) {
                bos.write(b, 0, readNum);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        byte[] bytes = bos.toByteArray();
        mIAudioConsumer.consume(bytes);
    }
}
However, Activity.getContext is not recognized. I can convert the file to bytes in MainActivity, but then how do I write it to the websocket? Am I on the right track, or is this not the right approach? If it is, how do I solve this problem?

Any help is appreciated!
Activity.getContext is not recognized because there is no reference to an Activity here; this class is just a Thread. You would have to pass in the Activity, although if all you need is the Context, it probably makes more sense to pass in a Context instead.
You have the right idea: you can create a FileInputStream and use that. You may want to use our MicrophoneCaptureThread as a reference. It would be a very similar situation, except you would use your FileInputStream instead of reading from the microphone. You can check it out (along with an example project that uses it) here: https://github.com/watson-developer-cloud/android-sdk/blob/master/library/src/main/java/com/ibm/watson/developer_cloud/android/library/audio/MicrophoneCaptureThread.java