How to convert that UnsafeMutablePointer<UnsafeMutablePointer<Float>> variable into AudioBufferList?
I have this EZAudio method in my Swift project for capturing audio from the microphone:
func microphone(microphone: EZMicrophone!, hasAudioReceived bufferList: UnsafeMutablePointer<UnsafeMutablePointer<Float>>, withBufferSize bufferSize: UInt32, withNumberOfChannels numberOfChannels: UInt32) {
}
But what I actually need is for that "bufferList" parameter to come in as an AudioBufferList type, so that I can send those audio packets through a socket, just like I did in Objective-C:
// Objective-C pseudocode:
for (int i = 0; i < bufferList.mNumberBuffers; ++i) {
    AudioBuffer buffer = bufferList.mBuffers[i];
    audio = ["audio": NSData(bytes: buffer.mData, length: Int(buffer.mDataByteSize))];
    socket.emit("message", audio);
}
How do I convert that UnsafeMutablePointer<UnsafeMutablePointer<Float>> variable into an AudioBufferList?
I believe you would create an AudioBufferList pointer and read its memory property:
let audioBufferList = UnsafePointer<AudioBufferList>(bufferList).memory
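That's Swift 1/2 syntax. For what it's worth, here is a minimal sketch of how I'd expect the same cast to read in Swift 3 and later, where memory became pointee and pointer reinterpretation goes through UnsafeRawPointer (the helper name is just for illustration, and it still relies on EZAudio actually handing you an AudioBufferList behind that pointer):

import CoreAudio

// Illustrative helper: reinterpret the raw bytes behind the pointer as an
// AudioBufferList and copy the struct out (Swift 3+ equivalent of .memory).
func asAudioBufferList(_ bufferList: UnsafeMutablePointer<UnsafeMutablePointer<Float>>) -> AudioBufferList {
    return UnsafeRawPointer(bufferList)
        .assumingMemoryBound(to: AudioBufferList.self)
        .pointee
}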
I was able to stream the audio from the microphone into the socket like this:
func microphone(microphone: EZMicrophone!, hasBufferList bufferList: UnsafeMutablePointer<AudioBufferList>, withBufferSize bufferSize: UInt32, withNumberOfChannels numberOfChannels: UInt32) {
    let blist: AudioBufferList = bufferList[0]
    let buffer: AudioBuffer = blist.mBuffers
    let audio = ["audio": NSData(bytes: buffer.mData, length: Int(buffer.mDataByteSize))]
    socket.emit("message", audio) // this socket comes from Foundation framework
}
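One caveat: from Swift, blist.mBuffers only exposes the first AudioBuffer. If the callback ever delivers more than one buffer (for example non-interleaved channels), a sketch of the same delegate method using CoreAudio's UnsafeMutableAudioBufferListPointer wrapper to walk all of them could look like this (same socket and class as above assumed):

func microphone(microphone: EZMicrophone!, hasBufferList bufferList: UnsafeMutablePointer<AudioBufferList>, withBufferSize bufferSize: UInt32, withNumberOfChannels numberOfChannels: UInt32) {
    // Wrap the pointer so every AudioBuffer in the list can be iterated, not just the first.
    let buffers = UnsafeMutableAudioBufferListPointer(bufferList)
    for buffer in buffers {
        let audio = ["audio": NSData(bytes: buffer.mData, length: Int(buffer.mDataByteSize))]
        socket.emit("message", audio)
    }
}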
This general AudioStreamBasicDescription setup worked for me; you may need to adjust it to your own needs or leave out some parts, such as the waveform animation:
func initializeStreaming() {
    // 16 kHz, mono, 16-bit signed linear PCM
    var streamDescription: AudioStreamBasicDescription = AudioStreamBasicDescription()
    streamDescription.mSampleRate = 16000.0
    streamDescription.mFormatID = kAudioFormatLinearPCM
    streamDescription.mFramesPerPacket = 1
    streamDescription.mChannelsPerFrame = 1
    streamDescription.mBytesPerFrame = streamDescription.mChannelsPerFrame * 2
    streamDescription.mBytesPerPacket = streamDescription.mFramesPerPacket * streamDescription.mBytesPerFrame
    streamDescription.mBitsPerChannel = 16
    streamDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger

    microphone = EZMicrophone(microphoneDelegate: self, withAudioStreamBasicDescription: streamDescription, startsImmediately: false)

    // Waveform animation setup; omit if you don't need the plot.
    waveview?.plotType = EZPlotType.Buffer
    waveview?.shouldFill = false
    waveview?.shouldMirror = false
}
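Because startsImmediately is false, the capture has to be kicked off by hand at some point. A minimal sketch, assuming the microphone property configured in initializeStreaming() above; the startStreaming/stopStreaming wrappers are just illustrative names around EZMicrophone's startFetchingAudio and stopFetchingAudio:

func startStreaming() {
    // Start delivering buffers to the delegate callbacks above.
    microphone.startFetchingAudio()
}

func stopStreaming() {
    microphone.stopFetchingAudio()
}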
Getting this thing up and running is complicated, good luck!