Playing Audio on iOS from a Socket Connection
Hoping you can help me out with this one. I've seen a lot of questions related to this, but none of them really helped me figure out what I'm doing wrong here.
On Android I have an AudioRecord that records audio and sends it over a socket connection to the client as a byte array. That part was very straightforward on Android and works perfectly.
When I moved to iOS I found out there was no easy way to do this, so after two days of research and plug-and-play, this is what I've got. It still doesn't play any audio: it makes a noise on startup, but none of the audio streamed over the socket gets played. I confirmed the socket is receiving data by logging each element in the buffer array.
Here is all the code I'm using; a lot of it is reused from a bunch of sites, and I can't remember all the links. (BTW: this uses AudioUnits.)
First, the audio processor:
Playback callback
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    // This is the reference to the object that owns the callback.
    AudioProcessor *audioProcessor = (__bridge AudioProcessor *)inRefCon;

    // Iterate over the incoming stream and copy it to the output stream.
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];

        // Find the minimum size.
        UInt32 size = MIN(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);

        // Copy our audio buffer into the output buffer, which gets played after this function returns.
        memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);

        // Set the data size.
        buffer.mDataByteSize = size;
    }
    return noErr;
}
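One thing worth pointing out about this callback: it copies whatever currently sits in audioBuffer on every render cycle, whether or not new data has arrived, so the same block gets replayed until the socket delivers more. A common pattern is to zero-fill the output and set the silence flag when there is nothing fresh to play. Below is a minimal sketch of that variant; the hasFreshData accessor is hypothetical, not part of the class shown here:

static OSStatus playbackCallbackWithSilence(void *inRefCon,
                                            AudioUnitRenderActionFlags *ioActionFlags,
                                            const AudioTimeStamp *inTimeStamp,
                                            UInt32 inBusNumber,
                                            UInt32 inNumberFrames,
                                            AudioBufferList *ioData) {
    AudioProcessor *audioProcessor = (__bridge AudioProcessor *)inRefCon;

    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];
        if ([audioProcessor hasFreshData]) { // hypothetical "new data arrived" flag
            UInt32 size = MIN(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);
            memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);
            buffer.mDataByteSize = size;
        } else {
            // Nothing new: output silence instead of replaying stale samples.
            memset(buffer.mData, 0, buffer.mDataByteSize);
            *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
        }
    }
    return noErr;
}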
Audio processor initialization
-(void)initializeAudio
{
    OSStatus status;

    // Describe the audio component we want: a RemoteIO output unit.
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;                // we want output
    desc.componentSubType = kAudioUnitSubType_RemoteIO;        // we want input and output
    desc.componentFlags = 0;                                   // must be zero
    desc.componentFlagsMask = 0;                               // must be zero
    desc.componentManufacturer = kAudioUnitManufacturer_Apple; // select provider

    // Find the AU component by description.
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Create the audio unit from the component.
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    [self hasError:status:__FILE__:__LINE__];

    // Enable IO for playback on the output bus.
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO, // use IO
                                  kAudioUnitScope_Output,            // scope to output
                                  kOutputBus,                        // select output bus (0)
                                  &flag,                             // set flag
                                  sizeof(flag));
    [self hasError:status:__FILE__:__LINE__];

    /*
     We need to specify the format we want to work with.
     We use linear PCM because it's uncompressed and we work on raw data.
     We want 16 bits, 2 bytes per packet/frame, mono, at 44kHz.
     */
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = SAMPLE_RATE;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;

    // Set the format on the output stream.
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &audioFormat,
                                  sizeof(audioFormat));
    [self hasError:status:__FILE__:__LINE__];

    /*
     We need a callback structure which holds a pointer to the
     playbackCallback and a reference to the audio processor object,
     so that we hear on the output what is coming in over the socket.
     */
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = (__bridge void *)(self);

    // Set playbackCallback as the render callback for the output bus.
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  kOutputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));
    [self hasError:status:__FILE__:__LINE__];

    // Reset flag to 0: tell the audio unit not to allocate its own render
    // buffer, so that we can provide one and write into it directly.
    flag = 0;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_ShouldAllocateBuffer,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &flag,
                                  sizeof(flag));

    // Set the number of channels to mono and allocate a block size of
    // 1024 bytes (512 16-bit samples).
    audioBuffer.mNumberChannels = 1;
    audioBuffer.mDataByteSize = 512 * 2;
    audioBuffer.mData = malloc(512 * 2);

    // Initialize the audio unit and cross fingers =)
    status = AudioUnitInitialize(audioUnit);
    [self hasError:status:__FILE__:__LINE__];

    NSLog(@"Started");
}
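For reference, kOutputBus, kInputBus, and SAMPLE_RATE are not defined in the snippets above. On a RemoteIO unit the output element is 0 and the input element is 1, so presumably they look something like this:

#define kOutputBus  0      // RemoteIO element 0 is the speaker side
#define kInputBus   1      // RemoteIO element 1 is the microphone side
#define SAMPLE_RATE 44100  // matches the 44kHz mentioned in the comments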
Starting playback
-(void)start
{
    // Start the audio unit. You should hear something, hopefully :)
    OSStatus status = AudioOutputUnitStart(audioUnit);
    [self hasError:status:__FILE__:__LINE__];
}
Adding data to the buffer
-(void)processBuffer:(AudioBufferList *)audioBufferList
{
    AudioBuffer sourceBuffer = audioBufferList->mBuffers[0];

    // Check whether the incoming data's byte size has changed.
    if (audioBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
        // Free the old buffer.
        free(audioBuffer.mData);
        // Assign the new byte size and allocate mData accordingly.
        audioBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
        audioBuffer.mData = malloc(sourceBuffer.mDataByteSize);
    }

    // Copy the incoming audio data into our audio buffer.
    memcpy(audioBuffer.mData, audioBufferList->mBuffers[0].mData, audioBufferList->mBuffers[0].mDataByteSize);
}
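Note that processBuffer overwrites the single shared audioBuffer every time data arrives, while the render callback reads it concurrently from the audio thread, and the free/malloc pair races with that reader. One way around both problems is to push incoming bytes into a FIFO and let the render callback drain it. A single-producer/single-consumer ring buffer sketch, with names and sizes of my own choosing:

#include <stdint.h>

#define RING_SIZE (16 * 1024) // must be a power of two for the index math below

typedef struct {
    uint8_t data[RING_SIZE];
    volatile uint32_t head; // total bytes written; only the socket thread updates this
    volatile uint32_t tail; // total bytes read; only the render thread updates this
} RingBuffer;

// Called from the socket thread. No overflow check: a real implementation
// would drop or block when the buffer is full.
static void RingBuffer_Write(RingBuffer *rb, const uint8_t *src, uint32_t len) {
    for (uint32_t i = 0; i < len; i++) {
        rb->data[rb->head % RING_SIZE] = src[i];
        rb->head++;
    }
}

// Called from the render callback; returns how many bytes were produced.
static uint32_t RingBuffer_Read(RingBuffer *rb, uint8_t *dst, uint32_t maxLen) {
    uint32_t n = 0;
    while (n < maxLen && rb->tail != rb->head) {
        dst[n++] = rb->data[rb->tail % RING_SIZE];
        rb->tail++;
    }
    return n;
}

volatile alone is not a real memory barrier; production code would use C11 atomics or an existing lock-free FIFO. But the shape of the fix is the same: each side only writes its own index, so neither thread ever blocks the other.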
Stream connection callback (socket)
-(void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode
{
    if (eventCode == NSStreamEventHasBytesAvailable)
    {
        if (aStream == inputStream) {
            uint8_t buffer[1024];
            NSInteger len;
            while ([inputStream hasBytesAvailable]) {
                len = [inputStream read:buffer maxLength:sizeof(buffer)];
                if (len > 0)
                {
                    AudioBuffer abuffer;
                    abuffer.mDataByteSize = (UInt32)len; // sample size
                    abuffer.mNumberChannels = 1;         // one channel
                    abuffer.mData = buffer;

                    // Decode the mu-law bytes to 16-bit PCM
                    // (note: not actually used below).
                    int16_t audioBuffer[len];
                    for (int i = 0; i < len; i++)
                    {
                        audioBuffer[i] = MuLaw_Decode(buffer[i]);
                    }

                    AudioBufferList bufferList;
                    bufferList.mNumberBuffers = 1;
                    bufferList.mBuffers[0] = abuffer;

                    NSLog(@"Received %u bytes", (unsigned)bufferList.mBuffers[0].mDataByteSize);
                    [audioProcessor processBuffer:&bufferList];
                }
            }
        }
    }
}
MuLaw_Decode
#define MULAW_BIAS 33

int16_t MuLaw_Decode(uint8_t number)
{
    uint8_t sign = 0, position = 0;
    int16_t decoded = 0;

    number = ~number;
    if (number & 0x80)
    {
        number &= ~(1 << 7);
        sign = 1;
    }
    position = ((number & 0xF0) >> 4) + 5;
    decoded = ((1 << position) | ((number & 0x0F) << (position - 4)) | (1 << (position - 5))) - MULAW_BIAS;
    return (sign == 0) ? decoded : -decoded;
}
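A quick way to sanity-check the decoder in isolation: in µ-law, 0xFF and 0x7F are the two zero codes (positive and negative zero), so both should decode to 0, and magnitudes should grow as the code moves away from them. A throwaway test along these lines, compiled together with the MuLaw_Decode above:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    // The two zero codes should both decode to silence.
    printf("0xFF -> %d\n", MuLaw_Decode(0xFF)); // expect 0
    printf("0x7F -> %d\n", MuLaw_Decode(0x7F)); // expect 0

    // Walking down from 0xFE, decoded magnitudes should increase.
    for (uint8_t b = 0xFE; b >= 0xF0; b--) {
        printf("0x%02X -> %d\n", b, MuLaw_Decode(b));
    }
    return 0;
}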
And the code that opens the connection and initializes the audio processor:
CFReadStreamRef readStream;
CFWriteStreamRef writeStream;
CFStreamCreatePairWithSocketToHost(NULL, (__bridge CFStringRef)@"10.0.0.14", 6000, &readStream, &writeStream);
inputStream = (__bridge_transfer NSInputStream *)readStream;
outputStream = (__bridge_transfer NSOutputStream *)writeStream;
[inputStream setDelegate:self];
[outputStream setDelegate:self];
[inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[inputStream open];
[outputStream open];
audioProcessor = [[AudioProcessor alloc] init];
[audioProcessor start];
[audioProcessor setGain:1];
I think the problem in my code is in the socket connection callback, where I'm not doing the right thing with the data.
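For what it's worth, one concrete thing stands out in the stream callback above: the decoded int16_t samples in audioBuffer are never used. abuffer.mData still points at the raw µ-law bytes, and mDataByteSize is the µ-law byte count rather than len * sizeof(int16_t), so the audio unit would end up playing µ-law bytes as if they were 16-bit PCM. A sketch of the inner block with the decoded samples actually passed along (same names as above):

if (len > 0)
{
    // Decode the mu-law bytes to 16-bit linear PCM first.
    int16_t decoded[len];
    for (int i = 0; i < len; i++) {
        decoded[i] = MuLaw_Decode(buffer[i]);
    }

    AudioBuffer abuffer;
    abuffer.mNumberChannels = 1;
    abuffer.mDataByteSize   = (UInt32)(len * sizeof(int16_t)); // PCM size, not mu-law size
    abuffer.mData           = decoded; // safe: processBuffer memcpy()s it before we return

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0]    = abuffer;
    [audioProcessor processBuffer:&bufferList];
}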
I solved it in the end; see my answer here.
I was going to put the code here, but it would be a lot of copy-pasting.