Could NaN be causing the occasional crash in this Core Audio iOS app?
My first app synthesised music audio from a sine look-up table using methods deprecated since iOS 6. I have just revised it to address warnings about AudioSession, helped by this blog and Apple's guide to the AVFoundation framework. The audio session warnings have now been resolved and the app produces audio just as it did before. It currently runs under iOS 9.
However, the app occasionally crashes for no apparent reason. I checked this SO post, but it seems to deal with accessing rather than generating raw audio data, so it may not address timing issues. I suspect a buffering problem, but before changing or fine-tuning anything in the code I need to understand what the problem might be.
I have a deadline to deliver the revised app to users, so I would be very grateful to hear from anyone who has dealt with a similar issue.
Here is the problem. The app drops into the debugger when the simulator reports:
com.apple.coreaudio.AQClient (8): EXC_BAD_ACCESS (code=1, address=0xffffffff10626000)
In the Debug Navigator, under Thread 8 (com.apple.coreaudio.AQClient (8)), it reports:
0 -[Synth fillBuffer:frames:]
1 -[PlayView audioBufferPlayer:fillBuffer:format:]
2 playCallback
This line of code in fillBuffer is highlighted
float sineValue = (1.0f - b)*sine[a] + b*sine[c];
...as is this line in audioBufferPlayer
int packetsWritten = [synth fillBuffer:buffer->mAudioData frames:packetsPerBuffer];
...and this one in playCallback
[player.delegate audioBufferPlayer:player fillBuffer:inBuffer format:player.audioFormat];
Here is the code for audioBufferPlayer (the delegate, essentially the same as the demo mentioned above).
- (void)audioBufferPlayer:(AudioBufferPlayer*)audioBufferPlayer fillBuffer:(AudioQueueBufferRef)buffer format:(AudioStreamBasicDescription)audioFormat
{
[synthLock lock];
int packetsPerBuffer = buffer->mAudioDataBytesCapacity / audioFormat.mBytesPerPacket;
int packetsWritten = [synth fillBuffer:buffer->mAudioData frames:packetsPerBuffer];
buffer->mAudioDataByteSize = packetsWritten * audioFormat.mBytesPerPacket;
[synthLock unlock];
}
...and its initialisation (in myViewController)
- (id)init
{
if ((self = [super init])) {
// The audio buffer is managed (filled up etc.) within its own thread (Audio Queue thread)
// Since we are also responding to changes from the GUI, we need a lock so both threads
// do not attempt to change the same value independently.
synthLock = [[NSLock alloc] init];
// Synth and the AudioBufferPlayer must use the same sample rate.
float sampleRate = 44100.0f;
// Initialise synth to fill the audio buffer with audio samples.
synth = [[Synth alloc] initWithSampleRate:sampleRate];
// Initialise note buttons
buttons = [[NSMutableArray alloc] init];
// Initialise the audio buffer.
player = [[AudioBufferPlayer alloc] initWithSampleRate:sampleRate channels:1 bitsPerChannel:16 packetsPerBuffer:1024];
player.delegate = self;
player.gain = 0.9f;
[[AVAudioSession sharedInstance] setActive:YES error:nil];
}
return self;
} // initialisation
...and playCallback
static void playCallback( void* inUserData, AudioQueueRef inAudioQueue, AudioQueueBufferRef inBuffer)
{
AudioBufferPlayer* player = (AudioBufferPlayer*) inUserData;
if (player.playing){
[player.delegate audioBufferPlayer:player fillBuffer:inBuffer format:player.audioFormat];
AudioQueueEnqueueBuffer(inAudioQueue, inBuffer, 0, NULL);
}
}
...and here is the code for fillBuffer, where the audio is synthesised
- (int)fillBuffer:(void*)buffer frames:(int)frames
{
SInt16* p = (SInt16*)buffer;
// Loop through the frames (or "block size"), then consider each sample for each tone.
for (int f = 0; f < frames; ++f)
{
float m = 0.0f; // the mixed value for this frame
for (int n = 0; n < MAX_TONE_EVENTS; ++n)
{
if (tones[n].state == STATE_INACTIVE) // only active tones
continue;
// recalculate a 30sec envelope and place in a look-up table
// Longer notes need to interpolate through the envelope
int a = (int)tones[n].envStep; // integer part (like a floored float)
float b = tones[n].envStep - a; // decimal part (like doing a modulo)
// c allows us to calculate if we need to wrap around
int c = a + 1; // (like a ceiling of integer part)
if (c >= envLength) c = a; // don't wrap around
/////////////// LOOK UP ENVELOPE TABLE /////////////////
// uses table look-up with interpolation for both level and pitch envelopes
// 'b' is a value interpolated between 2 successive samples 'a' and 'c')
// first, read values for the level envelope
float envValue = (1.0f - b)*tones[n].levelEnvelope[a] + b*tones[n].levelEnvelope[c];
// then the pitch envelope
float pitchFactorValue = (1.0f - b)*tones[n].pitchEnvelope[a] + b*tones[n].pitchEnvelope[c];
// Advance envelope pointer one step
tones[n].envStep += tones[n].envDelta;
// Turn note off at the end of the envelope.
if (((int)tones[n].envStep) >= envLength){
tones[n].state = STATE_INACTIVE;
continue;
}
// Precalculated Sine look-up table
a = (int)tones[n].phase; // integer part
b = tones[n].phase - a; // decimal part
c = a + 1;
if (c >= sineLength) c -= sineLength; // wrap around
///////////////// LOOK UP OF SINE TABLE ///////////////////
float sineValue = (1.0f - b)*sine[a] + b*sine[c];
// Wrap round when we get to the end of the sine look-up table.
tones[n].phase += (tones[n].frequency * pitchFactorValue); // calculate frequency for each point in the pitch envelope
if (((int)tones[n].phase) >= sineLength)
tones[n].phase -= sineLength;
////////////////// RAMP NOTE OFF IF IT HAS BEEN UNPRESSED
if (tones[n].state == STATE_UNPRESSED) {
tones[n].gain -= 0.0001;
if ( tones[n].gain <= 0 ) {
tones[n].state = STATE_INACTIVE;
}
}
//////////////// FINAL SAMPLE VALUE ///////////////////
float s = sineValue * envValue * gain * tones[n].gain;
// Clip the signal, if needed.
if (s > 1.0f) s = 1.0f;
else if (s < -1.0f) s = -1.0f;
// Add the sample to the out-going signal
m += s;
}
// Write the sample mix to the buffer as a 16-bit word.
p[f] = (SInt16)(m * 0x7FFF);
}
return frames;
}
I'm not sure whether it's a red herring, but I hit NaN in several debug registers. It seems to happen while calculating the phase increment for the sine look-up in fillBuffer (see above). That calculation is done for up to a dozen partials per sample at a sample rate of 44.1 kHz, and it worked on an iPhone 4 under iOS 4. I'm running on a simulator under iOS 9. The only changes I have made are described in this post!
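In case it helps anyone reproduce this, here is a minimal sketch of the kind of debug-only check that could confirm whether a NaN phase is what blows up the table index (tones[n].phase, sine[] and sineLength are from the listing above; the isnan test and the clamp are my own addition, not part of the original code):
// Debug-only guard, placed just before the sine table look-up in fillBuffer.
// A NaN phase cast to int yields a garbage index, which would match the
// EXC_BAD_ACCESS on the sine[] read. isnan() comes from <math.h>.
if (isnan(tones[n].phase) || tones[n].phase < 0.0f || tones[n].phase >= sineLength) {
    NSLog(@"bad phase %f for tone %d", tones[n].phase, n); // logging is too slow for release builds
    tones[n].phase = 0.0f;                                 // clamp so the index stays in range
}
a = (int)tones[n].phase;   // integer part, now guaranteed to index sine[] safely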
It turned out that my NaN problem was not directly related to Core Audio. It was caused by an edge condition introduced by a change in another area of my code. The real problem was an attempted division by zero while calculating the duration of a sound envelope in real time.
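To illustrate the edge condition (the offending code lives in another part of my app, so durationInSeconds and durationInSamples here are hypothetical names; envLength, envDelta and sampleRate match the listings above):
// Hypothetical reconstruction of the failure mode: if the computed note
// duration collapses to zero, the envelope step becomes a division by zero,
// and the resulting inf/NaN propagates into envStep and eventually the phase.
float durationInSamples = durationInSeconds * sampleRate;
if (durationInSamples > 0.0f) {
    tones[n].envDelta = (float)envLength / durationInSamples; // normal case
} else {
    tones[n].envDelta = (float)envLength;                     // degenerate note: finish the envelope in one step
}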
However, while trying to pin down the cause of that problem, I satisfied myself that my pre-iOS 7 audio session had been replaced by a working AVFoundation-based setup. Thanks are due to the source of my original code, Matthijs Hollemans, and also to Mario Diana, whose blog explained the changes that were needed.
At first, the sound levels on my iPhone were noticeably lower than on the simulator; foundry's answer here resolved that. I found it necessary to include those improvements by replacing Mario's
- (BOOL)setUpAudioSession
with foundry's
- (void)configureAVAudioSession
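For reference, a condensed sketch of what that AVFoundation-based setup looks like in my app (my own outline rather than foundry's exact code, with error handling abbreviated):
// Replaces the deprecated AudioSession C API with AVAudioSession.
#import <AVFoundation/AVFoundation.h>

- (void)configureAVAudioSession
{
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *error = nil;
    // Playback category gives full-volume output and ignores the ring/silent switch.
    if (![session setCategory:AVAudioSessionCategoryPlayback error:&error]) {
        NSLog(@"Could not set session category: %@", error);
    }
    if (![session setPreferredSampleRate:44100.0 error:&error]) {
        NSLog(@"Could not set preferred sample rate: %@", error);
    }
    if (![session setActive:YES error:&error]) {
        NSLog(@"Could not activate audio session: %@", error);
    }
}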
Hopefully this may help others.