Can NaN cause the occasional crashes in this Core Audio iOS app?
My first app synthesises musical audio from a sine look-up table using methods that have been deprecated since iOS 6. I have just revised it to address the AudioSession-related warnings, based on this blog and Apple's guide to the AVFoundation framework. The audio session warnings are now resolved and the app produces audio just as it did before. It currently runs under iOS 9.
However, the app now crashes occasionally with no obvious cause. I checked this SO post, but it seems to deal with accessing rather than generating raw audio data, so perhaps it isn't a timing issue. I suspect there is a buffering problem, but I need to understand what it might be before I change or fine-tune anything in the code.
I have a deadline to get the revised app to users, so I'd be very grateful to hear from someone who has dealt with a similar problem.
Here is the problem. The app goes into the debugger, with the simulator reporting:
com.apple.coreaudio.AQClient (8):EXC_BAD_ACCESS (code=1, address=0xffffffff10626000)
In the Debug Navigator, Thread 8 (com.apple.coreaudio.AQClient (8)) reports:
0 -[Synth fillBuffer:frames:]
1 -[PlayView audioBufferPlayer:fillBuffer:format:]
2 playCallback
This line of code in fillBuffer is highlighted:
float sineValue = (1.0f - b)*sine[a] + b*sine[c];
… as is this line of code in audioBufferPlayer:
int packetsWritten = [synth fillBuffer:buffer->mAudioData frames:packetsPerBuffer];
… and in playCallback:
[player.delegate audioBufferPlayer:player fillBuffer:inBuffer format:player.audioFormat];
Here is the code for audioBufferPlayer (the delegate method, essentially the same as in the demo mentioned above):
```objc
- (void)audioBufferPlayer:(AudioBufferPlayer*)audioBufferPlayer
               fillBuffer:(AudioQueueBufferRef)buffer
                   format:(AudioStreamBasicDescription)audioFormat
{
    [synthLock lock];
    int packetsPerBuffer = buffer->mAudioDataBytesCapacity / audioFormat.mBytesPerPacket;
    int packetsWritten   = [synth fillBuffer:buffer->mAudioData frames:packetsPerBuffer];
    buffer->mAudioDataByteSize = packetsWritten * audioFormat.mBytesPerPacket;
    [synthLock unlock];
}
```
… which is initialised in myViewController:
```objc
- (id)init
{
    if ((self = [super init])) {
        // The audio buffer is managed (filled up etc.) within its own thread (Audio Queue thread).
        // Since we are also responding to changes from the GUI, we need a lock so both threads
        // do not attempt to change the same value independently.
        synthLock = [[NSLock alloc] init];

        // Synth and the AudioBufferPlayer must use the same sample rate.
        float sampleRate = 44100.0f;

        // Initialise synth to fill the audio buffer with audio samples.
        synth = [[Synth alloc] initWithSampleRate:sampleRate];

        // Initialise note buttons.
        buttons = [[NSMutableArray alloc] init];

        // Initialise the audio buffer.
        player = [[AudioBufferPlayer alloc] initWithSampleRate:sampleRate
                                                      channels:1
                                                bitsPerChannel:16
                                              packetsPerBuffer:1024];
        player.delegate = self;
        player.gain = 0.9f;

        [[AVAudioSession sharedInstance] setActive:YES error:nil];
    }
    return self;
} // initialisation
```
… and here is playCallback:
```objc
static void playCallback(void* inUserData,
                         AudioQueueRef inAudioQueue,
                         AudioQueueBufferRef inBuffer)
{
    AudioBufferPlayer* player = (AudioBufferPlayer*)inUserData;
    if (player.playing) {
        [player.delegate audioBufferPlayer:player fillBuffer:inBuffer format:player.audioFormat];
        AudioQueueEnqueueBuffer(inAudioQueue, inBuffer, 0, NULL);
    }
}
```
… and here is the code for fillBuffer, where the audio is synthesised:
```objc
- (int)fillBuffer:(void*)buffer frames:(int)frames
{
    SInt16* p = (SInt16*)buffer;

    // Loop through the frames (or "block size"), then consider each sample for each tone.
    for (int f = 0; f < frames; ++f) {
        float m = 0.0f; // the mixed value for this frame

        for (int n = 0; n < MAX_TONE_EVENTS; ++n) {
            if (tones[n].state == STATE_INACTIVE) // only active tones
                continue;

            // recalculate a 30sec envelope and place in a look-up table
            // Longer notes need to interpolate through the envelope
            int a   = (int)tones[n].envStep;  // integer part (like a floored float)
            float b = tones[n].envStep - a;   // decimal part (like doing a modulo)
            // c allows us to calculate if we need to wrap around
            int c = a + 1;                    // (like a ceiling of integer part)
            if (c >= envLength) c = a;        // don't wrap around

            /////////////// LOOK UP ENVELOPE TABLE /////////////////
            // uses table look-up with interpolation for both level and pitch envelopes
            // 'b' is a value interpolated between 2 successive samples 'a' and 'c'
            // first, read values for the level envelope
            float envValue = (1.0f - b)*tones[n].levelEnvelope[a] + b*tones[n].levelEnvelope[c];
            // then the pitch envelope
            float pitchFactorValue = (1.0f - b)*tones[n].pitchEnvelope[a] + b*tones[n].pitchEnvelope[c];

            // Advance envelope pointer one step
            tones[n].envStep += tones[n].envDelta;

            // Turn note off at the end of the envelope.
            if (((int)tones[n].envStep) >= envLength) {
                tones[n].state = STATE_INACTIVE;
                continue;
            }

            // Precalculated sine look-up table
            a = (int)tones[n].phase;              // integer part
            b = tones[n].phase - a;               // decimal part
            c = a + 1;
            if (c >= sineLength) c -= sineLength; // wrap around

            ///////////////// LOOK UP OF SINE TABLE ///////////////////
            float sineValue = (1.0f - b)*sine[a] + b*sine[c];

            // Wrap round when we get to the end of the sine look-up table.
            tones[n].phase += (tones[n].frequency * pitchFactorValue); // frequency for each point in the pitch envelope
            if (((int)tones[n].phase) >= sineLength)
                tones[n].phase -= sineLength;

            ////////////////// RAMP NOTE OFF IF IT HAS BEEN UNPRESSED //////////////////
            if (tones[n].state == STATE_UNPRESSED) {
                tones[n].gain -= 0.0001;
                if (tones[n].gain <= 0) {
                    tones[n].state = STATE_INACTIVE;
                }
            }

            //////////////// FINAL SAMPLE VALUE ///////////////////
            float s = sineValue * envValue * gain * tones[n].gain;

            // Clip the signal, if needed.
            if (s > 1.0f) s = 1.0f;
            else if (s < -1.0f) s = -1.0f;

            // Add the sample to the out-going signal
            m += s;
        }

        // Write the sample mix to the buffer as a 16-bit word.
        p[f] = (SInt16)(m * 0x7FFF);
    }
    return frames;
}
```
I'm not sure whether it is a red herring, but I have noticed NaN in several debug registers. It appears to happen while calculating the phase increment for the sine look-up in fillBuffer (see above). That calculation is performed at a 44.1 kHz sample rate for up to a dozen or so tones per sample, and it ran in real time on an iPhone 4 under iOS 4. I am now running on the iOS 9 simulator, and the only changes I have made are the ones described in this post!
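To see why a single NaN in that phase-increment calculation could explain a crash that happens much later: NaN propagates through every subsequent addition, so the phase accumulator never recovers, and `(int)phase` (which fillBuffer uses to index the sine table) then invokes undefined behaviour. A minimal C sketch, with a hypothetical function name and made-up values:

```c
#include <math.h>

/* Demonstrates how one bad factor (e.g. a pitchFactorValue derived from
 * a division by zero elsewhere) permanently poisons the accumulated phase. */
static int phase_goes_nan(float frequency, float pitchFactor, int steps)
{
    float phase = 0.0f;
    for (int i = 0; i < steps; ++i)
        phase += frequency * pitchFactor; /* one NaN factor is enough */
    return isnan(phase) ? 1 : 0;
}
```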
My NaN problem turned out not to be directly related to Core Audio. It was caused by an edge condition introduced by a change in another area of my code. The real problem was an attempted division by zero while calculating the duration of the sound envelope in real time.
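The failure mode is easy to reproduce: in IEEE-754 float arithmetic, dividing by a zero duration yields inf (or NaN for 0/0), and either value then flows into the envelope step and from there into the phase. A minimal sketch of the kind of guard involved (the names `env_delta` and `durationSeconds` are illustrative, not the exact code):

```c
#include <math.h>

/* Compute how many envelope steps to advance per audio sample,
 * guarding the degenerate case where the note duration is zero. */
static float env_delta(float envLength, float durationSeconds, float sampleRate)
{
    float totalSamples = durationSeconds * sampleRate;
    if (totalSamples <= 0.0f)
        return envLength;            /* degenerate note: jump straight to the end */
    return envLength / totalSamples; /* normal case: finite, positive step */
}
```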
However, while trying to pin down the cause of the problem, I did satisfy myself that my pre-iOS 7 audio session had been correctly replaced by a working AVFoundation-based setup. Thanks go to Matthijs Hollemans for my initial code, and to Mario Diana, whose blog explained the changes needed.
At first, the sound level on my iPhone was significantly lower than the sound level on the simulator, a problem raised here by foundry. I found it necessary to include foundry's improvements by replacing Mario's
- (BOOL)setUpAudioSession
with foundry's
- (void)configureAVAudioSession
Hopefully this might help someone else.