How do I play audio that doesn't follow the ringer volume?

I have a simple soundboard app with a few different categories of sounds. I'm using AudioToolbox.framework (all of my sound files are under 10 seconds and are .wav files), but I'm confused about how to make my app follow the device's media volume rather than the "Ringer" volume.

If my device is set to silent, my buttons don't play any sound, even when the device volume is turned up. However, as soon as the "Ringer" switch (on the side of the device) is flipped on, the sounds play.

I searched online and found some sources saying to switch my AVAudioSession to AVAudioSessionCategoryPlayback, so I pasted

    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    NSError *setCategoryError = nil;
    BOOL success = [audioSession setCategory:AVAudioSessionCategoryPlayback
                                       error:&setCategoryError];
    if (!success) { /* handle the error condition */ }

    NSError *activationError = nil;
    success = [audioSession setActive:YES error:&activationError];
    if (!success) { /* handle the error condition */ }

into my viewDidLoad. However, the same problem persists. I found other suggestions online, but their explanations left me confused about what I should actually do. I'm relatively new to Objective-C and to coding in general, so if you know the answer, please be specific and clear in your explanation.

I would greatly appreciate any help you can offer.

EDIT ONE: I followed Pau Senabre's suggestion and didn't notice any change. The audio still doesn't play when the ringer is silenced. Here is the current code with Pau's changes:

    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    NSError *sessionError = NULL;
    BOOL success = [[AVAudioSession sharedInstance]
        setCategory:AVAudioSessionCategoryPlayback
        withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
              error:&sessionError];
    if (!success) {
        NSLog(@"Error setting category Audio Session: %@",
              [sessionError localizedDescription]);
    }

    NSError *activationError = nil;
    success = [audioSession setActive:YES error:&activationError];
    if (!success) { /* handle the error condition */ }

Any other suggestions?

EDIT TWO: I followed Daniel Storm's suggestion, which was to set my AVAudioSession's category to Playback in a slightly different way than before. He suggested doing this:

    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback
                                           error:nil];

But this still leaves my audio muted while the ringer is silenced.

What could I be doing wrong?

Here is how I'm loading the audio:

    NSURL *DariusSelectSound = [NSURL fileURLWithPath:[[NSBundle mainBundle]
        pathForResource:@"DariusSelect" ofType:@"wav"]];
    AudioServicesCreateSystemSoundID((__bridge CFURLRef)DariusSelectSound,
                                     &DariusSelectAudio);

And when I press a button, I play the sound with:

 AudioServicesPlaySystemSound(DariusSelectAudio); 

My device volume is at about half, and my ringer volume is roughly the same. However, adjusting my device volume does nothing, while adjusting the ringer volume makes the audio louder or quieter.

EDIT THREE: Problem solved, and there are multiple solutions. Pau Senabre's and Daniel Storm's suggestions both work. My problem was that I was trying to use AudioServices when I needed AVAudioPlayer. Sorry for such a simple mistake, and thank you very much for your help!
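For anyone who finds this later, here is a minimal sketch of the combination that works for me. The DariusSelect.wav resource is from my project, and selectPlayer is an assumed strong AVAudioPlayer property you would add yourself:

    #import <AVFoundation/AVFoundation.h>

    // In viewDidLoad: the Playback category makes audio follow the media
    // volume and ignore the ringer/silent switch.
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback
                                           error:nil];
    [[AVAudioSession sharedInstance] setActive:YES error:nil];

    // Load the sound with AVAudioPlayer instead of System Sound Services.
    // (selectPlayer is an assumed strong property, so the player survives
    // until the sound finishes playing.)
    NSURL *soundURL = [[NSBundle mainBundle] URLForResource:@"DariusSelect"
                                              withExtension:@"wav"];
    self.selectPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:soundURL
                                                               error:nil];
    [self.selectPlayer prepareToPlay];

    // On the button press:
    [self.selectPlayer play];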

Set your AVAudioSession category to Playback. You'll want to place this in your viewDidLoad.

    // Play sound when silent mode on
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback
                                           error:nil];

AVAudioSession Class Reference

It's not entirely clear to me, but it seems you want to override the volume setting of the system speaker, which at the end of the chain is cut (clipped) to the ringer volume level.

Have a read here about AVAudioSession's setInputGain:

    /* A value defined over the range [0.0, 1.0], with 0.0 corresponding to the
       lowest analog gain setting and 1.0 corresponding to the highest analog
       gain setting.
       Attempting to set values outside of the defined range will result in the
       value being "clamped" to a valid input.
       This is a global input gain setting that applies to the current input
       source for the entire system. When no applications are using the input
       gain control, the system will restore the default input gain setting for
       the input source. Note that some audio accessories, such as USB devices,
       may not have a default value.
       This property is only valid if inputGainSettable is true.
       Note: inputGain is key-value observable. */
    - (BOOL)setInputGain:(float)gain error:(NSError **)outError NS_AVAILABLE_IOS(6_0);
    @property(readonly) float inputGain NS_AVAILABLE_IOS(6_0); /* value in range [0.0, 1.0] */
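Note that inputGain affects only the recording side. As a minimal sketch, if you did want to adjust it, the call would look like this (valid only when inputGainSettable reports YES):

    AVAudioSession *session = [AVAudioSession sharedInstance];
    if (session.inputGainSettable) {
        NSError *gainError = nil;
        // Values outside [0.0, 1.0] are clamped by the system.
        if (![session setInputGain:0.8f error:&gainError]) {
            NSLog(@"Could not set input gain: %@",
                  [gainError localizedDescription]);
        }
    }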

That said, there are several techniques for changing the volume of your sounds, depending on whether you are using AVAudioSession or OpenAL.
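For the OpenAL route, per-source and master gain are plain C calls. A minimal sketch of the standard boilerplate (none of this is from your code):

    #import <OpenAL/al.h>
    #import <OpenAL/alc.h>

    // Open the default device and make a context current.
    ALCdevice  *device  = alcOpenDevice(NULL);
    ALCcontext *context = alcCreateContext(device, NULL);
    alcMakeContextCurrent(context);

    // Generate a source; you would attach a buffer of samples to it.
    ALuint source;
    alGenSources(1, &source);

    alSourcef(source, AL_GAIN, 0.5f); // per-source gain, 1.0 = unattenuated
    alListenerf(AL_GAIN, 1.0f);       // master gain for all sources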

Let's suppose you are using an AUGraph, and in that AUGraph you have a mixer like this:

    // MIXER unit ASBD
    AudioComponentDescription MixerUnitDescription;
    MixerUnitDescription.componentType         = kAudioUnitType_Mixer;
    MixerUnitDescription.componentSubType      = kAudioUnitSubType_MultiChannelMixer;
    MixerUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    MixerUnitDescription.componentFlags        = 0;
    MixerUnitDescription.componentFlagsMask    = 0;

    ///
    /// NODE 6: MIXER NODE
    ///
    err = AUGraphAddNode(processingGraph,
                         &MixerUnitDescription,
                         &mixerNode);
    if (err) {
        NSLog(@"mixerNode err = %d", (int)err);
        return NO;
    }
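To make the snippets below concrete, here is a sketch of the usual wiring that follows; mixerUnit is the handle used in the next method, and ioNode is an assumed remote I/O node created the same way as the mixer node:

    // Open the graph (instantiates the underlying AudioUnits), then grab
    // the mixer unit so its parameters can be set directly.
    err = AUGraphOpen(processingGraph);
    err = AUGraphNodeInfo(processingGraph, mixerNode, NULL, &mixerUnit);

    // Mixer output 0 feeds the remote I/O node's input element 0 (speaker).
    err = AUGraphConnectNodeInput(processingGraph, mixerNode, 0, ioNode, 0);

    err = AUGraphInitialize(processingGraph);
    err = AUGraphStart(processingGraph);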

You would then do:

    // sets the overall mixer output volume
    - (void)setOutputVolume:(AudioUnitParameterValue)value
    {
        OSStatus result;
        result = AudioUnitSetParameter(mixerUnit,
                                       kMultiChannelMixerParam_Volume,
                                       kAudioUnitScope_Output,
                                       0,
                                       value,
                                       0);
        if (result) {
            NSLog(@"AudioUnitSetParameter kMultiChannelMixerParam_Volume Output result %d %08X %4.4s\n",
                  (int)result, (unsigned int)result, (char *)&result);
            return;
        }
    }
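Calling it is then a one-liner; for example, to halve your app's own output level regardless of the ringer position:

    [self setOutputVolume:0.5f]; // 0.0 = silent, 1.0 = full mixer output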

At this point I assume you have at least an AUGraph like this one:

The RemoteIO audio unit:

                                 -------------------------
                                 | i                   o |
        -- BUS 1 -- from mic --> | n     REMOTE I/O    u | -- BUS 1 -- to app -->
                                 | p       AUDIO       t |
        -- BUS 0 -- from app --> | u       UNIT        p | -- BUS 0 -- to speaker -->
                                 | t                   u |
                                 |                     t |
                                 -------------------------

Of course, if you need this whole chain, you will also have to take care of the input:

    // sets the input volume for a specific bus
    - (void)setInputVolume:(UInt32)inputBus value:(AudioUnitParameterValue)value
    {
        micGainLevel = value;

        OSStatus result;
        result = AudioUnitSetParameter(mixerUnit,
                                       kMultiChannelMixerParam_Volume,
                                       kAudioUnitScope_Input,
                                       inputBus,
                                       value,
                                       0);
        if (result) {
            NSLog(@"AudioUnitSetParameter kMultiChannelMixerParam_Volume Input result %d %08X %4.4s\n",
                  (int)result, (unsigned int)result, (char *)&result);
        }
    }

That said, since you have WAV files, you can run each file through a filter like this and set your gain sample by sample:

    WavInFile *inFile = new WavInFile( cString );

    // get some audio file channels info
    float samplerate = (float)(*inFile).getSampleRate();
    int   nChannels  = (int)(*inFile).getNumChannels();
    float nSamples   = (*inFile).getNumSamples();
    float duration   = (double)(*inFile).getLengthMS() / (double)1000;
    double seconds   = 0;

    while (inFile->eof() == 0) {
        int num, samples;

        // Read a chunk of samples from the input file
        num = inFile->read(shortBuffer, BUFF_SIZE);
        samples = num / nChannels;

        seconds = (double)(*inFile).getElapsedMS() / (double)1000;

        float currentFrequency = 0.0f;
        SInt16ToDouble(shortBuffer, doubleBuffer, samples);
        currentFrequency = dywapitch_computepitch(&(pitchtrackerFile),
                                                  doubleBuffer, 0, samples);
        currentFrequency = samplerate / (float)44100.0f * currentFrequency;

        // here you can change things like the pitch at this point,
        // or the amplitude, i.e. the volume - see later
    }
    //// eof

And this is the amplifier:

    short amplifyPCMSInt16(int value, int dbGain, bool clampValue)
    {
        /* To increase the gain of a sample by X dB, multiply the PCM value by
         * pow(2.0, X/6.014), i.e. gain +6dB means doubling the value of the
         * sample, -6dB means halving it.
         */
        int newValue = (int)(pow(2.0, ((double)dbGain) / 6.014) * value);
        if (clampValue) {
            if (newValue > 32767)
                newValue = 32767;
            else if (newValue < -32768)
                newValue = -32768;
        }
        return (short)newValue;
    }
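Inside the read loop above, applying it to a freshly read chunk would look something like this (shortBuffer, samples and nChannels come from the loop shown earlier):

    // Attenuate the chunk by 6 dB, clamping to the SInt16 range
    // to avoid wrap-around distortion.
    for (int i = 0; i < samples * nChannels; i++) {
        shortBuffer[i] = amplifyPCMSInt16(shortBuffer[i], -6, true);
    }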

Of course, this means reading the file and modifying it until EOF is reached, so normally you would do this only if you really need the modified file stored; otherwise you are better off setting up an AURenderCallbackStruct and doing the processing there:

    AURenderCallbackStruct lineInrCallbackStruct = {};
    lineInrCallbackStruct.inputProc = &micLineInCallback;
    lineInrCallbackStruct.inputProcRefCon = (void *)self;
    err = AudioUnitSetProperty(vfxUnit,
                               kAudioUnitProperty_SetRenderCallback,
                               kAudioUnitScope_Global,
                               0,
                               &lineInrCallbackStruct,
                               sizeof(lineInrCallbackStruct));

At this point you can control the audio in real time, right here:

    static OSStatus micLineInCallback(void *inRefCon,
                                      AudioUnitRenderActionFlags *ioActionFlags,
                                      const AudioTimeStamp *inTimeStamp,
                                      UInt32 inBusNumber,
                                      UInt32 inNumberFrames,
                                      AudioBufferList *ioData)
    {
        // process ioData's samples here, in real time
        return noErr;
    }

where you can multiply the left and right channel samples of your buffer:

    SInt16 *sampleBufferLeft  = THIS.conversionBufferLeft;
    SInt16 *sampleBufferRight = THIS.conversionBufferRight;
    SInt16 *sampleBuffer;
    double *doubleBuffer = THIS.doubleBufferMono;

    // start the actual processing
    inSamplesLeft = (SInt32 *)ioData->mBuffers[0].mData; // left channel
    fixedPointToSInt16(inSamplesLeft, sampleBufferLeft, inNumberFrames);

    if (isStereo) {
        inSamplesRight = (SInt32 *)ioData->mBuffers[1].mData; // right channel
        fixedPointToSInt16(inSamplesRight, sampleBufferRight, inNumberFrames);
        for (i = 0; i < inNumberFrames; i++) {
            // combine left and right channels into left
            sampleBufferLeft[i] = (SInt16)((.5 * (float)sampleBufferLeft[i]) +
                                           (.5 * (float)sampleBufferRight[i]));
        }
    }
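From there, changing the volume is just one multiply per sample before the data goes back out; a sketch reusing the amplifyPCMSInt16 helper from above:

    // Boost the mixed buffer by +3 dB in real time.
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        sampleBufferLeft[i] = amplifyPCMSInt16(sampleBufferLeft[i], 3, true);
    }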

There is some work to do to put everything together, but here you have the L/R channels and can scale the samples exactly as shown in the amplifyPCMSInt16 function.

Try setting your AVAudioSession category with the AVAudioSessionCategoryOptionDefaultToSpeaker option:

    NSError *sessionError = NULL;
    BOOL success = [[AVAudioSession sharedInstance]
        setCategory:AVAudioSessionCategoryPlayback
        withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
              error:&sessionError];
    if (!success) {
        NSLog(@"Error setting category Audio Session: %@",
              [sessionError localizedDescription]);
    }

You can also try adding an audio unit, configuring it with an AudioComponentDescription and an AudioStreamBasicDescription via AudioUnitSetProperty:

    // Create Audio Unit
    AudioComponentDescription cd = {
        .componentManufacturer = kAudioUnitManufacturer_Apple,
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_RemoteIO,
        .componentFlags        = 0,
        .componentFlagsMask    = 0
    };
    AudioComponent component = AudioComponentFindNext(NULL, &cd);
    OSStatus result = AudioComponentInstanceNew(component, &_ioUnit);
    NSCAssert2(result == noErr,
               @"AudioComponentInstanceNew failed. Error code: %d '%.4s'",
               (int)result, (const char *)(&result));

    AudioStreamBasicDescription asbd = {
        .mFormatID         = kAudioFormatLinearPCM,
        .mFormatFlags      = kAudioFormatFlagIsSignedInteger |
                             kAudioFormatFlagIsPacked |
                             kAudioFormatFlagsNativeEndian |
                             kAudioFormatFlagIsNonInterleaved,
        .mChannelsPerFrame = 2,
        .mBytesPerPacket   = sizeof(SInt16),
        .mFramesPerPacket  = 1,
        .mBytesPerFrame    = sizeof(SInt16),
        .mBitsPerChannel   = 8 * sizeof(SInt16),
        .mSampleRate       = 1 // set this to your actual sample rate, e.g. 44100.0
    };
    result = AudioUnitSetProperty(_ioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  0,
                                  &asbd,
                                  sizeof(asbd));
    NSCAssert2(result == noErr,
               @"Set Stream Format failed. Error code: %d '%.4s'",
               (int)result, (const char *)(&result));

    // Set Audio Callback
    AURenderCallbackStruct ioRemoteInput;
    ioRemoteInput.inputProc = audioCallback;
    result = AudioUnitSetProperty(_ioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Input,
                                  0,
                                  &ioRemoteInput,
                                  sizeof(ioRemoteInput));
    NSCAssert2(result == noErr,
               @"Could not set Render Callback. Error code: %d '%.4s'",
               (int)result, (const char *)(&result));

    // Initialize Audio Unit
    result = AudioUnitInitialize(_ioUnit);
    NSCAssert2(result == noErr,
               @"Initializing Audio Unit failed. Error code: %d '%.4s'",
               (int)result, (const char *)(&result));
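Once initialized, the unit still has to be started before the render callback begins to fire; assuming the _ioUnit from above:

    // Start pulling audio through the render callback.
    result = AudioOutputUnitStart(_ioUnit);
    NSCAssert2(result == noErr,
               @"Starting Audio Unit failed. Error code: %d '%.4s'",
               (int)result, (const char *)(&result));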