Detect whether headphones (not a microphone) are plugged into an iOS device

I need to change my audio behavior depending on whether headphones are plugged in. I know about kAudioSessionProperty_AudioInputAvailable, which tells me whether a microphone is available, but I want to test for any headphones, not just headsets with a built-in microphone. Is this possible?

Here is a method of my own. It is a slightly modified version of one found on this site: http://www.iphonedevsdk.com/forum/iphone-sdk-development/9982-play-record-same-time.html

    - (BOOL)isHeadsetPluggedIn {
        UInt32 routeSize = sizeof(CFStringRef);
        CFStringRef route;
        OSStatus error = AudioSessionGetProperty(kAudioSessionProperty_AudioRoute,
                                                 &routeSize,
                                                 &route);
        /* Known values of route:
         * "Headset"
         * "Headphone"
         * "Speaker"
         * "SpeakerAndMicrophone"
         * "HeadphonesAndMicrophone"
         * "HeadsetInOut"
         * "ReceiverAndMicrophone"
         * "Lineout"
         */
        if (!error && (route != NULL)) {
            NSString *routeStr = (NSString *)route;
            NSRange headphoneRange = [routeStr rangeOfString:@"Head"];
            if (headphoneRange.location != NSNotFound) {
                return YES;
            }
        }
        return NO;
    }
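
For completeness, that same C API can also notify you when the route changes (for example when headphones are plugged in or pulled out). This is only a sketch, assuming the audio session has already been initialized; the names myAudioRouteChangeListener and registerRouteChangeListener are illustrative, not part of the answer above:

    #import <AudioToolbox/AudioToolbox.h>

    // Called by the system whenever the audio route changes.
    static void myAudioRouteChangeListener(void *inClientData,
                                           AudioSessionPropertyID inID,
                                           UInt32 inDataSize,
                                           const void *inData) {
        if (inID == kAudioSessionProperty_AudioRouteChange) {
            // Re-run the headset check here, e.g. on the object passed
            // in via inClientData.
        }
    }

    // Register once, after AudioSessionInitialize has been called.
    static void registerRouteChangeListener(void) {
        AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange,
                                        myAudioRouteChangeListener,
                                        NULL);
    }

Note that, as mentioned further down, this C-based audio session API was deprecated in iOS 7.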

Here is a solution based on rob mayoff's comment:

    - (BOOL)isHeadsetPluggedIn {
        AVAudioSessionRouteDescription *route = [[AVAudioSession sharedInstance] currentRoute];
        BOOL headphonesLocated = NO;
        for (AVAudioSessionPortDescription *portDescription in route.outputs) {
            headphonesLocated |= [portDescription.portType isEqualToString:AVAudioSessionPortHeadphones];
        }
        return headphonesLocated;
    }

Just link against the AVFoundation framework.
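
As an illustration of how that check might be used (the category choice below is my own example, not part of the answer), you could default playback to the built-in speaker only when no headphones are attached, assuming this method lives on the same class as isHeadsetPluggedIn:

    - (void)configureAudioSession {
        NSError *error = nil;
        // Route audio to headphones when present, otherwise prefer the speaker.
        AVAudioSessionCategoryOptions options = [self isHeadsetPluggedIn]
            ? 0
            : AVAudioSessionCategoryOptionDefaultToSpeaker;
        [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                         withOptions:options
                                               error:&error];
    }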

Just a heads-up for future readers of this post.

Most of the AVToolbox methods have been deprecated with the release of iOS 7, so the audio route listeners are now largely redundant.
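
On iOS 6 and later, the AVAudioSession route-change notification covers the listener use case. A minimal sketch, assuming the isHeadsetPluggedIn method from above; the observer and selector names are my own:

    // Register once, e.g. in -viewDidLoad (remove the observer in -dealloc).
    - (void)observeRouteChanges {
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(audioRouteChanged:)
                                                     name:AVAudioSessionRouteChangeNotification
                                                   object:nil];
    }

    // Fired whenever headphones are plugged in or pulled out.
    - (void)audioRouteChanged:(NSNotification *)notification {
        BOOL headphones = [self isHeadsetPluggedIn];
        NSLog(@"Headphones connected: %d", headphones);
    }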

I started with the code jpsetung gave above, but there were a few issues for my use case:

  • No evidence in the documentation of anything called kAudioSessionProperty_AudioRoute
  • route is leaked
  • No audio session check
  • A string check for headphones rather than logical awareness of the route type
  • I was more interested in whether the iPhone was using its speaker, with "headphones" meaning "anything but the speaker". I felt that leaving out options like "bluetooth", "airplay", or "lineout" was risky.

This implementation broadens the check to allow any specified type of output:

    BOOL isAudioRouteAvailable(CFStringRef routeType)
    {
        /* As of iOS 5:
         kAudioSessionOutputRoute_LineOut;
         kAudioSessionOutputRoute_Headphones;
         kAudioSessionOutputRoute_BluetoothHFP;
         kAudioSessionOutputRoute_BluetoothA2DP;
         kAudioSessionOutputRoute_BuiltInReceiver;
         kAudioSessionOutputRoute_BuiltInSpeaker;
         kAudioSessionOutputRoute_USBAudio;
         kAudioSessionOutputRoute_HDMI;
         kAudioSessionOutputRoute_AirPlay;
         */

        //Prep
        BOOL foundRoute = NO;
        CFDictionaryRef description = NULL;

        //Session
        static dispatch_once_t onceToken;
        dispatch_once(&onceToken, ^{
            AudioSessionInitialize(NULL, NULL, NULL, NULL);
        });

        //Property
        UInt32 propertySize;
        AudioSessionGetPropertySize(kAudioSessionProperty_AudioRouteDescription, &propertySize);
        OSStatus error = AudioSessionGetProperty(kAudioSessionProperty_AudioRouteDescription,
                                                 &propertySize,
                                                 &description);
        if ( !error && description ) {
            CFArrayRef outputs = CFDictionaryGetValue(description, kAudioSession_AudioRouteKey_Outputs);
            CFIndex count = outputs ? CFArrayGetCount(outputs) : 0;
            if ( outputs && count ) {
                for (CFIndex i = 0; i < count; i++) {
                    CFDictionaryRef route = CFArrayGetValueAtIndex(outputs, i);
                    CFStringRef type = CFDictionaryGetValue(route, kAudioSession_AudioRouteKey_Type);
                    NSLog(@"Got audio route %@", type);

                    //Audio route type
                    if ( CFStringCompare(type, routeType, 0) == kCFCompareEqualTo ) {
                        foundRoute = YES;
                        break;
                    }
                }
            }
        } else if ( error ) {
            NSLog(@"Audio route error %ld", (long)error);
        }

        //Cleanup
        if ( description ) {
            CFRelease(description);
        }

        //Done
        return foundRoute;
    }

Use it like this:

    if ( isAudioRouteAvailable(kAudioSessionOutputRoute_BuiltInSpeaker) ) {
        //Do great things...
    }
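
Following the "anything but the speaker" reasoning above, a small wrapper could combine the checks; this is just a sketch, and the helper name is mine:

    // Illustrative helper: treat "external output" as anything other than the
    // built-in speaker or receiver (headphones, Bluetooth, AirPlay, line out, ...).
    BOOL isOutputExternal(void) {
        return !isAudioRouteAvailable(kAudioSessionOutputRoute_BuiltInSpeaker)
            && !isAudioRouteAvailable(kAudioSessionOutputRoute_BuiltInReceiver);
    }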