Implementing Nuance speech recognition in Swift: can't listen for the onResult, onError, … events

My Nuance speech recognition project has two parts: a module's (Objective-C) .h file and a ViewController (Swift).

I want to set up a SpeechRecon object in my Swift ViewController and listen for its onBegin, onStop, … methods.

The only way I can get it to compile is to initialize the SpeechRecon object with nil as the delegate parameter. That is obviously no good, because then my onStart and onFinish functions never fire.
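To make that concrete, here is a stripped-down sketch of the workaround that does compile (the trial ID and host are the same placeholder values used in my code further down) — with a nil delegate nothing ever calls back:

    // Sketch only: passing nil as the delegate compiles,
    // but none of the recognizer callbacks will ever be invoked.
    SpeechKit.setupWithID("NMDPTRIAL_nuance_chch_com9999",
        host: "sandbox.nmdp.nuancemility.net",
        port: 443,
        useSSL: false,
        delegate: nil)

    let recognizer = SKRecognizer(type: "websearch",
        detection: 2,
        language: "eng-USA",
        delegate: nil)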

I have implemented the protocol from the SKRecognizer file and extended my ViewController class with SKReconDelegate… but if I initialize the object with "self" as the delegate, the compiler says UIViewController is not a valid type. I know I need to set up some kind of delegation between the two classes, but I'm an Android developer and my iOS skills aren't sharp enough yet. Here is the code; if I've left out something important, just let me know. I'd really appreciate your help.

    //ViewController code, in Swift
    //NO PROTOCOLS NEEDED HERE!
    class ViewController: UIViewController, SpeechKitDelegate, SKRecognizerDelegate {

        override func viewDidLoad() {
            super.viewDidLoad()
            SpeechKit.setupWithID("NMDPTRIAL_nuance_chch_com9999",
                host: "sandbox.nmdp.nuancemility.net",
                port: 443,
                useSSL: false,
                delegate: self)
            //error said "self" is of an invalid ViewController type :( because I was NOT implementing all 4 methods BELOW
        }

        //a bit further down, I have the same problem with a button
        @IBAction func btnmicaction(sender: AnyObject) {
            self.voiceSearch = SKRecognizer(type: "websearch",
                detection: 2,
                language: langType as String,
                delegate: self)
            //error said "self" is of an invalid ViewController type :( because I was NOT implementing all 4 methods BELOW
        }

        //IMPLEMENT ALL THESE 4 FUNCTIONS, AS SUGGESTED BY THE SOLUTION
        func recognizerDidBeginRecording(recognizer: SKRecognizer) {
            println("************** ReconBeganRecording")
        }

        func recognizerDidFinishRecording(recognizer: SKRecognizer) {
            println("************** ReconFinishedRecording")
        }

        func recognizer(recognizer: SKRecognizer!, didFinishWithResults results: SKRecognition!) {
            //The voice recognition process has understood something
        }

        func recognizer(recognizer: SKRecognizer!, didFinishWithError error: NSError!, suggestion: String!) {
            //an error has occurred
        }
    }

Just in case, this is my bridging header:

    #ifndef Vanilla_Bridge_h
    #define Vanilla_Bridge_h

    #import <SpeechKit/SpeechKit.h>

    #endif

UPDATE: solution below!

This is the bridging header I ended up with:

    #import <SpeechKit/SpeechKit.h>
    #import "NuanceHeader.h"

NuanceHeader.h:

    #import <Foundation/Foundation.h>

    @interface NuanceHeader : NSObject

    @end

NuanceHeader.m

 #import "NuanceHeader.h" const unsigned char SpeechKitApplicationKey[] = {...}; @implementation NuanceHeader @end 

And the UIViewController that uses all of this:

    class MyViewController: UIViewController, SpeechKitDelegate, SKRecognizerDelegate {

        var voiceSearch: SKRecognizer?

        override func viewDidLoad() {
            super.viewDidLoad()

            //Setup SpeechKit
            SpeechKit.setupWithID("...",
                host: "sandbox.nmdp.nuancemobility.net",
                port: 443,
                useSSL: false,
                delegate: self)
        }

        func someAction() {
            self.voiceSearch = SKRecognizer(type: SKSearchRecognizerType,
                detection: UInt(SKLongEndOfSpeechDetection),
                language: "eng-USA",
                delegate: self)
        }

        func recognizerDidBeginRecording(recognizer: SKRecognizer!) {
            //The recording has started
        }

        func recognizerDidFinishRecording(recognizer: SKRecognizer!) {
            //The recording has stopped
        }

        func recognizer(recognizer: SKRecognizer!, didFinishWithResults results: SKRecognition!) {
            //The voice recognition process has understood something
        }

        func recognizer(recognizer: SKRecognizer!, didFinishWithError error: NSError!, suggestion: String!) {
            //an error has occurred
        }
    }

There is nothing else to it — just check every step; this part is very straightforward.

Try let objCDelegate = self as SKRecognizerDelegate, then use objCDelegate as the delegate parameter.
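For example (a sketch only, reusing the setup from the answer above; depending on your SpeechKit version the explicit cast may or may not be necessary):

    // Hypothetical sketch: cast self to the protocol type first,
    // then pass that value as the delegate parameter.
    let objCDelegate = self as SKRecognizerDelegate

    self.voiceSearch = SKRecognizer(type: SKSearchRecognizerType,
        detection: UInt(SKLongEndOfSpeechDetection),
        language: "eng-USA",
        delegate: objCDelegate)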

Since things have changed a bit, I thought I'd add my two cents:

    var listening = false
    var transaction: SKTransaction?
    var session: SKSession?

    override func viewDidLoad() {
        super.viewDidLoad()

        session = SKSession(URL: NSURL(string: serverURL), appToken: appKey)

        let audioFormat = SKPCMFormat()
        audioFormat.sampleFormat = .SignedLinear16
        audioFormat.sampleRate = 16000
        audioFormat.channels = 1

        print("\(NSHomeDirectory())/start.mp3")

        // Attach them to the session
        session!.startEarcon = SKAudioFile(URL: NSURL(fileURLWithPath: "\(NSHomeDirectory())/start.mp3"), pcmFormat: audioFormat)
        session!.endEarcon = SKAudioFile(URL: NSURL(fileURLWithPath: "\(NSHomeDirectory())/stop.mp3"), pcmFormat: audioFormat)
    }

    @IBAction func speechButtonDidClick(sender: AnyObject) {
        if listening == false {
            transaction = session?.recognizeWithType(SKTransactionSpeechTypeDictation,
                detection: .Short,
                language: "eng-USA",
                delegate: self)
        } else {
            transaction?.stopRecording()
        }
    }

    // SKTransactionDelegate
    func transactionDidBeginRecording(transaction: SKTransaction!) {
        messageText.text = "listening"
        listening = true
        indicator.startAnimating()
        startPollingVolume()
    }

    func transactionDidFinishRecording(transaction: SKTransaction!) {
        messageText.text = "stopped"
        listening = false
        indicator.stopAnimating()
        stopPollingVolume()
    }

    func transaction(transaction: SKTransaction!, didReceiveRecognition recognition: SKRecognition!) {
        print("got something")
        //Take the best result
        if recognition.text != nil {
            speechTextField.text = recognition.text
        }
    }

    func transaction(transaction: SKTransaction!, didReceiveServiceResponse response: [NSObject : AnyObject]!) {
        print("service response")
        print(response)
    }

    func transaction(transaction: SKTransaction!, didFinishWithSuggestion suggestion: String!) {
    }

    func transaction(transaction: SKTransaction!, didFailWithError error: NSError!, suggestion: String!) {
        print("error")
        print(error)
    }

    var timer = NSTimer()
    var interval = 0.01

    func startPollingVolume() {
        timer = NSTimer.scheduledTimerWithTimeInterval(interval,
            target: self,
            selector: #selector(ViewController.pollVolume),
            userInfo: nil,
            repeats: true)
    }

    func pollVolume() {
        if transaction != nil {
            let volumeLevel: Float = transaction!.audioLevel
            audioLevelIndicator.progress = volumeLevel / 90
        }
    }

    func stopPollingVolume() {
        timer.invalidate()
        audioLevelIndicator.progress = 0
    }

Hope this helps someone!