Implementing Nuance Speech Recognition in Swift: cannot listen to onResult, onError... events
My speech-recognition project has two parts: a module's .h file (Objective-C) and a ViewController (Swift).
I want to set up a SpeechRecognition object in my Swift ViewController and listen for methods like onBegin, onStop, and so on.
The only way I can get it to compile is to initialize the SpeechRecon object with nil as the delegate parameter. Obviously that's no good, because then my onStart... and onFinish functions never fire.
I have implemented a protocol for the SKRecogniser file and extended my ViewController class with SKReconDelegate... but if I initialize the object with "self" as the delegate, the compiler says the UIViewController is not a valid class. I know I need to set up some delegation between the two classes, but I'm an Android developer and my iOS skills are still not that sharp.
Here is the code; please let me know if I've left out an important part.
I'd really appreciate your help.
//ViewController code, in SWIFT
//NO PROTOCOLS NEEDED HERE!
class ViewController: UIViewController, SpeechKitDelegate, SKRecognizerDelegate {

    override func viewDidLoad() {
        super.viewDidLoad()
        SpeechKit.setupWithID("NMDPTRIAL_nuance_chch_com9999",
            host: "sandbox.nmdp.nuancemility.net",
            port: 443,
            useSSL: false,
            delegate: self) //error said "self" is of an invalid ViewController type :( because I was NOT implementing all 4 methods BELOW:
    }

    //a bit ahead, I have the same problem with a button
    @IBAction func btnmicaction(sender: AnyObject) {
        self.voiceSearch = SKRecognizer(type: "websearch", detection: 2, language: langType as String, delegate: self) //error said "self" is of an invalid ViewController type :( because I was NOT implementing all 4 methods BELOW:
    }

    //IMPLEMENT ALL THESE 4 FUNCTIONS, AS SUGGESTED BY THE SOLUTION
    func recognizerDidBeginRecording(recognizer: SKRecognizer) {
        println("************** ReconBeganRecording")
    }

    func recognizerDidFinishRecording(recognizer: SKRecognizer) {
        println("************** ReconFinishedRecording")
    }

    func recognizer(recognizer: SKRecognizer!, didFinishWithResults results: SKRecognition!) {
        //The voice recognition process has understood something
    }

    func recognizer(recognizer: SKRecognizer!, didFinishWithError error: NSError!, suggestion: String!) {
        //an error has occurred
    }
}
Just in case, here is my bridging header (note the include guard needs a closing #endif):
#ifndef Vanilla_Bridge_h
#define Vanilla_Bridge_h
#import <SpeechKit/SpeechKit.h>
#endif
UPDATE
See the solution below!!
Try let objCDelegate = self as SKRecognizerDelegate, then use objCDelegate as the delegate parameter.
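For context, this is the general Swift rule that both the cast and the accepted answer rely on: `self` is only valid where a delegate protocol is expected if the class declares conformance and implements the required methods. Here is a minimal, SpeechKit-free sketch of the same delegate pattern; all names (`RecognizerDelegate`, `Recognizer`, `Controller`) are made up for illustration and are not part of the Nuance SDK:

```swift
import Foundation

// A stand-in delegate protocol (hypothetical, not SpeechKit's).
protocol RecognizerDelegate: AnyObject {
    func recognizerDidBeginRecording()
    func recognizerDidFinishRecording()
}

// A stand-in recognizer: its initializer only accepts a conforming delegate,
// which is why passing a non-conforming view controller fails to compile.
class Recognizer {
    weak var delegate: RecognizerDelegate?
    init(delegate: RecognizerDelegate?) { self.delegate = delegate }
    func record() {
        delegate?.recognizerDidBeginRecording()
        delegate?.recognizerDidFinishRecording()
    }
}

// The "view controller": `self` is accepted as the delegate only because the
// class declares RecognizerDelegate conformance and implements both methods.
class Controller: RecognizerDelegate {
    var events: [String] = []
    func start() {
        let recognizer = Recognizer(delegate: self)
        recognizer.record()
    }
    func recognizerDidBeginRecording() { events.append("began") }
    func recognizerDidFinishRecording() { events.append("finished") }
}
```

Once every required method is implemented, the explicit `as` cast becomes unnecessary; the compiler accepts `self` directly.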
Here is what I ended up with.
Bridging header:
#import <SpeechKit/SpeechKit.h>
#import "NuanceHeader.h"
NuanceHeader.h:
#import <Foundation/Foundation.h>
@interface NuanceHeader : NSObject
@end
NuanceHeader.m
#import "NuanceHeader.h"
const unsigned char SpeechKitApplicationKey[] = {...};
@implementation NuanceHeader
@end
As for the UIViewController that uses all of this:
class MyViewController: UIViewController, SpeechKitDelegate, SKRecognizerDelegate {

    var voiceSearch: SKRecognizer?

    override func viewDidLoad() {
        super.viewDidLoad()
        //Setup SpeechKit
        SpeechKit.setupWithID("...", host: "sandbox.nmdp.nuancemobility.net", port: 443, useSSL: false, delegate: self)
    }

    func someAction() {
        self.voiceSearch = SKRecognizer(type: SKSearchRecognizerType, detection: UInt(SKLongEndOfSpeechDetection), language: "eng-USA", delegate: self)
    }

    func recognizerDidBeginRecording(recognizer: SKRecognizer!) {
        //The recording has started
    }

    func recognizerDidFinishRecording(recognizer: SKRecognizer!) {
        //The recording has stopped
    }

    func recognizer(recognizer: SKRecognizer!, didFinishWithResults results: SKRecognition!) {
        //The voice recognition process has understood something
    }

    func recognizer(recognizer: SKRecognizer!, didFinishWithError error: NSError!, suggestion: String!) {
        //an error has occurred
    }
}
There is nothing else to it; check every step, this part is simple.
Since things have changed a bit since then, I thought I'd add my 2 cents:
var listening = false
var transaction: SKTransaction?
var session: SKSession?

override func viewDidLoad() {
    super.viewDidLoad()

    session = SKSession(URL: NSURL(string: serverURL), appToken: appKey)

    let audioFormat = SKPCMFormat()
    audioFormat.sampleFormat = .SignedLinear16
    audioFormat.sampleRate = 16000
    audioFormat.channels = 1

    print("\(NSHomeDirectory())/start.mp3")

    // Attach them to the session
    session!.startEarcon = SKAudioFile(URL: NSURL(fileURLWithPath: "\(NSHomeDirectory())/start.mp3"), pcmFormat: audioFormat)
    session!.endEarcon = SKAudioFile(URL: NSURL(fileURLWithPath: "\(NSHomeDirectory())/stop.mp3"), pcmFormat: audioFormat)
}

@IBAction func speechButtonDidClick(sender: AnyObject) {
    if listening == false {
        transaction = session?.recognizeWithType(SKTransactionSpeechTypeDictation,
                                                 detection: .Short,
                                                 language: "eng-USA",
                                                 delegate: self)
    } else {
        transaction?.stopRecording()
    }
}

// SKTransactionDelegate
func transactionDidBeginRecording(transaction: SKTransaction!) {
    messageText.text = "listening"
    listening = true
    indicator.startAnimating()
    startPollingVolume()
}

func transactionDidFinishRecording(transaction: SKTransaction!) {
    messageText.text = "stopped"
    listening = false
    indicator.stopAnimating()
    stopPollingVolume()
}

func transaction(transaction: SKTransaction!, didReceiveRecognition recognition: SKRecognition!) {
    print("got something")
    //Take the best result
    if recognition.text != nil {
        speechTextField.text = recognition.text
    }
}

func transaction(transaction: SKTransaction!, didReceiveServiceResponse response: [NSObject : AnyObject]!) {
    print("service response")
    print(response)
}

func transaction(transaction: SKTransaction!, didFinishWithSuggestion suggestion: String!) {
}

func transaction(transaction: SKTransaction!, didFailWithError error: NSError!, suggestion: String!) {
    print("error")
    print(error)
}

var timer = NSTimer()
var interval = 0.01

func startPollingVolume() {
    timer = NSTimer.scheduledTimerWithTimeInterval(interval,
                                                   target: self,
                                                   selector: #selector(ViewController.pollVolume),
                                                   userInfo: nil,
                                                   repeats: true)
}

func pollVolume() {
    if transaction != nil {
        let volumeLevel: Float = transaction!.audioLevel
        audioLevelIndicator.progress = volumeLevel / 90
    }
}

func stopPollingVolume() {
    timer.invalidate()
    audioLevelIndicator.progress = 0
}
Hope this helps someone!