speechRecognizer not returning answer
I am following a tutorial on the iOS 10 Speech Recognition API (https://code.tutsplus.com/tutorials/using-the-speech-recognition-api-in-ios-10--cms-28032?ec_unit=translation-info-language).
My version does not work: there is no text response to speech input.
I followed the tutorial, but I had to make a few changes (apparently newer versions of Swift do not accept some lines of code exactly as they appear in the tutorial).
Can you give me any ideas about how and why it isn't working?
This is the method I run:
func startRecording() {
    // Set up the audio engine and speech recognizer
    let node = audioEngine.inputNode
    let recordingFormat = node.outputFormat(forBus: 0)
    node.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
        self.request.append(buffer)
    }

    // Prepare and start recording
    audioEngine.prepare()
    do {
        try audioEngine.start()
        self.status = .recognizing
    } catch {
        return print(error)
    }

    // Analyze the speech
    recognitionTask = speechRecognizer?.recognitionTask(with: request, resultHandler: { result, error in
        if let result = result {
            self.tview.text = result.bestTranscription.formattedString
            NSLog(result.bestTranscription.formattedString)
        } else if let error = error {
            print(error)
            NSLog(error.localizedDescription)
        }
    })
}
While debugging, neither speechRecognizer nor recognitionTask has a nil value.
This is how I define the variables on the ViewController:
let audioEngine = AVAudioEngine()
let speechRecognizer: SFSpeechRecognizer? = SFSpeechRecognizer()
let request = SFSpeechAudioBufferRecognitionRequest()
var recognitionTask: SFSpeechRecognitionTask?
Working setup: tested on a 2017 iPad with iOS 11.4, Xcode 9.4.1, Swift 4.1.
Thanks!
The problem is that the AVAudioSession is not set to the Record category. Try this: in the view controller, add
let audioSession = AVAudioSession.sharedInstance()
Your final method will then be:
func startRecording() {
    //Change / Edit Start
    do {
        try audioSession.setCategory(AVAudioSessionCategoryRecord)
        try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
    } catch {
        print("audioSession properties weren't set because of an error.")
    }
    //Change / Edit Finished

    // Set up the audio engine and speech recognizer
    let node = audioEngine.inputNode
    let recordingFormat = node.outputFormat(forBus: 0)
    node.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
        self.request.append(buffer)
    }

    // Prepare and start recording
    audioEngine.prepare()
    do {
        try audioEngine.start()
        self.status = .recognizing
    } catch {
        return print(error)
    }

    // Analyze the speech
    recognitionTask = speechRecognizer?.recognitionTask(with: request, resultHandler: { result, error in
        if let result = result {
            self.tview.text = result.bestTranscription.formattedString
            NSLog(result.bestTranscription.formattedString)
        } else if let error = error {
            print(error)
            NSLog(error.localizedDescription)
        }
    })
}
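A related note: once you start the session this way, you also need a way to tear it down, or the tap and the recognition request stay open. This is a minimal `stopRecording()` sketch, not from the question, assuming the same `audioEngine`, `request`, and `recognitionTask` properties defined above:

```swift
import AVFoundation
import Speech

func stopRecording() {
    // Stop capturing audio and remove the tap installed in startRecording()
    audioEngine.stop()
    audioEngine.inputNode.removeTap(onBus: 0)

    // Tell the recognizer that no more audio is coming,
    // so it can deliver a final transcription
    request.endAudio()

    // Or cancel outright if you no longer need a result
    recognitionTask?.cancel()
    recognitionTask = nil
}
```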
Add the following to your existing code:
recognitionTask = speechRecognizer?.recognitionTask(with: request, resultHandler: { result, error in
    if let result = result, result.isFinal {
        self.tview.text = result.bestTranscription.formattedString
        NSLog(result.bestTranscription.formattedString)
    } else if let error = error {
        print(error)
        NSLog(error.localizedDescription)
    }
})
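Also worth checking: the recognizer returns nothing if the app was never granted speech-recognition permission. A hedged sketch of requesting authorization before starting, assuming `NSSpeechRecognitionUsageDescription` and `NSMicrophoneUsageDescription` are present in Info.plist and `startRecording()` is the method from the question:

```swift
import Speech

// Request speech-recognition authorization before starting a task.
// The callback can arrive on a background queue, so hop to main
// before touching the UI or starting the session.
SFSpeechRecognizer.requestAuthorization { status in
    DispatchQueue.main.async {
        switch status {
        case .authorized:
            self.startRecording()   // safe to start now
        case .denied, .restricted, .notDetermined:
            print("Speech recognition not authorized: \(status.rawValue)")
        }
    }
}
```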