Proper way to use SFSpeechRecognizer?
I am trying to use SFSpeechRecognizer, but I have no way to verify that I am implementing it correctly, and since it is a relatively new class I cannot find sample code (and I don't know Swift). Am I making any unforgivable mistakes, or missing something?
[SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status){
    if (status == SFSpeechRecognizerAuthorizationStatusAuthorized) {
        SFSpeechRecognizer* recognizer = [[SFSpeechRecognizer alloc] init];
        recognizer.delegate = self;
        SFSpeechAudioBufferRecognitionRequest* request = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
        request.contextualStrings = @[@"data", @"bank", @"databank"];
        SFSpeechRecognitionTask* task = [recognizer recognitionTaskWithRequest:request resultHandler:^(SFSpeechRecognitionResult* result, NSError* error){
            SFTranscription* transcript = result.bestTranscription;
            NSLog(@"%@", transcript);
        }];
    }
}];
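One thing worth noting about the snippet above (this is my reading of it, not something I have tested): an SFSpeechAudioBufferRecognitionRequest does not capture audio on its own, and nothing in the code ever appends audio buffers to the request, so the result handler may never fire. A minimal sketch of feeding microphone audio to it via AVAudioEngine, assuming `self.audioEngine` and `self.request` are strongly-held properties you have added:

```objectivec
#import <Speech/Speech.h>
#import <AVFoundation/AVFoundation.h>

// Assumed properties (not in the original code):
//   @property (strong) AVAudioEngine *audioEngine;
//   @property (strong) SFSpeechAudioBufferRecognitionRequest *request;

AVAudioInputNode *inputNode = self.audioEngine.inputNode;
AVAudioFormat *format = [inputNode outputFormatForBus:0];

// Tap the microphone and forward each captured buffer to the request.
[inputNode installTapOnBus:0
                bufferSize:1024
                    format:format
                     block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
    [self.request appendAudioPCMBuffer:buffer];
}];

NSError *error = nil;
[self.audioEngine prepare];
if (![self.audioEngine startAndReturnError:&error]) {
    NSLog(@"audio engine failed to start: %@", error);
}

// Later, when the user stops recording:
// [self.request endAudio];
// [self.audioEngine stop];
// [inputNode removeTapOnBus:0];
```

Calling endAudio tells the recognizer no more buffers are coming, which is what makes the final (non-partial) result arrive.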
I was trying this too, and the code below works for me. Note that SFSpeechRecognizer and SFSpeechAudioBufferRecognitionRequest are not the same thing, so I think (though I haven't tested it) that you may need to request different permissions. Did you request authorization beforehand for both the microphone and speech recognition? OK, here is the code:
// Available on iOS 10+; recognition is limited to about one minute and needs an internet connection.
// The audio can come from a recorded file or from the microphone.
NSLocale *locale = [[NSLocale alloc] initWithLocaleIdentifier:@"es-MX"];
speechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:locale];
NSString *soundFilePath = [myDir stringByAppendingPathComponent:@"sound.m4a"];
NSURL *url = [NSURL fileURLWithPath:soundFilePath];
if (!speechRecognizer.isAvailable) {
    NSLog(@"speechRecognizer is not available; it may have no internet connection");
}
SFSpeechURLRecognitionRequest *urlRequest = [[SFSpeechURLRecognitionRequest alloc] initWithURL:url];
urlRequest.shouldReportPartialResults = YES; // YES to animate the transcription as it arrives
[speechRecognizer recognitionTaskWithRequest:urlRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error)
{
    if (!error) {
        NSString *transcriptText = result.bestTranscription.formattedString;
        NSLog(@"%@", transcriptText);
    }
}];
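On the permissions point: besides calling requestAuthorization:, an iOS 10 app must also declare usage-description strings in its Info.plist, or the system terminates it as soon as it asks for access. A minimal fragment (the description text here is just an example):

```xml
<key>NSSpeechRecognitionUsageDescription</key>
<string>Used to transcribe your speech to text.</string>
<key>NSMicrophoneUsageDescription</key>
<string>Used to capture audio for speech recognition.</string>
```

NSSpeechRecognitionUsageDescription is needed for any SFSpeechRecognizer request; NSMicrophoneUsageDescription is only needed when the audio comes from the microphone rather than a file.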