Google Cloud Speech API response: parsing on iOS

I am trying to integrate the Google Cloud Speech API into my demo application. The result I get looks like this:

    {
      results {
        alternatives {
          transcript: "hello"
        }
        stability: 0.01
      }
    }

The code used to get the response:

[[SpeechRecognitionService sharedInstance] streamAudioData:self.audioData
                                            withCompletion:^(StreamingRecognizeResponse *response, NSError *error) {
    if (error) {
        NSLog(@"ERROR: %@", error);
        _textView.text = [error localizedDescription];
        [self stopAudio:nil];
    } else if (response) {
        BOOL finished = NO;
        //NSLog(@"RESPONSE: %@", response.resultsArray);
        for (StreamingRecognitionResult *result in response.resultsArray) {
            NSLog(@"result : %@", result);
            //_textView.text = result.alternatives.transcript;
            if (result.isFinal) {
                finished = YES;
            }
        }

        if (finished) {
            [self stopAudio:nil];
        }
    }
}];

My problem is that the response I get is not valid JSON, so how can I get the value of the key `transcript`? Any help would be appreciated. Thanks.

For anyone looking for a solution to this problem:

for (StreamingRecognitionResult *result in response.resultsArray) {
    // The elements of alternativesArray are SpeechRecognitionAlternative
    // objects, not StreamingRecognitionResult, so cast them accordingly.
    for (SpeechRecognitionAlternative *alternative in result.alternativesArray) {
        _textView.text = alternative.transcript;
    }
    if (result.isFinal) {
        finished = YES;
    }
}

This is what I did to continuously get the value of `transcript`.

Here is code that solves the problem on Swift 4 / iOS 11.2.5, enjoy!:

SpeechRecognitionService.sharedInstance.streamAudioData(audioData, completion:
{ [weak self] (response, error) in
    guard let strongSelf = self else {
        return
    }
    if let error = error {
        print("*** Streaming ASR ERROR: " + error.localizedDescription)
    } else if let response = response {
        for result in response.resultsArray {
            // resultsArray is untyped, so cast each element before use
            guard let result = result as? StreamingRecognitionResult else {
                print("ERROR: unexpected element in resultsArray")
                continue
            }
            for case let alternative as SpeechRecognitionAlternative in result.alternativesArray {
                if result.isFinal {
                    print("*** FINAL ASR result: " + alternative.transcript)
                    strongSelf.stopGoogleStreamingASR(strongSelf)
                } else {
                    print("*** PARTIAL ASR result: " + alternative.transcript)
                }
            }
        }
    }
})
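
The key point in both answers is that the streaming response is a protobuf message, not JSON, so transcripts are read by walking nested object arrays rather than by parsing text. Here is a minimal, self-contained sketch of that traversal pattern; the `RecognitionResult` and `Alternative` structs are hypothetical stand-ins for the generated protobuf classes (`StreamingRecognitionResult` / `SpeechRecognitionAlternative`), used only so the logic can be shown without the SDK:

```swift
import Foundation

// Hypothetical stand-ins for the generated protobuf classes.
struct Alternative {
    let transcript: String
}

struct RecognitionResult {
    let alternatives: [Alternative]
    let isFinal: Bool
}

// Collect the transcript of every final result, mirroring the
// nested result/alternative loops used in the answers above.
func finalTranscripts(from results: [RecognitionResult]) -> [String] {
    var transcripts: [String] = []
    for result in results where result.isFinal {
        for alternative in result.alternatives {
            transcripts.append(alternative.transcript)
        }
    }
    return transcripts
}

let results = [
    RecognitionResult(alternatives: [Alternative(transcript: "hell")], isFinal: false),
    RecognitionResult(alternatives: [Alternative(transcript: "hello")], isFinal: true),
]
print(finalTranscripts(from: results))  // ["hello"]
```

In the real API, partial (non-final) results can be overwritten in the UI as they arrive, and only final results need to be kept.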