Swift: iPhone's volume is low when trying to change speech to the iPhone's voice

I am trying out a speech recognition sample. I start recognising my speech through the microphone, and then have the iPhone speak the recognised text back. This works, but the voice is far too quiet. Can you guide me?

In contrast, if I run the same AVSpeechUtterance code from a simple button action, the volume is normal.

But after I call the startRecognise() method, the volume is too low.

My code:

func startRecognise()
{
    let audioSession = AVAudioSession.sharedInstance()
    do
    {
        try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try audioSession.setMode(AVAudioSessionModeDefault)
        try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
        try AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSessionPortOverride.speaker)
    }
    catch
    {
        print("audioSession properties weren't set because of an error.")
    }
    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    guard let inputNode = audioEngine.inputNode else {
        fatalError("Audio engine has no input node")
    }
    guard let recognitionRequest = recognitionRequest else {
        fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
    }
    recognitionRequest.shouldReportPartialResults = true
    recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        if result != nil
        {
            let lastword = result?.bestTranscription.formattedString.components(separatedBy: " ").last
            if lastword == "repeat" || lastword == "Repeat"{
                self.myUtterance2 = AVSpeechUtterance(string: "You have spoken repeat")
                self.myUtterance2.rate = 0.4
                self.myUtterance2.volume = 1.0
                self.myUtterance2.pitchMultiplier = 1.0
                self.synth1.speak(self.myUtterance2)
                // HERE VOICE IS TOO LOW. 
            }
        }
    })
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
        self.recognitionRequest?.append(buffer)
    }
    audioEngine.prepare()
    do 
    {
        try audioEngine.start()
    } 
    catch 
    {
        print("audioEngine couldn't start because of an error.")
    }
}

My button action:

func buttonAction()
{
   self.myUtterance2 = AVSpeechUtterance(string: "You are in button action")
   self.myUtterance2.rate = 0.4
   self.myUtterance2.volume = 1.0
   self.myUtterance2.pitchMultiplier = 1.0
   self.synth1.speak(self.myUtterance2)
   // Before going for startRecognise() method, 
   //I tried with buttonAction(), 
   //this time volume is normal. 
   //After startRecognise() method call, volume is too low in both methods.
}

Finally, I found the solution.

func startRecognise()
{
    let audioSession = AVAudioSession.sharedInstance()
    do
    {
        try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try audioSession.setMode(AVAudioSessionModeDefault)
        //try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
        try AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSessionPortOverride.speaker)
    }
    catch
    {
        print("audioSession properties weren't set because of an error.")
    }

    ... 
}

Once I commented out the line try audioSession.setMode(AVAudioSessionModeMeasurement), the volume became normal.

Digging a bit deeper into the technical details, it turns out that overrideOutputAudioPort() only temporarily changes the current audio route.

func overrideOutputAudioPort(_ portOverride: AVAudioSession.PortOverride) throws

If your app uses the playAndRecord category, calling this method with the AVAudioSession.PortOverride.speaker option causes audio to be routed to the built-in speaker and microphone regardless of other settings.

This change remains in effect only until the current route changes or until you call this method again with the AVAudioSession.PortOverride.none option.
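
As a small illustration of that behaviour (a sketch only, not part of the original answer, using the audioSession instance from the code above), you could force the speaker while recording and later hand routing back to the system with .none:

do {
    // Force output to the built-in speaker while recording/speaking
    try audioSession.overrideOutputAudioPort(.speaker)

    // ... later, hand routing decisions back to the system (receiver, headphones, etc.)
    try audioSession.overrideOutputAudioPort(.none)
} catch {
    print("overrideOutputAudioPort failed: \(error)")
}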

The fixed code keeps the mode at its default:

try audioSession.setMode(AVAudioSessionModeDefault)

If you want to permanently enable this behaviour, you should instead set the defaultToSpeaker option on the category. Setting this option always routes audio to the speaker rather than the receiver when no other accessory, such as headphones, is in use.

In Swift 5.x, the code above looks like this:

let audioSession = AVAudioSession.sharedInstance()
do {
  // setCategory(_:) on its own is unavailable from Swift, so set the category and mode in one call
  try audioSession.setCategory(.playAndRecord, mode: .default, options: [])
  try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
  try audioSession.overrideOutputAudioPort(.speaker)
} catch {
  debugPrint("Unable to start audio engine")
  return
}
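
If you want the permanent behaviour described above rather than the temporary port override, a minimal sketch (not from the original answer) using the defaultToSpeaker category option could look like this:

let audioSession = AVAudioSession.sharedInstance()
do {
  // .defaultToSpeaker keeps output on the built-in speaker whenever no headphones
  // or other accessory is connected, so no overrideOutputAudioPort(.speaker) is needed
  try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
  try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
} catch {
  debugPrint("Unable to configure the audio session")
}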

Setting the mode to measurement minimises the amount of system-supplied signal processing applied to input and output signals.

try audioSession.setMode(.measurement)

Commenting out this mode and falling back to the default mode takes care of permanently enabling the audio routing to the built-in speaker and microphone.
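
If you do want to keep the measurement mode's reduced signal processing while recording, one alternative (again only a sketch, not from the original answer; speakRecognisedText is a hypothetical helper name) is to switch the mode back to default right before speaking:

import AVFoundation

// Hypothetical helper: restore the default mode and speaker routing before
// handing the recognised text to the synthesizer (synth1 in the question).
func speakRecognisedText(_ text: String, using synth: AVSpeechSynthesizer) {
    let audioSession = AVAudioSession.sharedInstance()
    try? audioSession.setMode(.default)                  // undo .measurement's reduced processing
    try? audioSession.overrideOutputAudioPort(.speaker)  // keep output on the built-in speaker

    let utterance = AVSpeechUtterance(string: text)
    utterance.rate = 0.4
    utterance.volume = 1.0
    utterance.pitchMultiplier = 1.0
    synth.speak(utterance)
}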

Thanks to @McDonal_11 for the answer. Hope this helps in understanding the technical details.