Changing views causes the audio engine to glitch

I am making an app that uses an audio engine and dictation to interpret what the user says. I have two views that I switch between using the pushViewController method of a UINavigationController: the main view controller (called "ViewController") and a graph view controller (called "GraphViewController"). When I change views and return from GraphViewController to ViewController, the audio engine starts completing buffers at an unreasonable rate (roughly every 10 ms) and refuses to record any sound.

Note: the audio engine is set up in my ViewController class, the main view of my app.

Here are some error-log snippets that better illustrate my problem:

authorized
2021-03-28 01:14:29.631330-0400 DictoCounter[24060:4935105] [plugin] AddInstanceForFactory: No factory registered for id <CFUUID 0x6000032fb1e0> F8BB1C28-BAE8-11D6-9C31-00039315CD46
2021-03-28 01:14:29.797620-0400 DictoCounter[24060:4935239] [aurioc] 323: Unable to join I/O thread to workgroup ((null)): 2
Text: Hello
Text: Hello hello
Text: Hello hello hello
Text: Hello hello hello
Text: Hello hello hello
Text: Hello hello hello this
Text: Hello hello hello this is
Text: Hello hello hello this is working
Text: Hello hello hello this is working normally
Text: Hello hello hello this is working normally

When the app launches, the main view controller (ViewController) loads and the audio engine starts recording. Everything works normally, and a buffer fills about every 30 seconds.

When I navigate to GraphViewController (via a button on ViewController), the audio engine continues to buffer normally. However, when I use the back button at the top left of the view to return to ViewController, the audio engine begins completing buffers at an absurd rate and dictation fails:

2021-03-28 01:15:10.643091-0400 DictoCounter[24060:4935105] Words successfully saved.
2021-03-28 01:15:10.644663-0400 DictoCounter[24060:4936029] [aurioc] 323: Unable to join I/O thread to workgroup ((null)): 2
2021-03-28 01:15:10.654050-0400 DictoCounter[24060:4935938] [Utility] +[AFAggregator logDictationFailedWithError:] Error Domain=kAFAssistantErrorDomain Code=209 "(null)"
completed buffer
2021-03-28 01:15:10.674991-0400 DictoCounter[24060:4935105] Words successfully saved.
2021-03-28 01:15:10.676519-0400 DictoCounter[24060:4936031] [aurioc] 323: Unable to join I/O thread to workgroup ((null)): 2
2021-03-28 01:15:10.683803-0400 DictoCounter[24060:4935215] [Utility] +[AFAggregator logDictationFailedWithError:] Error Domain=kAFAssistantErrorDomain Code=209 "(null)"
completed buffer
2021-03-28 01:15:10.706812-0400 DictoCounter[24060:4935105] Words successfully saved.
2021-03-28 01:15:10.708591-0400 DictoCounter[24060:4936032] [aurioc] 323: Unable to join I/O thread to workgroup ((null)): 2
2021-03-28 01:15:10.718311-0400 DictoCounter[24060:4935936] [Utility] +[AFAggregator logDictationFailedWithError:] Error Domain=kAFAssistantErrorDomain Code=209 "(null)"
completed buffer
2021-03-28 01:15:10.741680-0400 DictoCounter[24060:4935105] Words successfully saved.
2021-03-28 01:15:10.743231-0400 DictoCounter[24060:4936033] [aurioc] 323: Unable to join I/O thread to workgroup ((null)): 2
2021-03-28 01:15:10.754336-0400 DictoCounter[24060:4935939] [Utility] +[AFAggregator logDictationFailedWithError:] Error Domain=kAFAssistantErrorDomain Code=209 "(null)"
completed buffer
2021-03-28 01:15:10.774254-0400 DictoCounter[24060:4935105] Words successfully saved.
2021-03-28 01:15:10.775783-0400 DictoCounter[24060:4936034] [aurioc] 323: Unable to join I/O thread to workgroup ((null)): 2
2021-03-28 01:15:10.787332-0400 DictoCounter[24060:4935938] [Utility] +[AFAggregator logDictationFailedWithError:] Error Domain=kAFAssistantErrorDomain Code=209 "(null)"
completed buffer
2021-03-28 01:15:10.814357-0400 DictoCounter[24060:4935105] Words successfully saved.
2021-03-28 01:15:10.815785-0400 DictoCounter[24060:4936035] [aurioc] 323: Unable to join I/O thread to workgroup ((null)): 2

This is obviously a problem: if the audio engine only records in such short bursts, the buffer never receives any usable audio input. I have searched for this error in other Stack Overflow posts as well as on the Apple developer forums, but nobody seems to hit the same view-changing issue I do. I will paste the relevant code below.

//AUDIO ENGINE - inside ViewController
    func recognizeAudioStream() {
        let speechRecognizer = SFSpeechRecognizer()
        
        //performs speech recognition on live audio; as audio is captured, call append
        //on the request object; call endAudio() to end speech recognition
        var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
        
        //determines & edits the state of the speech recognition task (end, start, cancel, etc.)
        var recognitionTask: SFSpeechRecognitionTask?
        
        let audioEngine = AVAudioEngine()
        
        
        func startRecording() throws{
            
            //cancel previous audio task
            recognitionTask?.cancel()
            recognitionTask = nil
            
            //get info from microphone
            let audioSession = AVAudioSession.sharedInstance()
            try audioSession.setCategory(.record, mode: .measurement, options: .duckOthers)
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
            
            let inputNode = audioEngine.inputNode
            
            //audio buffer; takes a continuous input of audio and recognizes speech
            recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
            //allows device to print results of your speech before you're done talking
            recognitionRequest?.shouldReportPartialResults = true
            
            
            recognitionTask = speechRecognizer!.recognitionTask(with: recognitionRequest!) { result, error in
                
                var isFinal = false
                
                if let result = result { //if we can bind result to its non-optional version, then
                    isFinal = result.isFinal
                    print("Text: \(result.bestTranscription.formattedString)")
                    
                }
                
                if error != nil || isFinal { //if an error occurs or we're done speaking
                    
                    audioEngine.stop()
                    inputNode.removeTap(onBus: 0)
                    
                    recognitionTask = nil
                    recognitionRequest = nil

                    let bufferText = result?.bestTranscription.formattedString.components(separatedBy: (" "))
                    print("completed buffer")
                    
                    self.addToDictionary(wordNames: bufferText)
                    
                    sortedWordsDict = wordsDict.sorted {
                        return $0.value > $1.value
                    }
                    
                    allWords = [] //reset the array to reload words
                    
                    for (wordKey, countValue) in sortedWordsDict {
                        allWords.append(Word(word: wordKey, count: countValue))
                    }
                    
                    self.saveWords() //saves allWords list to file
                    
                    tempWords = Array(allWords.prefix(numWords)) //only show first 50 words
                    
                    wordsColl.reloadData()
                    do{
                        try startRecording()
                    }
                    catch{
                        print(error)
                    }
                    
                }
            
            }
            
            //configure microphone; let the recording format match with that of the bus we are using
            let recordingFormat = inputNode.outputFormat(forBus: 0)
            
            //contents of the buffer will be dumped into recognitionRequest and into result, where
            //they will then be transcribed and printed out
            //1024 frames = dumping "limit": once the buffer fills to 1024 frames, it is appended to recognitionRequest
            inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
                recognitionRequest?.append(buffer)
            }
            
            audioEngine.prepare()
            try audioEngine.start()
        }
        
        do{
            try startRecording()
        }
        catch{
            print(error)
        }
        
        
    }

The do-catch statement at the bottom ensures the microphone keeps recording the user continuously. This works for me (except in this case, where changing views seems to glitch the audio engine). I haven't found a better way to implement this keep-recording behavior, so if there is a more efficient way, please let me know! The one idea I have sketched, but not tested, follows.
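The sketch below assumes startRecording() were hoisted out of recognizeAudioStream() into an instance method, and shouldKeepRecording is a flag I made up; none of this is in my project yet:

    //HYPOTHETICAL SKETCH - not in my project
    //the completion handler would call restartIfStillVisible() instead of
    //calling startRecording() directly
    private var shouldKeepRecording = true
    
    private func restartIfStillVisible() {
        guard shouldKeepRecording else { return } //view went away; stay stopped
        do {
            try startRecording() //assumes startRecording() is an instance method
        }
        catch {
            print(error)
        }
    }
    
    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        shouldKeepRecording = false //pause the restart loop while another view is shown
    }
    
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        shouldKeepRecording = true
    }

Continuing, these are the functions that change the views: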

//INSIDE ViewController
 @objc func goToHome(){
        let homeView = ViewController()
        navigationController?.pushViewController(homeView, animated: true)
    }
    
    @objc func goToGraph(){
        print("pressed")
        let graphView = GraphViewController()
        navigationController?.pushViewController(graphView, animated: true)
    }
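A side note I noticed while writing this up: goToHome() pushes a brand-new ViewController instance rather than popping back to the existing one, so the original ViewController (and its audio engine) presumably stays alive underneath it on the navigation stack. A variant I have considered but not tested:

    //HYPOTHETICAL VARIANT - untested
    @objc func goToHome() {
        //pop back to the existing ViewController instead of pushing a fresh instance
        navigationController?.popViewController(animated: true)
    }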

Those two functions handle all of the view changes. Finally, here is the relevant code from my SceneDelegate.swift:

//INSIDE SceneDelegate.swift
func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
        // Use this method to optionally configure and attach the UIWindow `window` to the provided UIWindowScene `scene`.
        // If using a storyboard, the `window` property will automatically be initialized and attached to the scene.
        // This delegate does not imply the connecting scene or session are new (see `application:configurationForConnectingSceneSession` instead).
        if let windowScene = scene as? UIWindowScene {
            let window = UIWindow(windowScene: windowScene)
            
            let viewController = ViewController()
            window.rootViewController = UINavigationController(rootViewController: viewController)
            self.window = window
            window.makeKeyAndVisible()
        }
    }

This code in the main ViewController class calls recognizeAudioStream():

    override func viewDidAppear(_ animated: Bool) {
        wordsColl.reloadData()
        let speechRecognizer = SFSpeechRecognizer()
        requestDictAccess()
        
        if speechRecognizer?.isAvailable == true { //if the user has granted permission
            speechRecognizer?.supportsOnDeviceRecognition = true //for offline data
            
            recognizeAudioStream()
        }
        
    }
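Since viewDidAppear runs again every time the view comes back on screen, I suspect navigating back may call recognizeAudioStream() a second time on top of a still-live engine. Here is a guard I have sketched but not tried; hasStartedRecognition is a hypothetical property that is not in my code:

    //HYPOTHETICAL GUARD - not in my project
    //prevents reappearing from starting a second recognition pipeline
    private var hasStartedRecognition = false
    
    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        wordsColl.reloadData()
        requestDictAccess()
        
        guard !hasStartedRecognition,
              let recognizer = SFSpeechRecognizer(),
              recognizer.isAvailable else { return }
        
        recognizer.supportsOnDeviceRecognition = true //for offline data
        hasStartedRecognition = true
        recognizeAudioStream()
    }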

Thanks for reading this post! Let me know if you need more information.
