AppleWakewordRecognizer
@objc
public class AppleWakewordRecognizer : NSObject
extension AppleWakewordRecognizer: SpeechProcessor
This pipeline component uses the Apple SFSpeech API to stream audio samples for wakeword recognition.
When speech pipeline coordination arrives via startStreaming, the recognizer begins streaming buffered frames to the Apple ASR API for recognition. Upon recognizing the wakeword or wake phrase, it triggers the pipeline activation event, completes the API request, and awaits another coordination event. When speech pipeline coordination arrives via stopStreaming, the recognizer likewise completes the API request and awaits another coordination event.
-
Configuration for the recognizer.
Declaration
Swift
@objc public var configuration: SpeechConfiguration
-
Global state for the speech pipeline.
Declaration
Swift
@objc public var context: SpeechContext
-
Initializes an AppleWakewordRecognizer instance.
A recognizer is initialized by, and receives startStreaming and stopStreaming events from, an instance of SpeechPipeline. The AppleWakewordRecognizer receives audio data frames to process from a tap into the system AudioEngine.
Declaration
Swift
@objc public init(_ configuration: SpeechConfiguration, context: SpeechContext)
Parameters
configuration
Configuration for the recognizer.
context
Global state for the speech pipeline.
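For illustration, the initializer can be exercised directly, outside the pipeline. This is only a sketch: the SpeechContext initializer signature and the wakewords property are assumptions about the surrounding Spokestack API, not confirmed by this page.

```swift
import Spokestack

// Hedged sketch: construct the recognizer by hand instead of letting
// SpeechPipeline do it. Everything except the documented
// init(_:context:) signature is an assumption.
let configuration = SpeechConfiguration()
configuration.wakewords = "hey app"         // assumed property name
let context = SpeechContext(configuration)  // assumed initializer
let recognizer = AppleWakewordRecognizer(configuration, context: context)
```

In normal use, SpeechPipeline performs this construction itself and delivers the startStreaming and stopStreaming events to the instance.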
-
Triggered by the speech pipeline, instructing the recognizer to begin streaming and processing audio.
Declaration
Swift
@objc public func startStreaming()
-
Triggered by the speech pipeline, instructing the recognizer to stop streaming audio and complete processing.
Declaration
Swift
@objc public func stopStreaming()
-
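The two coordination methods above can be sketched as a minimal lifecycle, assuming a recognizer instance built as described in the initializer section. The pipeline normally drives these calls; invoking them directly is mainly useful for exercising the recognizer in isolation.

```swift
// Hedged sketch of the coordination lifecycle.
recognizer.startStreaming()  // begin tapping audio and streaming frames for recognition
// ... wakeword detection runs until the pipeline coordinates a stop ...
recognizer.stopStreaming()   // complete the ASR request and await further coordination
```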
Receives a frame of audio samples for processing. Interface between the SpeechProcessor and AudioController components.
Note
Processes audio in an async thread.
Remark
The Apple Wakeword Recognizer hooks up directly to its own audio tap for processing audio frames. When the AudioController calls process, the recognizer checks whether the pipeline has detected speech and, if so, kicks off its own VAD and wakeword recognition independently of any other components in the speech pipeline.
Declaration
Swift
@objc public func process(_ frame: Data)
Parameters
frame
Frame of audio samples.
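For illustration, process can be fed a raw frame directly. The frame size here (320 bytes, i.e. 10 ms of 16-bit samples at 16 kHz) is an assumption about typical pipeline audio settings, not something documented on this page.

```swift
// Hedged sketch: a silent 10 ms frame of 16-bit/16 kHz audio (assumed format).
let frame = Data(count: 320)
recognizer.process(frame)  // normally invoked by AudioController, not by app code
```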