@objc public class AppleWakewordRecognizer: NSObject
extension AppleWakewordRecognizer: SpeechProcessor
This pipeline component uses the Apple SFSpeech API to stream audio samples for wakeword recognition.

Once speech pipeline coordination via startStreaming is received, the recognizer begins streaming buffered frames to the Apple ASR API for recognition. Upon wakeword or wakephrase recognition, the pipeline activation event is triggered, and the recognizer completes the API request and awaits another coordination event. Once speech pipeline coordination via stopStreaming is received, the recognizer completes the API request and awaits another coordination event.
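The coordination flow above can be sketched as follows. This is an illustration only: in a real app the speech pipeline, not client code, issues these coordination events, and the initializer shown is assumed to take no arguments.

```swift
import Foundation

// Illustrative sketch: the speech pipeline normally drives these calls.
let recognizer = AppleWakewordRecognizer()

// Begin streaming buffered frames to the Apple ASR API for recognition.
recognizer.startStreaming()

// ... upon wakeword or wakephrase recognition, the pipeline activation
// event is triggered and the recognizer completes the API request ...

// Complete the API request and await another coordination event.
recognizer.stopStreaming()
```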
Initializes an AppleWakewordRecognizer instance.

A recognizer is initialized by, and receives stopStreaming events from, an instance of the speech pipeline. The AppleWakewordRecognizer receives audio data frames to process from a tap into the system audio engine.
Configuration for the recognizer.
Global state for the speech pipeline.
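A minimal setup sketch for the two documented properties. The property names (`configuration`, `context`) and type names (`SpeechConfiguration`, `SpeechContext`) are assumptions inferred from the descriptions above, not confirmed by this section:

```swift
// Assumed property and type names for the documented members.
let recognizer = AppleWakewordRecognizer()
recognizer.configuration = SpeechConfiguration() // configuration for the recognizer
recognizer.context = SpeechContext()             // global state for the speech pipeline
```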
Triggered by the speech pipeline, instructing the recognizer to begin streaming and processing audio.
@objc public func startStreaming()
Triggered by the speech pipeline, instructing the recognizer to stop streaming audio and complete processing.
@objc public func stopStreaming()
Receives a frame of audio samples for processing; the interface between this component and the AudioController.

Note: Processes audio in an async thread.
Remark: The AppleWakewordRecognizer hooks up directly to its own audio tap for processing audio frames. When the pipeline calls process, it checks to see whether the pipeline has detected speech, and if so kicks off its own VAD and wakeword recognizer independently of any other components in the speech pipeline.
@objc public func process(_ frame: Data)
- Parameter frame: Frame of audio samples.
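A sketch of handing a frame to the recognizer. The frame size and sample format used here (16-bit mono PCM at 16 kHz, 20 ms frames) are assumptions for illustration, not requirements stated by this documentation; in practice the audio tap delivers the frames.

```swift
import Foundation

let recognizer = AppleWakewordRecognizer()
recognizer.startStreaming()

// 20 ms of 16-bit mono PCM at 16 kHz: 320 samples = 640 bytes (assumed format).
let frame = Data(count: 320 * MemoryLayout<Int16>.size)

// Normally delivered by the audio tap; processed on an async thread.
recognizer.process(frame)
```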