AppleWakewordRecognizer

@objc
public class AppleWakewordRecognizer : NSObject
extension AppleWakewordRecognizer: SpeechProcessor

This pipeline component uses the Apple SFSpeech API to stream audio samples for wakeword recognition.

Once the speech pipeline sends a startStreaming coordination event, the recognizer begins streaming buffered frames to the Apple ASR API for recognition. Upon recognizing the wakeword or wakephrase, it triggers the pipeline activation event, completes the API request, and awaits another coordination event. When a stopStreaming coordination event arrives, the recognizer likewise completes the API request and awaits another coordination event.
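
A minimal sketch of this lifecycle, driving the recognizer by hand rather than through a SpeechPipeline (the SpeechContext initializer shown here is an assumption; verify it against your version of the library):

    Swift

    import Spokestack

    let configuration = SpeechConfiguration()
    let context = SpeechContext(configuration) // assumed configuration-based initializer
    let recognizer = AppleWakewordRecognizer(configuration, context: context)

    recognizer.startStreaming() // begin streaming buffered frames to the Apple ASR API
    // ...upon wakeword or wakephrase recognition, the pipeline activation event
    // is triggered and the in-flight API request is completed...
    recognizer.stopStreaming()  // complete the API request and await another event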

NSObject methods

  • Initializes an AppleWakewordRecognizer instance.

    A recognizer is initialized by, and receives startStreaming and stopStreaming events from, an instance of SpeechPipeline.

    The AppleWakewordRecognizer receives audio data frames to process from a tap into the system AudioEngine.

    Declaration

    Swift

    @objc
    public init(_ configuration: SpeechConfiguration, context: SpeechContext)

    Parameters

    configuration

    Configuration for the recognizer.

    context

    Global state for the speech pipeline.
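
    For example, the configuration can be tuned before it is handed to the initializer. The wakewords property below is an assumption about SpeechConfiguration's tunable settings, and the SpeechContext initializer is likewise assumed; verify both against your version of the library:

    Swift

    import Spokestack

    let configuration = SpeechConfiguration()
    // Assumed tunable: a comma-separated list of wakeword phrases.
    configuration.wakewords = "hey spokestack"
    let context = SpeechContext(configuration) // assumed configuration-based initializer
    let recognizer = AppleWakewordRecognizer(configuration, context: context)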

SpeechProcessor implementation

  • Triggered by the speech pipeline, instructing the recognizer to begin streaming and processing audio. (A usage sketch tying the three SpeechProcessor methods together follows this list.)

    Declaration

    Swift

    @objc
    public func startStreaming()
  • Triggered by the speech pipeline, instructing the recognizer to stop streaming audio and complete processing.

    Declaration

    Swift

    @objc
    public func stopStreaming()
  • Receives a frame of audio samples for processing; this is the interface between the SpeechProcessor and AudioController components.

    Note

    Processes audio in an async thread.

    Remark

    The AppleWakewordRecognizer hooks directly into its own audio tap to process audio frames. When the AudioController calls process, the recognizer checks whether the pipeline has detected speech and, if so, kicks off its own VAD and wakeword recognition independently of the other components in the speech pipeline.

    Declaration

    Swift

    @objc
    public func process(_ frame: Data)

    Parameters

    frame

    Frame of audio samples.
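
To tie the three SpeechProcessor methods together, here is a sketch of one streaming cycle. In production the AudioController tap delivers frames; the synthetic silent frame below assumes 16 kHz, 20 ms, 16-bit samples, and the construction details repeat the assumptions noted above:

    Swift

    import Foundation
    import Spokestack

    let configuration = SpeechConfiguration()
    let context = SpeechContext(configuration) // assumed configuration-based initializer
    let recognizer = AppleWakewordRecognizer(configuration, context: context)

    recognizer.startStreaming()

    // 16,000 samples/s * 0.02 s = 320 samples * 2 bytes = 640 bytes of silence.
    let silentFrame = Data(count: 640)
    recognizer.process(silentFrame) // processed on an async thread

    recognizer.stopStreaming() // complete the API request and await the next event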