SpokestackSpeechRecognizer
@available(iOS 13.0, *)
@objc
public class SpokestackSpeechRecognizer : NSObject
extension SpokestackSpeechRecognizer: SpeechProcessor
This pipeline component streams audio frames to Spokestack’s cloud-based ASR for speech recognition.
When the pipeline is activated, the recognizer sends all audio frames to the Spokestack ASR via a websocket connection. Once the pipeline is deactivated or the activation maximum is reached, a final empty audio frame is sent, triggering the final recognition transcript. That transcript is passed to the SpeechEventListener
delegates via the didRecognize
event, along with the updated global speech context (including the final transcript and confidence).
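The lifecycle above can be sketched as follows. This is a minimal illustration, not a complete pipeline setup: it assumes SpeechConfiguration and SpeechContext can be constructed as shown, and it elides the SpeechEventListener delegate wiring that would actually receive the didRecognize event.

```swift
import Spokestack

// Assumed construction of the configuration and global context;
// a real app would set API credentials on the configuration first.
let configuration = SpeechConfiguration()
let context = SpeechContext(configuration)
let recognizer = SpokestackSpeechRecognizer(configuration, context: context)

// Pipeline activation: open the websocket and begin streaming.
recognizer.startStreaming()

// While active, each audio frame is forwarded to the cloud ASR:
// recognizer.process(frame)

// Deactivation (or activation max): a final empty frame is sent,
// which triggers the final transcript via didRecognize.
recognizer.stopStreaming()
```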
-
Configuration for the recognizer.
Declaration
Swift
public var configuration: SpeechConfiguration
-
Global state for the speech pipeline.
Declaration
Swift
public var context: SpeechContext
-
Initializes an instance of SpokestackSpeechRecognizer.
Declaration
Swift
@objc public init(_ configuration: SpeechConfiguration, context: SpeechContext)
Parameters
configuration
Configuration for the recognizer.
context
Global state for the speech pipeline.
-
Triggered by the speech pipeline, instructing the recognizer to begin streaming and processing audio.
Declaration
Swift
public func startStreaming()
-
Triggered by the speech pipeline, instructing the recognizer to stop streaming audio and complete processing.
Declaration
Swift
public func stopStreaming()
-
Receives a frame of audio samples for processing. Interface between the
SpeechProcessor
and
AudioController
components.
Declaration
Swift
public func process(_ frame: Data)
Parameters
frame
Frame of audio samples.
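As a rough sketch of how a frame might be packaged for process(_:), the following wraps raw 16-bit PCM samples in a Data value. The sample buffer and the 20 ms / 16 kHz framing are illustrative assumptions; in practice the AudioController component supplies these frames.

```swift
import Foundation

// Placeholder PCM samples: 320 Int16 samples ≈ 20 ms of audio at 16 kHz.
let samples = [Int16](repeating: 0, count: 320)

// Pack the samples into the Data frame expected by process(_:).
let frame = samples.withUnsafeBufferPointer { Data(buffer: $0) }

// `recognizer` is assumed to be an active SpokestackSpeechRecognizer
// (see the initializer above):
// recognizer.process(frame)
```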