Finite State Transducer
Finite-state transducers (FSTs) are finite automata that map input sequences to output sequences, making them central to sequence-transformation tasks such as speech recognition and natural language processing. Current research emphasizes efficient decoding algorithms for large-scale applications and hybrid models that combine FSTs with neural networks, aiming to improve accuracy and the handling of ambiguous inputs. This work advances both the theoretical understanding of computation and practical systems, yielding faster, more accurate, and more robust pipelines in areas such as speech recognition, machine translation, and complex event processing.
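To make the input-output mapping concrete, here is a minimal sketch of a deterministic FST, assuming transitions are stored as a dictionary keyed by (state, input symbol). The class and function names (FST, transduce) and the toy rewrite example are illustrative assumptions, not drawn from the papers listed below.

```python
# Minimal deterministic finite-state transducer sketch (illustrative only).
from typing import Dict, Tuple, List, Optional, Set


class FST:
    def __init__(
        self,
        start: str,
        finals: Set[str],
        transitions: Dict[Tuple[str, str], Tuple[str, str]],
    ):
        # transitions: (current state, input symbol) -> (next state, output symbol)
        self.start = start
        self.finals = finals
        self.transitions = transitions

    def transduce(self, inputs: List[str]) -> Optional[List[str]]:
        """Map an input sequence to an output sequence, or return None if rejected."""
        state, outputs = self.start, []
        for sym in inputs:
            key = (state, sym)
            if key not in self.transitions:
                return None          # no matching transition: input not accepted
            state, out = self.transitions[key]
            if out:                  # an empty string models an epsilon output
                outputs.append(out)
        return outputs if state in self.finals else None


# Toy example: rewrite the token "nite" to "night" and pass "good" through unchanged.
fst = FST(
    start="q0",
    finals={"q0"},
    transitions={
        ("q0", "good"): ("q0", "good"),
        ("q0", "nite"): ("q0", "night"),
    },
)
print(fst.transduce(["good", "nite"]))  # ['good', 'night']
```

Real FST toolkits (e.g., OpenFst) additionally support weights, nondeterminism, and composition of transducers, which this single-path sketch omits for brevity.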
Papers
Space-Efficient Representation of Entity-centric Query Language Models
Christophe Van Gysel, Mirko Hannemann, Ernest Pusateri, Youssef Oualil, Ilya Oparin
Finstreder: Simple and fast Spoken Language Understanding with Finite State Transducers using modern Speech-to-Text models
Daniel Bermuth, Alexander Poeppel, Wolfgang Reif