N-Gram Language Model
N-gram language models remain an active area of research in natural language processing, with current work focused on improving their efficiency and integrating them with neural architectures such as transformers and neural transducers. Recent research optimizes n-gram models for applications including speech recognition, handwriting recognition, and authorship verification, often through shallow or linear fusion with neural language models, and examines their effectiveness in low-resource settings. By combining the strengths of n-gram and neural approaches, these advances strike a balance between computational efficiency and accuracy.
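To make the fusion idea concrete: shallow fusion interpolates the neural model's log-probability for each candidate token with a weighted n-gram log-probability at decoding time. The sketch below is a minimal illustration, not any paper's implementation; it assumes a toy add-alpha-smoothed bigram model, and the `neural_scores` dictionary stands in for scores that would really come from a transformer or transducer decoder.

```python
import math
from collections import defaultdict

def train_bigram(corpus):
    """Count unigrams and bigrams from a list of token lists."""
    unigram = defaultdict(int)
    bigram = defaultdict(int)
    for sent in corpus:
        tokens = ["<s>"] + sent
        for prev, cur in zip(tokens, tokens[1:]):
            unigram[prev] += 1
            bigram[(prev, cur)] += 1
    return unigram, bigram

def bigram_log_prob(unigram, bigram, prev, cur, vocab_size, alpha=1.0):
    """Add-alpha smoothed bigram log-probability P(cur | prev)."""
    num = bigram[(prev, cur)] + alpha
    den = unigram[prev] + alpha * vocab_size
    return math.log(num / den)

def shallow_fusion_score(neural_logp, ngram_logp, lam=0.3):
    """Shallow fusion: neural score plus a weighted n-gram score."""
    return neural_logp + lam * ngram_logp

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
unigram, bigram = train_bigram(corpus)
vocab_size = 4  # {the, cat, dog, sat}

# Hypothetical neural log-probabilities for the token after "the".
neural_scores = {"cat": math.log(0.5), "dog": math.log(0.3), "sat": math.log(0.2)}

fused = {
    tok: shallow_fusion_score(
        nlp, bigram_log_prob(unigram, bigram, "the", tok, vocab_size)
    )
    for tok, nlp in neural_scores.items()
}
print(max(fused, key=fused.get))  # token favored by the fused score
```

The fusion weight (here `lam=0.3`) is typically tuned on held-out data; larger values push decoding toward the n-gram model, which is useful for biasing toward domain-specific vocabulary.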
Papers
LM-assisted keyword biasing with Aho-Corasick algorithm for Transducer-based ASR
Iuliia Thorbecke, Juan Zuluaga-Gomez, Esaú Villatoro-Tello, Andres Carofilis, Shashi Kumar, Petr Motlicek, Karthik Pandia, Aravind Ganapathiraju
Fast Streaming Transducer ASR Prototyping via Knowledge Distillation with Whisper
Iuliia Thorbecke, Juan Zuluaga-Gomez, Esaú Villatoro-Tello, Shashi Kumar, Pradeep Rangappa, Sergio Burdisso, Petr Motlicek, Karthik Pandia, Aravind Ganapathiraju