Hidden Markov Model
Hidden Markov Models (HMMs) are probabilistic models for sequential data that infer a sequence of hidden states from observable outputs. Current research focuses on improving HMM performance and applicability through ensemble methods, hybrid models that couple HMMs with neural networks, and efficient inference and training algorithms such as Viterbi decoding and variants of the Expectation-Maximization algorithm. HMMs remain valuable across diverse fields, including natural language processing, finance, bioinformatics, and speech recognition, offering interpretability alongside robust performance, particularly in settings with limited data or complex dependencies. More efficient algorithms and tighter integration with other machine learning techniques continue to expand the scope and impact of HMMs.
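To make the inference problem concrete, below is a minimal sketch of Viterbi decoding on a toy two-state HMM. The state names, probabilities, and observation sequence are purely illustrative and are not drawn from any of the papers listed below; they simply show how the most likely hidden-state path is recovered from observations.

```python
# Minimal Viterbi decoding sketch for a toy two-state HMM.
# All model parameters here are illustrative assumptions.
import numpy as np

states = ["Rainy", "Sunny"]            # hidden states
obs_symbols = ["walk", "shop", "clean"]  # index mapping for observations

start_p = np.array([0.6, 0.4])                 # initial state distribution
trans_p = np.array([[0.7, 0.3],                # P(next state | current state)
                    [0.4, 0.6]])
emit_p = np.array([[0.1, 0.4, 0.5],            # P(observation | state)
                   [0.6, 0.3, 0.1]])

def viterbi(obs, start_p, trans_p, emit_p):
    """Return the most likely hidden-state path for an observation sequence."""
    n_states = trans_p.shape[0]
    T = len(obs)
    # Work in log-space to avoid underflow on long sequences.
    log_delta = np.zeros((T, n_states))
    backptr = np.zeros((T, n_states), dtype=int)

    log_delta[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = log_delta[t - 1] + np.log(trans_p[:, j])
            backptr[t, j] = np.argmax(scores)
            log_delta[t, j] = scores[backptr[t, j]] + np.log(emit_p[j, obs[t]])

    # Backtrack from the best final state to recover the full path.
    path = [int(np.argmax(log_delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return list(reversed(path))

obs_seq = [0, 1, 2]  # "walk", "shop", "clean"
best_path = viterbi(obs_seq, start_p, trans_p, emit_p)
print([states[s] for s in best_path])  # ['Sunny', 'Rainy', 'Rainy']
```

The same dynamic-programming trellis underlies both classic speech-recognition decoders and the hybrid HMM decoding ideas explored in the papers below; Expectation-Maximization (Baum-Welch) would be used instead when the transition and emission probabilities themselves must be learned from data.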
Papers
HEiMDaL: Highly Efficient Method for Detection and Localization of wake-words
Arnav Kundu, Mohammad Samragh Razlighi, Minsik Cho, Priyanka Padmanabhan, Devang Naik
Hybrid HMM Decoder For Convolutional Codes By Joint Trellis-Like Structure and Channel Prior
Haoyu Li, Xuan Wang, Tong Liu, Dingyi Fang, Baoying Liu