Expectation Maximization
Expectation-Maximization (EM) is an iterative algorithm for finding maximum likelihood estimates of parameters in statistical models with latent variables. Current research focuses on improving EM's convergence speed and robustness, particularly for high-dimensional data and complex models such as Gaussian mixtures, softmax mixtures, and models used in deep learning (e.g., transformers and diffusion models), along with the development of theoretically grounded variants and efficient implementations. These advances enable more efficient and accurate parameter estimation from incomplete or noisy data, with impact on diverse fields including federated learning, blind signal processing, and medical imaging.
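To make the E-step/M-step structure concrete, here is a minimal NumPy sketch of EM for a two-component 1-D Gaussian mixture. The synthetic data, initial parameter guesses, and iteration count are illustrative assumptions and are not taken from any of the papers listed below.

# Minimal sketch: EM for a two-component 1-D Gaussian mixture (NumPy only).
# Data, initializations, and iteration count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic observations drawn from two Gaussians; the component labels are
# the latent variables that EM marginalizes over.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])

# Initial guesses for mixing weights, means, and variances.
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

def gaussian_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    weighted = pi * gaussian_pdf(x[:, None], mu, var)   # shape (n, 2)
    resp = weighted / weighted.sum(axis=1, keepdims=True)

    # M-step: re-estimate parameters from responsibility-weighted statistics.
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print("weights:", pi, "means:", mu, "variances:", var)

Each iteration is guaranteed not to decrease the observed-data log-likelihood, which is the property the convergence-oriented work summarized above builds on.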
Papers
The hybrid approach -- Convolutional Neural Networks and Expectation Maximization Algorithm -- for Tomographic Reconstruction of Hyperspectral Images
Mads J. Ahlebæk, Mads S. Peters, Wei-Chih Huang, Mads T. Frandsen, René L. Eriksen, Bjarke Jørgensen
Improvements to Supervised EM Learning of Shared Kernel Models by Feature Space Partitioning
Graham W. Pulford