Majorization-Minimization

Majorization-minimization (MM) is a family of optimization algorithms that replace a hard minimization with a sequence of easier ones: at each iterate, the algorithm minimizes a surrogate (the majorizer) that upper-bounds the original objective and touches it at the current point, which guarantees monotone descent of the objective. Current research focuses on extending MM's applicability to diverse areas, including deep learning (e.g., predictive coding networks, normalized neural networks), matrix factorization (e.g., nonnegative matrix factorization, binary matrix factorization), and tensor decomposition, often incorporating techniques such as ADMM and extrapolation for improved efficiency and convergence guarantees. MM's widespread adoption stems from its ability to handle non-convex and high-dimensional problems, offering efficient and often provably convergent solutions for a range of machine learning and signal processing tasks.
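As a minimal sketch of the idea (not drawn from any specific paper above), consider minimizing the nonsmooth objective f(x) = Σᵢ |x − aᵢ|. Each term |r| admits the quadratic majorizer r²/(2|rₖ|) + |rₖ|/2, which is tight at r = rₖ, so the MM surrogate is a weighted least-squares problem whose minimizer is a weighted mean. The function and variable names here are illustrative:

```python
import numpy as np

def mm_lad(a, x0=None, iters=100, eps=1e-12):
    """Minimize f(x) = sum_i |x - a_i| by majorization-minimization.

    Majorizer at x_k: |r| <= r^2 / (2|r_k|) + |r_k| / 2, tight at r = r_k.
    Minimizing the resulting quadratic surrogate gives a weighted mean
    of the data with weights 1/|x_k - a_i|, so f decreases monotonically.
    """
    a = np.asarray(a, dtype=float)
    x = float(np.mean(a)) if x0 is None else float(x0)
    for _ in range(iters):
        # Weights from the quadratic majorizer; eps guards division by zero.
        w = 1.0 / np.maximum(np.abs(x - a), eps)
        x_new = np.sum(w * a) / np.sum(w)  # closed-form surrogate minimizer
        if abs(x_new - x) < 1e-10:
            break
        x = x_new
    return x

data = [1.0, 2.0, 3.0, 4.0, 100.0]
est = mm_lad(data)  # approaches the median of the data
```

Each iteration solves a smooth quadratic problem in closed form, yet the sequence of iterates decreases the original nonsmooth objective, which is the essential MM pattern reused (with more elaborate surrogates) in the factorization and deep-learning settings mentioned above.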

Papers