Average Approximation
Average approximation concerns representing complex functions or data with simpler models, trading a controlled loss of accuracy for computational tractability. Current research pursues this through several approaches, including low-rank matrix approximations for efficient parameter estimation in large language models and neural networks, and novel algorithms such as adaptive proximal gradient methods for optimization under relaxed smoothness assumptions. These advances improve the efficiency and scalability of machine learning algorithms, enhance the interpretability of complex models, and enable real-time applications in areas such as robotics and control systems.
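As a concrete illustration of the low-rank idea mentioned above, the sketch below uses a truncated SVD (a standard low-rank technique, not a method from any specific paper listed here) to approximate a matrix with far fewer parameters; the random matrix and rank choice are illustrative assumptions.

```python
import numpy as np

# Illustrative data: a random 100 x 80 matrix (an assumption for the demo).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 80))

def low_rank_approx(A, k):
    """Best rank-k approximation of A in the Frobenius norm (truncated SVD)."""
    # Keep only the k largest singular values and their singular vectors.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

k = 10
A_k = low_rank_approx(A, k)

# Storage drops from m*n entries to k*(m + n + 1) for the factors.
full_params = A.size
low_rank_params = k * (A.shape[0] + A.shape[1] + 1)

# Relative reconstruction error; by the Eckart-Young theorem no rank-k
# matrix can do better in the Frobenius norm.
rel_err = np.linalg.norm(A - A_k, "fro") / np.linalg.norm(A, "fro")
```

The same factorization pattern underlies low-rank parameter-efficient methods for large models, where weight updates are constrained to a product of two thin matrices.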
Papers
Approximate, Adapt, Anonymize (3A): a Framework for Privacy Preserving Training Data Release for Machine Learning
Tamas Madl, Weijie Xu, Olivia Choudhury, Matthew Howard
Heuristic Algorithms for the Approximation of Mutual Coherence
Gregor Betz, Vera Chekan, Tamara Mchedlidze
Approximate information for efficient exploration-exploitation strategies
Alex Barbier-Chebbah, Christian L. Vestergaard, Jean-Baptiste Masson