Big Gain
"Big Gain" research broadly targets improving the performance and efficiency of machine learning models, with particular attention to challenges such as fairness, interpretability, and robustness. Current efforts center on novel algorithms and model architectures (e.g., contextual bandits, ResNet variants, and transformer-based models), often employing techniques such as self-supervised learning, knowledge distillation, and robust loss functions. These advances enhance model accuracy, reliability, and explainability, with significant implications for applications including personalized recommendations, medical AI, and industrial automation.
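Of the techniques named above, knowledge distillation is the most self-contained to illustrate. The sketch below is a generic, minimal implementation of the standard temperature-softened distillation loss (in the style of Hinton et al.), not code from any of the papers listed here; the logits and temperature value are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher T yields a softer distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradients keep a comparable magnitude as T varies.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl

# A student that matches the teacher exactly incurs zero loss;
# a mismatched student incurs a strictly positive loss.
loss_match = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
loss_off = distillation_loss([0.0, 2.0, -1.0], [2.0, 0.5, -1.0])
```

In practice this term is combined with the ordinary cross-entropy on hard labels, with a weighting hyperparameter balancing the two.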
Papers
Large Stepsize Gradient Descent for Non-Homogeneous Two-Layer Networks: Margin Improvement and Fast Optimization
Yuhang Cai, Jingfeng Wu, Song Mei, Michael Lindsey, Peter L. Bartlett
Sources of Gain: Decomposing Performance in Conditional Average Dose Response Estimation
Christopher Bockel-Rickermann, Toon Vanderschueren, Tim Verdonck, Wouter Verbeke