Bias-Variance Trade-off
The bias-variance trade-off describes the inherent tension in machine learning between accurately representing the underlying data (low bias) and avoiding sensitivity to noise in the training sample (low variance). Current research focuses on understanding and mitigating this trade-off in settings such as overparameterized models, reinforcement learning, and robust estimation, often using regularization, ensemble methods, and adaptive algorithms to control model complexity and improve generalization. This work is important for the reliability and performance of machine learning models across diverse applications, from recommendation systems and automated vehicles to medical diagnosis and scientific discovery.
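The trade-off can be made concrete with the classic decomposition of expected squared error into bias², variance, and irreducible noise. The sketch below (a minimal illustration; the target function, sample sizes, and polynomial degrees are arbitrary choices, not drawn from any of the listed papers) fits polynomials of increasing degree to repeated noisy samples of a known function and estimates bias² and variance empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    # Known target function so bias can be measured exactly.
    return np.sin(2 * np.pi * x)

def bias_variance(degree, n_trials=200, n_samples=30, noise=0.3):
    """Estimate bias^2 and variance of degree-d polynomial regression
    by refitting on many independent noisy training sets."""
    x_test = np.linspace(0.0, 1.0, 50)
    preds = np.empty((n_trials, x_test.size))
    for t in range(n_trials):
        x = rng.uniform(0.0, 1.0, n_samples)
        y = true_fn(x) + rng.normal(0.0, noise, n_samples)
        coefs = np.polyfit(x, y, degree)
        preds[t] = np.polyval(coefs, x_test)
    mean_pred = preds.mean(axis=0)
    bias_sq = np.mean((mean_pred - true_fn(x_test)) ** 2)   # squared bias
    variance = np.mean(preds.var(axis=0))                   # variance across refits
    return bias_sq, variance

for d in (1, 3, 12):
    b, v = bias_variance(d)
    print(f"degree {d:2d}: bias^2 = {b:.3f}, variance = {v:.3f}")
```

A low-degree model underfits (high bias², low variance), while a high-degree model tracks the noise in each sample (low bias², high variance); classical model selection seeks the degree that balances the two, though recent work on overparameterized models shows this picture can break down past the interpolation threshold.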
Papers
Multiclass learning with margin: exponential rates with no bias-variance trade-off
Stefano Vigogna, Giacomo Meanti, Ernesto De Vito, Lorenzo Rosasco
Doubly Robust Off-Policy Evaluation for Ranking Policies under the Cascade Behavior Model
Haruka Kiyohara, Yuta Saito, Tatsuya Matsuhiro, Yusuke Narita, Nobuyuki Shimizu, Yasuo Yamamoto