Data-Dependent Regularization
Data-dependent regularization techniques are increasingly used to improve the performance and robustness of machine learning models, particularly in high-dimensional settings and when dealing with domain shifts or uncertainty quantification. Current research focuses on developing and analyzing data-dependent regularizers, often implemented through neural networks or Bayesian frameworks, to address issues like covariate shift, calibration, and the efficient handling of massive datasets. These methods offer significant potential for enhancing model generalization, reducing overfitting, and providing more reliable uncertainty estimates across diverse applications, from image processing to large language models.
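To make the idea concrete, here is a minimal sketch of one common form of data-dependent regularization: ridge regression where the penalty matrix is derived from the empirical feature variances rather than being a fixed isotropic penalty. The function name and the specific choice of penalty (stronger shrinkage on low-variance features) are illustrative assumptions, not a method from any particular paper.

```python
import numpy as np

def fit_data_dependent_ridge(X, y, lam=1.0, eps=1e-6):
    """Ridge regression with a data-dependent Tikhonov penalty.

    Instead of the standard isotropic penalty lam * ||w||^2, the
    penalty matrix Gamma is built from the empirical feature
    variances, so features with little variation in the observed
    data are shrunk more aggressively (an illustrative choice).
    """
    feature_var = X.var(axis=0)
    gamma = np.diag(1.0 / (feature_var + eps))  # data-dependent penalty
    # Closed-form solution: w = (X^T X + lam * Gamma)^{-1} X^T y
    w = np.linalg.solve(X.T @ X + lam * gamma, X.T @ y)
    return w

# Toy usage: features 4 and 5 have much lower variance, so the
# data-dependent penalty shrinks their coefficients harder.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) * np.array([1.0, 1.0, 1.0, 0.1, 0.01])
true_w = np.array([1.0, -2.0, 0.5, 3.0, 5.0])
y = X @ true_w + 0.1 * rng.normal(size=200)

w_hat = fit_data_dependent_ridge(X, y, lam=0.5)
print(w_hat)
```

Because the penalty adapts to the data distribution, the effective amount of shrinkage per direction changes with the training set, which is exactly what distinguishes these methods from fixed-penalty regularizers such as plain L2.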