Robust Generalization
Robust generalization in machine learning focuses on developing models that perform reliably not only on the data they were trained on but also on unseen data, including inputs corrupted by noise or distortion. Current research explores techniques such as specialized optimizers, ensemble methods that balance sharpness and diversity, and adversarial training schemes that mitigate overfitting, typically applied to deep neural networks, graph neural networks, and vision-language models. These advances are crucial for deploying machine learning models in real-world settings, where unexpected inputs are inevitable, and they improve the reliability and safety of AI systems across domains.
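One widely used baseline in this area is adversarial training: the model is optimized on worst-case perturbed inputs rather than clean ones. The sketch below is a minimal, illustrative PGD-style adversarial training loop in PyTorch; the names `model`, `train_loader`, and the perturbation hyperparameters are assumptions for illustration, not taken from any particular paper listed here.

```python
# Minimal sketch of PGD adversarial training (assumed generic setup,
# not a specific paper's method).
import torch
import torch.nn as nn


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Craft adversarial examples by iterated gradient ascent on the loss,
    projected back into an L-infinity ball of radius eps around x."""
    loss_fn = nn.CrossEntropyLoss()
    # Random start inside the eps-ball.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball and valid range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def adversarial_training_epoch(model, train_loader, optimizer, device="cpu"):
    """One epoch of training on adversarial examples instead of clean inputs."""
    model.train()
    loss_fn = nn.CrossEntropyLoss()
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

In practice, many of the approaches surveyed above modify this basic recipe, for example by regularizing the sharpness of the loss surface or by combining several adversarially trained members into an ensemble.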