Training Robustness
Training robustness in machine learning focuses on developing models that maintain high performance despite challenges such as noisy data, adversarial attacks, and distribution shifts. Current research emphasizes techniques like symmetric reinforcement learning losses, adaptive optimization methods (e.g., rapid network adaptation), and data augmentation strategies (e.g., mixup and its extensions) to improve model stability and generalization across diverse tasks and datasets. These advances are crucial for building reliable and trustworthy AI systems: by mitigating vulnerabilities, they improve real-world applicability in fields ranging from natural language processing to computer vision.
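As one concrete illustration of the augmentation strategies mentioned above, here is a minimal sketch of the basic mixup recipe using numpy. The function name `mixup_batch` and the parameter defaults are illustrative choices, not from any specific paper's implementation; mixup itself draws a Beta-distributed coefficient and convexly combines pairs of inputs and their one-hot labels.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mixup: convexly combine randomly paired examples and their
    one-hot labels with a coefficient lam ~ Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))      # random partner for each example
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix, lam

# Toy batch: 4 examples, 3 features, 2 classes (one-hot labels)
x = np.arange(12, dtype=float).reshape(4, 3)
y = np.eye(2)[[0, 1, 0, 1]]
x_mix, y_mix, lam = mixup_batch(x, y, alpha=0.2)
```

Training on `(x_mix, y_mix)` instead of the raw batch regularizes the model toward linear behavior between examples, which is one reason such augmentation tends to improve robustness to label noise and perturbations.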