Corruption Robustness
Corruption robustness in machine learning focuses on developing models that remain accurate under common data corruptions, such as noise, blur, and weather effects, as well as adversarial perturbations, thereby improving the reliability of AI systems in real-world scenarios. Current research emphasizes enhancing robustness through data augmentation techniques (such as IPMix and PRIME), comparing the robustness of different model architectures (including CNNs and Transformers), and employing methods such as integrating Hopfield networks or dynamically updating BatchNorm statistics at test time. This field is crucial for deploying reliable AI systems in safety-critical applications, such as autonomous driving and medical diagnosis, where robustness to unexpected data variations is paramount.
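One of the ideas mentioned above, dynamically updating BatchNorm statistics, can be sketched in plain NumPy. The function names and values below are illustrative, not taken from any particular paper: the running mean and variance learned on clean data are gradually moved toward the statistics of the (possibly corrupted) test batches, so that normalization re-centres shifted activations.

```python
import numpy as np

def batchnorm_infer(x, running_mean, running_var, gamma, beta, eps=1e-5):
    """Standard BatchNorm inference using stored running statistics."""
    return gamma * (x - running_mean) / np.sqrt(running_var + eps) + beta

def update_running_stats(x, running_mean, running_var, momentum=0.1):
    """Move running statistics toward the current (possibly corrupted)
    test batch -- a minimal sketch of test-time statistics adaptation."""
    batch_mean = x.mean(axis=0)
    batch_var = x.var(axis=0)
    new_mean = (1 - momentum) * running_mean + momentum * batch_mean
    new_var = (1 - momentum) * running_var + momentum * batch_var
    return new_mean, new_var

# Statistics learned on clean training data (illustrative values).
rng = np.random.default_rng(0)
mean, var = np.zeros(4), np.ones(4)
gamma, beta = np.ones(4), np.zeros(4)

# A "corrupted" test batch: features shifted and rescaled by the corruption.
batch = rng.normal(loc=2.0, scale=3.0, size=(64, 4))

# Repeated updates pull the statistics toward the corrupted distribution.
for _ in range(20):
    mean, var = update_running_stats(batch, mean, var)
out = batchnorm_infer(batch, mean, var, gamma, beta)
print(np.abs(out.mean(axis=0)).max())  # normalized output is near zero-mean
```

Without the adaptation step, the stale clean-data statistics would leave the corrupted activations badly off-center, which is one intuition behind this family of methods.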