Robustness Gap
The "robustness gap" refers to the discrepancy between the performance of machine learning models under ideal conditions and their performance when faced with real-world challenges like adversarial attacks, distribution shifts, or noisy data. Current research focuses on improving the robustness of various architectures, including variational autoencoders (VAEs), vision transformers (ViTs), and ResNets, often employing techniques like adversarial training and improved linear bounding functions for activation functions. Bridging this gap is crucial for deploying reliable AI systems in safety-critical applications and ensuring fairness across different subgroups, as evidenced by studies revealing robustness disparities based on factors like age, gender, and skin tone. Addressing the robustness gap is therefore a key challenge in advancing the trustworthiness and reliability of machine learning.