Robustness Bias
Robustness bias refers to the uneven susceptibility of machine learning models, particularly deep neural networks such as Transformers, to adversarial attacks or noisy inputs across different data subgroups. Current research focuses on identifying and mitigating this bias through dataset diversification, improved model architectures (e.g., RBFormer), and robustness metrics that go beyond simple worst-case analysis (e.g., average-case robustness). Understanding and addressing robustness bias is crucial for building trustworthy and fair AI systems: it helps ensure consistent performance across diverse populations and prevents discriminatory outcomes in real-world applications.
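To make the idea concrete, the sketch below estimates average-case robustness separately for each data subgroup: every input is perturbed with random Gaussian noise several times, and the fraction of perturbations under which the model still predicts the true label is recorded per group. A persistent gap between the per-group scores is one simple signal of robustness bias. This is an illustrative sketch, not a method from any of the cited papers; the function name `subgroup_robustness` and the parameters `sigma` and `n_samples` are assumptions, and `model` is any classifier with a scikit-learn-style `predict` method.

```python
# Minimal sketch: per-subgroup accuracy under random input noise,
# an average-case robustness estimate (hypothetical helper, not from the papers above).
import numpy as np

def subgroup_robustness(model, X, y, groups, sigma=0.1, n_samples=20, seed=0):
    """Estimate average-case robust accuracy for each subgroup.

    For every input we draw `n_samples` Gaussian perturbations of scale
    `sigma` and record how often the model still predicts the true label.
    Robustness bias shows up as a gap between the per-group scores.
    """
    rng = np.random.default_rng(seed)
    scores = {}
    for g in np.unique(groups):
        Xg, yg = X[groups == g], y[groups == g]
        correct = 0
        for _ in range(n_samples):
            noisy = Xg + rng.normal(0.0, sigma, size=Xg.shape)
            correct += (model.predict(noisy) == yg).sum()
        scores[g] = correct / (len(yg) * n_samples)
    return scores  # e.g., {0: 0.91, 1: 0.74} -> a 17-point robustness gap
```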