Simplicity Bias
Simplicity bias is the tendency of machine learning models to favor simpler solutions over more complex, potentially more accurate ones, and it is a significant area of research. Current investigations focus on characterizing this bias across architectures, notably two-layer ReLU networks and transformers, and on its consequences for generalization, robustness, and fairness; mitigation strategies include Sharpness-Aware Minimization and targeted regularization. Addressing simplicity bias is crucial for improving model performance in out-of-distribution settings and for preventing the amplification of biases already present in the training data, leading to more reliable and equitable AI systems.
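The phenomenon can be illustrated with a minimal sketch (my own illustrative example, not drawn from any of the papers below): a two-layer ReLU network is trained on synthetic data in which the label is predicted both by a simple, linearly separable feature and by a more complex XOR-encoded feature pair. Permutation-based feature importance then shows that the trained network relies almost entirely on the simple feature. All data-generation choices (noise scale, width, learning rate) are arbitrary assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 2000, 32

# Label y in {-1, +1}; two redundant predictors of y:
#   - x_simple: a noisy but linearly separable copy of y (the "simple" feature)
#   - (a, b): an XOR encoding, a * b == y (the "complex" feature pair)
y = rng.choice([-1.0, 1.0], size=n)
x_simple = y + 0.1 * rng.normal(size=n)
a = rng.choice([-1.0, 1.0], size=n)
b = a * y
X = np.stack([x_simple, a, b], axis=1)

# Two-layer ReLU network, trained with full-batch gradient descent
W1 = rng.normal(size=(3, h)) * 0.5
b1 = np.zeros(h)
w2 = rng.normal(size=h) * 0.1
b2 = 0.0

def forward(X):
    z = X @ W1 + b1
    hid = np.maximum(z, 0.0)
    return z, hid, hid @ w2 + b2

for _ in range(300):
    z, hid, out = forward(X)
    # Logistic loss: mean log(1 + exp(-y * out)); clip for numerical stability
    g_out = -y / (1.0 + np.exp(np.clip(y * out, -50, 50))) / n
    g_w2 = hid.T @ g_out
    g_b2 = g_out.sum()
    g_hid = np.outer(g_out, w2) * (z > 0)
    g_W1 = X.T @ g_hid
    g_b1 = g_hid.sum(axis=0)
    W1 -= 1.0 * g_W1; b1 -= 1.0 * g_b1
    w2 -= 1.0 * g_w2; b2 -= 1.0 * g_b2

def acc(X):
    return float(np.mean(np.sign(forward(X)[2]) == y))

# Permutation importance: shuffle one feature group across examples,
# breaking its relation to y while preserving its marginal distribution.
perm = rng.permutation(n)
X_shuf_complex = X.copy(); X_shuf_complex[:, 1:] = X[perm, 1:]
X_shuf_simple = X.copy(); X_shuf_simple[:, 0] = X[perm, 0]

acc_full = acc(X)                        # near-perfect
acc_shuf_complex = acc(X_shuf_complex)   # stays high: XOR pair barely used
acc_shuf_simple = acc(X_shuf_simple)     # collapses toward chance
print(acc_full, acc_shuf_complex, acc_shuf_simple)
```

Although the network has ample capacity to learn the XOR encoding, gradient descent fits the linear feature first; once the margin grows, gradients vanish and the complex feature is left unlearned, so accuracy survives shuffling the XOR pair but collapses when the simple feature is shuffled.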