Shortcut Pattern Avoidance Loss
Shortcut pattern avoidance loss focuses on mitigating the tendency of machine learning models, particularly deep neural networks and large language models, to rely on spurious correlations (shortcuts) in the training data rather than learning the genuine underlying patterns. A classic example is an image classifier that recognizes cows by the presence of grass in the background: the shortcut works on typical training images but fails as soon as the spurious cue is absent. Current research investigates various methods to detect and avoid such shortcuts, including techniques based on adversarial training, mixture-of-experts models, variational autoencoders, and topological data analysis, with the aim of improving model robustness and generalization. Successfully addressing shortcut learning is crucial for enhancing the reliability and trustworthiness of AI systems across diverse applications, from medical image analysis to natural language understanding.
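The failure mode and one simple mitigation can be illustrated with a minimal sketch. The example below is hypothetical and not drawn from any specific paper: it builds a synthetic dataset in which a spurious binary feature agrees with the label 95% of the time during training but is uninformative at test time, trains a plain logistic regression (which latches onto the shortcut), and then applies group-balanced reweighting, upweighting the minority group where the shortcut disagrees with the label so the shortcut stops paying off. This reweighting is related in spirit to distributionally robust optimization, and is only one of many possible shortcut-avoidance strategies.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, spur_corr):
    """Synthetic data: a weakly predictive core feature plus a
    spurious binary feature that matches the label with prob. spur_corr."""
    y = rng.integers(0, 2, n)
    core = y + rng.normal(0.0, 1.0, n)              # genuine but noisy signal
    agree = rng.random(n) < spur_corr
    spur = np.where(agree, y, 1 - y).astype(float)  # shortcut feature
    return np.column_stack([core, spur]), y

def train_logreg(X, y, w=None, lr=0.1, steps=3000):
    """Weighted logistic regression via gradient descent."""
    if w is None:
        w = np.ones(len(y))
    w = w / w.sum()
    Xb = np.column_stack([X, np.ones(len(y))])      # add bias column
    theta = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ theta))
        theta -= lr * (Xb.T @ (w * (p - y)))
    return theta

def accuracy(theta, X, y):
    Xb = np.column_stack([X, np.ones(len(y))])
    return float(np.mean((Xb @ theta > 0) == y))

Xtr, ytr = make_data(4000, 0.95)   # shortcut holds 95% of the time in training
Xte, yte = make_data(4000, 0.50)   # shortcut is pure noise at test time

baseline = train_logreg(Xtr, ytr)

# Group-balanced reweighting: give the group where the shortcut
# disagrees with the label the same total weight as the majority group,
# so relying on the shortcut no longer reduces the training loss.
agree = Xtr[:, 1] == ytr
wts = np.where(agree, 1.0 / agree.sum(), 1.0 / (~agree).sum())
reweighted = train_logreg(Xtr, ytr, w=wts)

print("baseline test acc:  ", accuracy(baseline, Xte, yte))
print("reweighted test acc:", accuracy(reweighted, Xte, yte))
```

Under these assumptions, the baseline model's test accuracy drops because its predictions track the now-random shortcut feature, while the reweighted model leans on the core feature and generalizes better. In real settings the groups are usually unknown, which is precisely why the detection methods listed above are an active research area.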