Model Failure
Research on model failure in machine learning, particularly for large language models (LLMs) and deep learning systems, focuses on identifying, understanding, and mitigating instances where models deviate from expected performance. Current work emphasizes methods for detecting systematic biases and failures, often leveraging techniques such as prompt engineering, uncertainty quantification, and generative models that create targeted datasets for improving model robustness. This work is crucial for ensuring the reliability and fairness of AI systems across diverse applications, ranging from healthcare and education to safety-critical domains like aviation, where model failures can have significant consequences.
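As one concrete illustration of the uncertainty-quantification idea mentioned above, a common baseline is to flag predictions whose softmax entropy is high as likely failure cases. The sketch below is a minimal, generic example, not a method from any specific paper; the function names and the entropy threshold are illustrative assumptions.

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy (in nats) of each row of class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return -(p * np.log(p)).sum(axis=-1)

def flag_likely_failures(probs: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """Mark predictions whose entropy exceeds a (hypothetical) threshold as suspect."""
    return predictive_entropy(probs) > threshold

probs = np.array([
    [0.97, 0.01, 0.01, 0.01],  # confident prediction -> not flagged
    [0.25, 0.25, 0.25, 0.25],  # maximally uncertain  -> flagged
])
print(flag_likely_failures(probs))  # [False  True]
```

In practice the threshold would be calibrated on a held-out set, and stronger estimators (e.g., Monte Carlo dropout or deep ensembles) can replace the single softmax pass, but the flagging logic stays the same.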