Model Collapse
Model collapse is the progressive performance degradation of machine learning models, particularly large language models (LLMs) and other generative models, that occurs when successive generations are trained on data produced by earlier iterations of the same or similar models. Current research focuses on the causes of this phenomenon, including the proportion of synthetic data in the training mix, model architecture (e.g., transformers, diffusion models), and training method (e.g., self-supervised learning, reinforcement learning from human feedback). Addressing model collapse is crucial for the reliability and safety of increasingly prevalent AI systems, since it degrades both the accuracy and the fairness of model outputs across applications.
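The core mechanism can be illustrated with a toy simulation (not from the source): repeatedly fit a Gaussian to samples drawn from the previous generation's fitted Gaussian. Finite-sample estimation error compounds across generations, and the fitted distribution's spread tends to shrink toward zero, mirroring how generative models trained on their own outputs lose the tails of the original data distribution. The sample sizes and generation counts below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_generations(n_samples=20, n_generations=2000):
    """Recursively refit a Gaussian to data sampled from the previous fit.

    Each generation sees only n_samples draws from its predecessor's
    estimate, so estimation error accumulates generation after generation.
    """
    mu, sigma = 0.0, 1.0  # the "real" data distribution: N(0, 1)
    sigmas = [sigma]
    for _ in range(n_generations):
        synthetic = rng.normal(mu, sigma, n_samples)  # train on model output
        mu, sigma = synthetic.mean(), synthetic.std(ddof=1)  # refit
        sigmas.append(sigma)
    return sigmas

sigmas = run_generations()
print(f"initial std: {sigmas[0]:.3f}, final std: {sigmas[-1]:.3g}")
```

The downward drift in the estimated standard deviation is the toy analogue of the variance loss and tail truncation reported in the model-collapse literature; mixing in fresh real data each generation (not shown) counteracts it.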