Contamination Detection
Contamination detection identifies cases where the training data of machine learning models, particularly large language models (LLMs), overlaps with evaluation datasets, artificially inflating benchmark performance. Current research emphasizes robust statistical methods and novel algorithms, such as paired-confidence significance testing and generalization-based approaches, that detect contamination by examining the distribution of model outputs or performance discrepancies across related benchmarks. These efforts are crucial for ensuring the trustworthiness and reliability of LLM evaluations and for improving how well these models generalize to real-world applications.
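To make the paired-testing idea concrete, here is a minimal sketch assuming access to per-example log-likelihoods from the model under test. The premise: if a model scores benchmark items in their canonical wording systematically higher than lightly reworded variants of the same items, that asymmetry suggests memorization of the benchmark. The function name, the score values, and the alpha threshold are illustrative assumptions, not the method of any specific paper.

```python
# Hedged sketch of a paired significance test for benchmark contamination.
from scipy.stats import ttest_rel

def paired_contamination_test(ll_original, ll_perturbed, alpha=0.05):
    """One-sided paired t-test; H1: canonical benchmark text is scored higher."""
    result = ttest_rel(ll_original, ll_perturbed, alternative="greater")
    return {
        "t_statistic": result.statistic,
        "p_value": result.pvalue,
        "flag_contaminated": result.pvalue < alpha,
    }

# Hypothetical per-example log-likelihoods (nats) for eight benchmark items.
ll_original = [-12.1, -9.8, -11.4, -10.2, -8.9, -13.0, -9.5, -10.8]
ll_perturbed = [-14.6, -11.2, -13.9, -12.5, -10.7, -15.1, -11.0, -12.9]
print(paired_contamination_test(ll_original, ll_perturbed))
```

A caveat on the design: the perturbation step must preserve item difficulty (e.g., meaning-preserving paraphrase), otherwise the test conflates contamination with ordinary sensitivity to wording.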
Papers
Seventeen papers on this topic, published between July 26, 2022 and November 6, 2024.