Common Bias
Bias in machine learning models, particularly large language models (LLMs), is a significant research area focused on identifying and mitigating systematic errors in model outputs that stem from skewed training data. Current efforts concentrate on detecting biases related to geography, socioeconomics, demographics, and even seemingly innocuous factors such as JPEG compression in image datasets, employing techniques like data augmentation, prompt engineering, and contrastive learning to improve model robustness. Understanding and addressing these biases is crucial for ensuring fairness, accuracy, and trustworthiness in AI systems across diverse applications, from news recommendation to human-robot interaction.
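One of the mitigation techniques mentioned above, data augmentation, is often applied as counterfactual augmentation: for each training example, a variant with swapped demographic terms is added so the model sees both forms equally often. The sketch below is a minimal, hypothetical illustration (the word pairs and corpus are invented for the example); production pipelines use curated lexicons and handle casing, morphology, and context.

```python
# Counterfactual data augmentation: a minimal sketch for debiasing
# training text. SWAP_PAIRS is a toy lexicon, not a real resource.
SWAP_PAIRS = {"he": "she", "she": "he", "man": "woman", "woman": "man"}

def counterfactual(sentence: str) -> str:
    """Swap demographic terms to produce a counterfactual variant."""
    return " ".join(SWAP_PAIRS.get(tok.lower(), tok)
                    for tok in sentence.split())

def augment(corpus: list[str]) -> list[str]:
    """Return originals plus counterfactuals, so each demographic
    variant appears with equal frequency in training."""
    return corpus + [counterfactual(s) for s in corpus]

corpus = ["he is a doctor", "she stayed home"]
print(augment(corpus))
# → ['he is a doctor', 'she stayed home',
#    'she is a doctor', 'he stayed home']
```

The same idea extends to other bias axes (e.g., swapping geographic or socioeconomic markers), provided an appropriate substitution lexicon exists.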