Societal Bias
Societal biases, which large language models (LLMs) and other AI systems absorb from prejudices present in their training data, threaten fairness and equity in applications ranging from recruitment to healthcare. Current research focuses on detecting and mitigating these biases using techniques such as adversarial training, data augmentation, and prompt engineering across model architectures from BERT to modern LLMs, with a growing emphasis on multilingual and culturally sensitive datasets. This work is central to responsible AI development and deployment: it helps prevent the amplification of harmful stereotypes and promotes more equitable outcomes across diverse populations.
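To make one of the mitigation techniques mentioned above concrete, the sketch below illustrates counterfactual data augmentation, a common form of data augmentation for debiasing: each training sentence is paired with a copy in which gendered terms are swapped, so the model sees both variants. This is a minimal illustration, not any specific paper's method; the word-pair list, function names, and the naive handling of ambiguous terms (e.g., possessive "her") are assumptions made for brevity.

```python
import re

# Illustrative word pairs for counterfactual data augmentation (CDA).
# This short list is an assumption for the sketch; real pipelines use
# larger, curated lexicons covering pronouns, titles, and names, and
# handle ambiguous forms such as "her"/"his" with part-of-speech tags.
GENDER_PAIRS = [
    ("he", "she"), ("him", "her"),
    ("man", "woman"), ("men", "women"),
    ("father", "mother"), ("son", "daughter"),
]

def build_swap_table(pairs):
    """Map each term to its counterpart in both directions."""
    table = {}
    for a, b in pairs:
        table[a] = b
        table[b] = a
    return table

SWAP = build_swap_table(GENDER_PAIRS)

def counterfactual(sentence):
    """Return a copy of the sentence with gendered terms swapped."""
    def swap_word(match):
        word = match.group(0)
        repl = SWAP.get(word.lower())
        if repl is None:
            return word
        # Preserve the capitalization of the original token.
        return repl.capitalize() if word[0].isupper() else repl
    return re.sub(r"[A-Za-z]+", swap_word, sentence)

def augment(corpus):
    """Pair every sentence with its counterfactual to balance the data."""
    augmented = []
    for sentence in corpus:
        augmented.append(sentence)
        flipped = counterfactual(sentence)
        if flipped != sentence:
            augmented.append(flipped)
    return augmented

if __name__ == "__main__":
    corpus = ["The doctor said he would call the nurse."]
    for line in augment(corpus):
        print(line)
```

Running the example prints the original sentence alongside "The doctor said she would call the nurse.", doubling the coverage of gendered contexts; fine-tuning on the balanced corpus is what reduces the learned association between professions and gender.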