Encoded Bias
Encoded bias refers to the unintentional incorporation of societal prejudices into machine learning models, leading to unfair or discriminatory outcomes. Current research focuses on identifying and mitigating these biases across model families, including transformers, diffusion models, and collaborative filtering algorithms, using techniques such as adversarial training, iterative projection-based debiasing, and contrastive learning to improve fairness while preserving model performance. This work is crucial for the ethical and responsible development of AI systems, with impact on fields ranging from image generation and natural language processing to recommendation systems and computer vision.