Group Discrimination
Group discrimination in machine learning focuses on identifying and mitigating biases that lead to unfair or unequal outcomes for different demographic groups. Current research emphasizes developing methods to measure discrimination across various model types, including large language models and neural networks, often employing techniques like contrastive learning, attention mechanisms, and causal inference to pinpoint and correct biases. This work is crucial for ensuring fairness and ethical considerations in AI applications, impacting fields ranging from hiring practices and loan applications to healthcare and criminal justice. The ultimate goal is to create more equitable and trustworthy AI systems.
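One common way to quantify the kind of group-level disparity described above is the demographic parity difference: the gap in positive-prediction rates between demographic groups. The sketch below is purely illustrative (the function name and data are assumptions, not drawn from any of the papers listed here).

```python
# Minimal sketch of a group-discrimination metric: demographic parity difference,
# i.e. the gap in positive-prediction rates between demographic groups.

def demographic_parity_difference(predictions, groups):
    """predictions: list of 0/1 model outputs; groups: parallel list of group labels."""
    rates = {}
    for g in set(groups):
        members = [p for p, gi in zip(predictions, groups) if gi == g]
        rates[g] = sum(members) / len(members)  # positive-prediction rate per group
    return max(rates.values()) - min(rates.values())

# Example: group "a" receives positive outcomes 75% of the time, group "b" only 25%.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, grps))  # → 0.5
```

A value of 0 indicates parity; larger values signal a bigger disparity that mitigation methods aim to reduce.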
41 papers
Papers
February 4, 2025
ASCenD-BDS: Adaptable, Stochastic and Context-aware framework for Detection of Bias, Discrimination and Stereotyping
Rajiv Bahl, Venkatesan N, Parimal Aglawe, Aastha Sarasapalli, Bhavya Kancharla, Chaitanya Kolukuluri, Harish Mohite, Japneet Hora, Kiran Kakollu, et al.
February 3, 2025
Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs
Angelina Wang, Michelle Phan, Daniel E. Ho, Sanmi Koyejo