Group Discrimination
Group discrimination in machine learning focuses on identifying and mitigating biases that lead to unfair or unequal outcomes for different demographic groups. Current research emphasizes developing methods to measure discrimination across various model types, including large language models and neural networks, often employing techniques like contrastive learning, attention mechanisms, and causal inference to pinpoint and correct biases. This work is crucial for ensuring fairness and ethical considerations in AI applications, impacting fields ranging from hiring practices and loan applications to healthcare and criminal justice. The ultimate goal is to create more equitable and trustworthy AI systems.
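As an illustration of what "measuring discrimination across demographic groups" can mean in practice, the sketch below computes two widely used group-fairness metrics, demographic parity difference and disparate impact ratio, from a model's binary decisions and a group attribute. The function names and data are illustrative assumptions, not drawn from any specific paper in this collection.

```python
# Hedged sketch: two standard group-fairness metrics over binary decisions.
# A decision of 1 means a favorable outcome (e.g., loan approved, candidate hired).

def selection_rate(preds, groups, group):
    """Fraction of favorable decisions (pred == 1) within one group."""
    group_preds = [p for p, g in zip(preds, groups) if g == group]
    return sum(group_preds) / len(group_preds)

def demographic_parity_diff(preds, groups):
    """Largest gap in selection rates between any two groups (0 is perfectly fair)."""
    rates = [selection_rate(preds, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

def disparate_impact_ratio(preds, groups):
    """Ratio of the lowest to the highest group selection rate (1 is perfectly fair)."""
    rates = [selection_rate(preds, groups, g) for g in set(groups)]
    return min(rates) / max(rates)

# Toy example: group "a" is favored 3 times out of 4, group "b" only once.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_diff(preds, groups))  # 0.75 - 0.25 = 0.5
print(disparate_impact_ratio(preds, groups))   # 0.25 / 0.75 ≈ 0.333
```

A common rule of thumb (the "four-fifths rule" from US employment law) flags a disparate impact ratio below 0.8 as potential discrimination; the toy data above would fail that check.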
Papers
(Paper listing, 19 entries dated February 4, 2022 through August 4, 2023; titles and links were not preserved in extraction.)