Protected Attribute
Protected attributes, such as race, gender, or age, are sensitive characteristics that should not unfairly influence automated decision-making. Current research focuses on mitigating bias stemming from the use of these attributes in machine learning models, exploring methods like data obfuscation, adversarial training, and the development of fairness-aware algorithms (e.g., adaptations of Naive Bayes and distributionally robust optimization). This work is crucial for ensuring fairness and preventing discrimination in various applications, from loan applications to hiring processes, and is driving the development of new fairness metrics and model evaluation techniques.
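As a concrete illustration of the fairness metrics this line of work develops, the sketch below computes the demographic parity difference: the gap in positive-prediction rates between groups defined by a protected attribute. This is a minimal sketch using only NumPy; the function name and the toy data are illustrative, not drawn from any specific paper above.

```python
import numpy as np

def demographic_parity_difference(y_pred, protected):
    """Gap between the highest and lowest positive-prediction
    rates across the groups defined by the protected attribute."""
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    rates = [y_pred[protected == g].mean() for g in np.unique(protected)]
    return max(rates) - min(rates)

# Toy example: binary predictions for two groups (0 and 1).
# Group 0 receives positive predictions at rate 0.75, group 1 at 0.25.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # → 0.5
```

A value of 0 would indicate that all groups receive positive outcomes at the same rate; mitigation methods such as adversarial training aim to drive this gap toward zero without sacrificing accuracy.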