Protected Attribute

Protected attributes, such as race, gender, or age, are sensitive characteristics that should not unfairly influence automated decision-making. Current research focuses on mitigating the bias these attributes can introduce into machine learning models, exploring methods such as data obfuscation, adversarial training, and fairness-aware algorithms (e.g., adaptations of Naive Bayes and distributionally robust optimization). This work is crucial for ensuring fairness and preventing discrimination in domains ranging from lending to hiring, and it is driving the development of new fairness metrics and model-evaluation techniques.
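
As a concrete illustration of the fairness metrics this line of work builds on, the sketch below computes two widely used group-fairness measures with respect to a binary protected attribute: the demographic parity difference (gap in positive-prediction rates between groups) and the equalized odds difference (worst-case gap in true/false-positive rates). This is a minimal, self-contained sketch using synthetic data; the function names and the toy loan-approval predictor are hypothetical and not drawn from any specific paper listed here.

```python
import numpy as np

def demographic_parity_difference(y_pred, protected):
    """Absolute gap in positive-prediction rates between the two
    protected-attribute groups (0 means parity)."""
    y_pred, protected = np.asarray(y_pred), np.asarray(protected)
    rate_0 = y_pred[protected == 0].mean()
    rate_1 = y_pred[protected == 1].mean()
    return abs(rate_0 - rate_1)

def equalized_odds_difference(y_true, y_pred, protected):
    """Worst-case gap between groups in true-positive rate and
    false-positive rate (0 means equalized odds)."""
    y_true, y_pred, protected = map(np.asarray, (y_true, y_pred, protected))
    gaps = []
    for label in (0, 1):  # label=0 gives the FPR gap, label=1 the TPR gap
        mask = y_true == label
        rate_0 = y_pred[mask & (protected == 0)].mean()
        rate_1 = y_pred[mask & (protected == 1)].mean()
        gaps.append(abs(rate_0 - rate_1))
    return max(gaps)

# Toy example: a hypothetical loan-approval classifier that is biased
# toward group 1, independent of the ground-truth outcome.
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=1000)  # binarised group membership
y_true = rng.integers(0, 2, size=1000)     # ground-truth repayment
y_pred = (rng.random(1000) < 0.4 + 0.2 * protected).astype(int)

print(f"Demographic parity difference: {demographic_parity_difference(y_pred, protected):.3f}")
print(f"Equalized odds difference:     {equalized_odds_difference(y_true, y_pred, protected):.3f}")
```

Both metrics should come out near 0.2 for this deliberately biased predictor; debiasing methods such as adversarial training aim to drive these gaps toward zero while preserving accuracy.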

Papers