Fair Machine Learning
Fair machine learning aims to develop algorithms that make unbiased predictions, avoiding discrimination based on sensitive attributes such as race or gender. Current research focuses on mitigating bias through various techniques, including modifying model architectures (e.g., using mixed-effects models or incorporating fairness penalties into neural networks), developing fairness-aware data augmentation methods, and employing active learning strategies to improve data representation. This field is crucial for ensuring equitable outcomes in applications ranging from healthcare and loan approval to criminal justice, promoting the responsible and ethical use of AI.
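To make one of the techniques above concrete, here is a minimal NumPy sketch of a fairness penalty added to a standard loss: the model's binary cross-entropy is augmented with a demographic-parity gap term that grows when the two groups receive different average scores. The function names, the specific penalty, and the weighting parameter `lam` are illustrative assumptions, not the method of any paper listed below.

```python
import numpy as np

def demographic_parity_gap(scores, group):
    # Absolute difference in mean predicted score between group 0 and group 1.
    scores = np.asarray(scores, dtype=float)
    group = np.asarray(group)
    return abs(scores[group == 0].mean() - scores[group == 1].mean())

def fair_loss(scores, labels, group, lam=1.0):
    # Standard binary cross-entropy plus a fairness penalty weighted by lam.
    # lam = 0 recovers the ordinary loss; larger lam trades accuracy for parity.
    scores = np.clip(np.asarray(scores, dtype=float), 1e-7, 1 - 1e-7)
    labels = np.asarray(labels, dtype=float)
    bce = -np.mean(labels * np.log(scores) + (1 - labels) * np.log(1 - scores))
    return bce + lam * demographic_parity_gap(scores, group)

# Hypothetical loan-approval scores: group 0 scores much higher than group 1,
# so the penalty term is large.
scores = [0.9, 0.8, 0.2, 0.1]
labels = [1, 1, 0, 0]
group = [0, 0, 1, 1]
print(demographic_parity_gap(scores, group))  # 0.7
```

In practice such a penalty would be computed on differentiable model outputs inside a training loop, so that gradient descent jointly minimizes prediction error and the disparity between groups.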
Papers
Towards Fair Machine Learning Software: Understanding and Addressing Model Bias Through Counterfactual Thinking
Zichong Wang, Yang Zhou, Meikang Qiu, Israat Haque, Laura Brown, Yi He, Jianwu Wang, David Lo, Wenbin Zhang
Preventing Discriminatory Decision-making in Evolving Data Streams
Zichong Wang, Nripsuta Saxena, Tongjia Yu, Sneha Karki, Tyler Zetty, Israat Haque, Shan Zhou, Dukka Kc, Ian Stockwell, Albert Bifet, Wenbin Zhang