Fairness-Aware
Fairness-aware research focuses on mitigating bias and discrimination in machine learning models and human-robot interactions, aiming to ensure equitable outcomes across different demographic groups. Current research explores various fairness metrics and algorithms, including adaptations of Naive Bayes, Bayesian neural networks, and contrastive learning methods, often within the context of specific applications like recommendation systems, healthcare, and natural language processing. This work is crucial for building trustworthy and responsible AI systems, impacting both the development of fairer algorithms and the ethical deployment of AI in sensitive domains.
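As a concrete illustration of the fairness metrics mentioned above, one of the simplest is demographic parity: the positive-prediction rate should be similar across demographic groups. The sketch below (function name and toy data are our own, not from any specific paper) computes the demographic parity difference for a binary classifier:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1)
    group:  binary group membership (0/1)
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical toy data: group 0 receives positives at 0.75,
# group 1 at 0.25, so the parity gap is 0.5.
preds = [1, 1, 0, 1, 0, 0, 0, 1]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value of 0 indicates perfect demographic parity; fairness-aware training methods typically add a penalty or constraint that pushes this gap toward zero.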