Fairness-Aware
Fairness-aware research focuses on mitigating bias and discrimination in machine learning models and human-robot interactions, aiming to ensure equitable outcomes across different demographic groups. Current research explores various fairness metrics and algorithms, including adaptations of Naive Bayes, Bayesian neural networks, and contrastive learning methods, often within the context of specific applications like recommendation systems, healthcare, and natural language processing. This work is crucial for building trustworthy and responsible AI systems, impacting both the development of fairer algorithms and the ethical deployment of AI in sensitive domains.
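To make the notion of a fairness metric concrete, here is a minimal sketch of one widely used group-fairness measure, the demographic parity difference (the gap in positive-prediction rates between demographic groups). The function name and toy data are illustrative, not taken from any specific paper discussed here.

```python
def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: group label per prediction.
    Assumes exactly two distinct group labels (illustrative sketch only).
    """
    rates = {}
    for g in set(group):
        preds = [p for p, gi in zip(y_pred, group) if gi == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()
    return abs(a - b)

# Toy example with two hypothetical demographic groups "a" and "b":
# group "a" receives positive predictions at rate 3/4, group "b" at 1/4,
# so the demographic parity difference is 0.5.
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, group))  # → 0.5
```

A value of 0 would indicate parity in positive-prediction rates; fairness-aware algorithms typically constrain or penalize such gaps during training.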