Fairness Aware

Fairness-aware research focuses on mitigating bias and discrimination in machine learning models and human-robot interactions, aiming to ensure equitable outcomes across different demographic groups. Current research explores various fairness metrics (such as demographic parity and equalized odds) and algorithms, including adaptations of Naive Bayes, Bayesian neural networks, and contrastive learning methods, often in the context of specific applications such as recommendation systems, healthcare, and natural language processing. This work is crucial for building trustworthy and responsible AI systems, shaping both the development of fairer algorithms and the ethical deployment of AI in sensitive domains.
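To make the idea of a group fairness metric concrete, the sketch below computes the demographic parity difference, i.e., the gap in positive-prediction rates between two groups. It is a minimal illustrative example, not the method of any particular paper listed here; the function name and toy data are assumptions made for illustration.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred : array-like of 0/1 model predictions
    group  : array-like of 0/1 membership in a binary protected attribute
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate for group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_0 - rate_1)

# Toy example: predictions for eight individuals, four per group.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # |0.75 - 0.25| = 0.5
```

A value of 0 indicates both groups receive positive predictions at the same rate; fairness-aware algorithms typically constrain or penalize metrics of this kind during training or post-processing.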

Papers