Fair Decision-Making
Fair decision-making in artificial intelligence focuses on developing algorithms and systems that avoid bias and discrimination against protected groups, aiming for equitable outcomes across demographics. Current research emphasizes quantifying and mitigating uncertainty in fairness metrics, studying long-term fairness in sequential decision-making, and developing methods such as counterfactual fairness and multi-marginal Wasserstein barycenters that come with fairness guarantees. The field is central to the ethical and responsible use of AI in high-stakes applications such as loan approvals, hiring, and criminal justice, shaping both the design of fairer algorithms and the broader societal impact of AI.
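As a concrete illustration of the group fairness metrics this line of work targets, the sketch below computes two common ones, the demographic parity difference and the equal opportunity difference, on synthetic loan-approval predictions, and bootstraps a confidence interval to reflect the uncertainty-quantification theme mentioned above. The function names and data are illustrative assumptions, not drawn from any specific paper.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true-positive rates between the two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Synthetic loan-approval data: a binary protected attribute, true repayment
# outcomes, and a model's approval decisions.
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)
y_true = rng.integers(0, 2, size=n)
y_pred = rng.integers(0, 2, size=n)

print("Demographic parity gap:", demographic_parity_difference(y_pred, group))
print("Equal opportunity gap:", equal_opportunity_difference(y_true, y_pred, group))

# Bootstrap a 95% confidence interval for the demographic parity gap,
# a simple way to quantify uncertainty in a fairness metric.
boot = [
    demographic_parity_difference(y_pred[idx], group[idx])
    for idx in (rng.integers(0, n, size=n) for _ in range(2000))
]
print("95% CI for parity gap:", np.percentile(boot, [2.5, 97.5]))
```

A metric near zero indicates parity on that criterion; in practice these point estimates are compared against their confidence intervals before concluding that a model treats groups equitably.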