Fairness Research
Fairness research in artificial intelligence aims to mitigate algorithmic bias, ensuring AI systems treat all individuals equitably regardless of sensitive attributes like race or gender. Current research focuses on developing and evaluating methods for detecting and reducing bias in various model architectures, including graph neural networks and large language models, often employing techniques like post-processing and knowledge distillation. This work is crucial for building trustworthy and responsible AI systems, impacting both the scientific understanding of bias and the ethical deployment of AI in high-stakes applications like healthcare, finance, and criminal justice.
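To make the bias-mitigation ideas above concrete, here is a minimal sketch (not taken from any of the listed papers) of one common post-processing approach: measuring a demographic-parity gap between two groups and then choosing group-specific decision thresholds to shrink it. All function names, variable names, and the synthetic data are illustrative assumptions.

```python
import numpy as np


def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)


def postprocess_thresholds(scores, group, target_rate):
    """Pick a per-group score cutoff so each group's positive rate is ~target_rate."""
    y_pred = np.zeros_like(scores, dtype=int)
    for g in (0, 1):
        mask = group == g
        # The (1 - target_rate) quantile of in-group scores is the cutoff that
        # accepts roughly target_rate of that group's members.
        cutoff = np.quantile(scores[mask], 1.0 - target_rate)
        y_pred[mask] = (scores[mask] >= cutoff).astype(int)
    return y_pred


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000
    group = rng.integers(0, 2, size=n)        # sensitive attribute (0 or 1), assumed binary
    scores = rng.beta(2 + group, 2, size=n)   # synthetic model scores, biased by construction

    raw_pred = (scores >= 0.5).astype(int)    # single global threshold
    fair_pred = postprocess_thresholds(scores, group, target_rate=raw_pred.mean())

    print("parity gap before post-processing:", demographic_parity_gap(raw_pred, group))
    print("parity gap after post-processing: ", demographic_parity_gap(fair_pred, group))
```

This is only one fairness criterion; real deployments typically weigh demographic parity against alternatives such as equalized odds and check the accuracy cost of any threshold adjustment.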