Responsible AI
Responsible AI (RAI) focuses on developing and deploying artificial intelligence systems that are ethical, fair, transparent, and accountable. Current research emphasizes mitigating bias in models, particularly large language models (LLMs); improving explainability and interpretability; and establishing robust frameworks for risk assessment and governance across diverse applications, including healthcare, education, and autonomous systems. The field is crucial to the safe and beneficial integration of AI into society, shaping both the development of trustworthy AI technologies and the ethical guidelines that govern their use.
Papers
Responsible AI for Test Equity and Quality: The Duolingo English Test as a Case Study
Jill Burstein, Geoffrey T. LaFlair, Kevin Yancey, Alina A. von Davier, Ravit Dotan
Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems
Farzaneh Dehghani, Mahsa Dibaji, Fahim Anzum, Lily Dey, Alican Basdemir, Sayeh Bayat, Jean-Christophe Boucher, Steve Drew, Sarah Elaine Eaton, Richard Frayne, Gouri Ginde, Ashley Harris, Yani Ioannou, Catherine Lebel, John Lysack, Leslie Salgado Arzuaga, Emma Stanley, Roberto Souza, Ronnie Souza, Lana Wells, Tyler Williamson, Matthias Wilms, Zaman Wahid, Mark Ungrin, Marina Gavrilova, Mariana Bento