Responsible AI
Responsible AI (RAI) focuses on developing and deploying artificial intelligence systems that are ethical, fair, transparent, and accountable. Current research emphasizes mitigating bias in models, particularly large language models (LLMs); improving model explainability and interpretability; and establishing robust frameworks for risk assessment and governance across diverse applications, including healthcare, education, and autonomous systems. The field is crucial for ensuring AI is integrated into society safely and beneficially, shaping both the development of trustworthy AI technologies and the ethical guidelines that govern their use.
Papers
Control Risk for Potential Misuse of Artificial Intelligence in Science
Jiyan He, Weitao Feng, Yaosen Min, Jingwei Yi, Kunsheng Tang, Shuai Li, Jie Zhang, Kejiang Chen, Wenbo Zhou, Xing Xie, Weiming Zhang, Nenghai Yu, Shuxin Zheng
Open Datasheets: Machine-readable Documentation for Open Datasets and Responsible AI Assessments
Anthony Cintron Roman, Jennifer Wortman Vaughan, Valerie See, Steph Ballard, Jehu Torres, Caleb Robinson, Juan M. Lavista Ferres