Transparency Index
Transparency index research aims to quantify and improve the understandability and accountability of AI systems, particularly large language models (LLMs) and autonomous systems. Current efforts focus on developing standardized metrics for evaluating transparency across AI applications, on explainable AI (XAI) techniques, and on analyzing how different levels of transparency affect user trust and system performance. This work is crucial for building trust in AI, mitigating bias, and ensuring responsible AI development and deployment across sectors ranging from finance to healthcare.
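As an illustration of what "quantifying transparency" can look like in practice, the sketch below computes a hypothetical transparency index as a weighted average of scored dimensions. The dimension names, weights, and scores are illustrative assumptions for this page, not a standardized metric drawn from the papers listed here.

```python
from dataclasses import dataclass


@dataclass
class Dimension:
    """One scored facet of a hypothetical transparency assessment."""
    name: str
    score: float   # assessed value in [0, 1]
    weight: float  # relative importance (positive number)


def transparency_index(dimensions: list[Dimension]) -> float:
    """Weighted average of dimension scores, normalized to [0, 1]."""
    total_weight = sum(d.weight for d in dimensions)
    return sum(d.score * d.weight for d in dimensions) / total_weight


# Illustrative scores for a fictional model release.
example = [
    Dimension("training-data documentation", score=0.6, weight=2.0),
    Dimension("model/architecture disclosure", score=0.8, weight=1.5),
    Dimension("explanation quality (XAI)", score=0.5, weight=1.5),
    Dimension("evaluation and reporting openness", score=0.7, weight=1.0),
]

print(f"Transparency index: {transparency_index(example):.2f}")
```

A real index would additionally need agreed-upon dimensions, scoring rubrics, and validation against outcomes such as user trust, which is what the standardization efforts described above aim to provide.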
Papers
FASTER-CE: Fast, Sparse, Transparent, and Robust Counterfactual Explanations
Shubham Sharma, Alan H. Gee, Jette Henderson, Joydeep Ghosh
GraspNeRF: Multiview-based 6-DoF Grasp Detection for Transparent and Specular Objects Using Generalizable NeRF
Qiyu Dai, Yan Zhu, Yiran Geng, Ciyu Ruan, Jiazhao Zhang, He Wang
EleutherAI: Going Beyond "Open Science" to "Science in the Open"
Jason Phang, Herbie Bradley, Leo Gao, Louis Castricato, Stella Biderman