Transparency Index
Transparency index research aims to quantify and improve the understandability and accountability of AI systems, particularly large language models (LLMs) and autonomous systems. Current efforts focus on developing standardized metrics for evaluating transparency across AI applications, including explainable AI (XAI) techniques, and on analyzing how different levels of transparency affect user trust and system performance. This work is crucial for building trust in AI, mitigating bias, and ensuring responsible AI development and deployment across diverse sectors, from finance to healthcare.
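As a rough illustration of what a standardized transparency metric can look like, the sketch below aggregates graded per-indicator scores into a single 0-100 index. It is a minimal sketch under assumed conventions: the indicator names, weights, and scores are hypothetical examples and are not drawn from any of the papers listed here.

```python
# Hypothetical transparency-index sketch. Indicator names, weights, and
# scores are illustrative assumptions, not any published methodology.
from dataclasses import dataclass


@dataclass
class Indicator:
    name: str
    weight: float   # relative importance of this indicator
    score: float    # graded compliance in [0, 1]


def transparency_index(indicators: list[Indicator]) -> float:
    """Weighted average of indicator scores, scaled to [0, 100]."""
    total_weight = sum(ind.weight for ind in indicators)
    if total_weight <= 0:
        raise ValueError("indicator weights must sum to a positive value")
    weighted_sum = sum(ind.weight * ind.score for ind in indicators)
    return 100.0 * weighted_sum / total_weight


if __name__ == "__main__":
    # Example indicators loosely inspired by common documentation practices.
    report = [
        Indicator("training-data disclosure", weight=2.0, score=0.5),
        Indicator("model-card availability", weight=1.0, score=1.0),
        Indicator("evaluation-results reporting", weight=1.5, score=0.8),
        Indicator("downstream-use policy", weight=1.0, score=0.0),
    ]
    print(f"Transparency index: {transparency_index(report):.1f}/100")
```

A weighted average is only one possible aggregation; published indices differ in how they select indicators, grade partial compliance, and weight categories, which is precisely the standardization question this research area addresses.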
Papers
Addressing the Regulatory Gap: Moving Towards an EU AI Audit Ecosystem Beyond the AIA by Including Civil Society
David Hartmann, José Renato Laranjeira de Pereira, Chiara Streitbörger, Bettina Berendt
Foundation Model Transparency Reports
Rishi Bommasani, Kevin Klyman, Shayne Longpre, Betty Xiong, Sayash Kapoor, Nestor Maslej, Arvind Narayanan, Percy Liang
Science Checker Reloaded: A Bidirectional Paradigm for Transparency and Logical Reasoning
Loïc Rakotoson, Sylvain Massip, Fréjus A. A. Laleye
Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness
David Fernández Llorca, Ronan Hamon, Henrik Junklewitz, Kathrin Grosse, Lars Kunze, Patrick Seiniger, Robert Swaim, Nick Reed, Alexandre Alahi, Emilia Gómez, Ignacio Sánchez, Akos Kriston
The State of Documentation Practices of Third-party Machine Learning Models and Datasets
Ernesto Lang Oreamuno, Rohan Faiyaz Khan, Abdul Ali Bangash, Catherine Stinson, Bram Adams
An Empirical Study on Compliance with Ranking Transparency in the Software Documentation of EU Online Platforms
Francesco Sovrano, Michaël Lognoul, Alberto Bacchelli