Transparency Index
Transparency index research aims to quantify and improve the understandability and accountability of AI systems, particularly large language models (LLMs) and autonomous systems. Current efforts focus on developing standardized metrics for evaluating transparency across AI applications, including explainable AI (XAI) techniques, and on analyzing how different levels of transparency affect user trust and system performance. This work is crucial for building trust in AI, mitigating bias, and ensuring responsible AI development and deployment across diverse sectors, from finance to healthcare.
Papers
Making Transparency Advocates: An Educational Approach Towards Better Algorithmic Transparency in Practice
Andrew Bell, Julia Stoyanovich
Probabilistic Strategy Logic with Degrees of Observability
Chunyan Mu, Nima Motamed, Natasha Alechina, Brian Logan
TDCNet: Transparent Objects Depth Completion with CNN-Transformer Dual-Branch Parallel Network
Xianghui Fan, Chao Ye, Anping Deng, Xiaotian Wu, Mengyang Pan, Hang Yang
Linear Discriminant Analysis in Credit Scoring: A Transparent Hybrid Model Approach
Md Shihab Reza, Monirul Islam Mahmud, Ifti Azad Abeer, Nova Ahmed
A Unified Framework for Evaluating the Effectiveness and Enhancing the Transparency of Explainable AI Methods in Real-World Applications
Md. Ariful Islam, M. F. Mridha, Md Abrar Jahin, Nilanjan Dey
Beyond the Numbers: Transparency in Relation Extraction Benchmark Creation and Leaderboards
Varvara Arzt, Allan Hanbury
Balancing Transparency and Accuracy: A Comparative Analysis of Rule-Based and Deep Learning Models in Political Bias Classification
Manuel Nunez Martinez, Sonja Schmer-Galunder, Zoey Liu, Sangpil Youm, Chathuri Jayaweera, Bonnie J. Dorr