Trustworthy Artificial Intelligence
Trustworthy Artificial Intelligence (TAI) focuses on developing AI systems that are reliable, safe, and aligned with human values, addressing concerns about bias, transparency, and security. Current research emphasizes differential privacy for data protection, explainable AI methods (including logic-based approaches and counterfactual explanations) that make model behavior understandable, and robust model architectures that mitigate vulnerabilities to adversarial attacks and distribution shifts. TAI is crucial for responsible AI deployment across sectors: it advances scientific methodology within the research community while, in practical applications, fostering greater user trust and supporting ethical, equitable outcomes.
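Of the techniques mentioned above, differential privacy is the most mechanically concrete: a query result is released with noise calibrated to the query's sensitivity and a privacy budget epsilon. The sketch below is a minimal, illustrative Laplace mechanism (the function names and toy dataset are assumptions for illustration, not taken from any of the papers listed here), using only the Python standard library.

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample from Laplace(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5  # uniform on (-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release a numeric query result with epsilon-differential privacy.

    Noise scale is sensitivity / epsilon: smaller epsilon (stronger
    privacy) means more noise added to the true answer.
    """
    return true_value + laplace_noise(sensitivity / epsilon)

# Toy example: privately release a counting query over a small dataset.
ages = [23, 35, 41, 29, 52]
true_count = sum(1 for a in ages if a >= 30)  # a count has L1 sensitivity 1
private_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
```

With a small epsilon such as 0.5, repeated releases of `private_count` vary noticeably around the true count of 3; as epsilon grows, the noise scale shrinks and the released value converges to the true answer. This trade-off between privacy budget and accuracy is the core design decision in any differentially private release.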
Papers
Reliability, Resilience and Human Factors Engineering for Trustworthy AI Systems
Saurabh Mishra, Anand Rao, Ramayya Krishnan, Bilal Ayyub, Amin Aria, Enrico Zio
Building Trustworthy AI: Transparent AI Systems via Large Language Models, Ontologies, and Logical Reasoning (TranspNet)
Fadi Al Machot, Martin Thomas Horsch, Habib Ullah
Ethical AI Governance: Methods for Evaluating Trustworthy AI
Louise McCormack, Malika Bendechache
Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems
Farzaneh Dehghani, Mahsa Dibaji, Fahim Anzum, Lily Dey, Alican Basdemir, Sayeh Bayat, Jean-Christophe Boucher, Steve Drew, Sarah Elaine Eaton, Richard Frayne, Gouri Ginde, Ashley Harris, Yani Ioannou, Catherine Lebel, John Lysack, Leslie Salgado Arzuaga, Emma Stanley, Roberto Souza, Ronnie Souza, Lana Wells, Tyler Williamson, Matthias Wilms, Zaman Wahid, Mark Ungrin, Marina Gavrilova, Mariana Bento