Trustworthy Artificial Intelligence
Trustworthy Artificial Intelligence (TAI) focuses on developing AI systems that are reliable, safe, and aligned with human values, addressing concerns about bias, transparency, and security. Current research emphasizes techniques such as differential privacy for data protection, explainable AI methods (including logic-based approaches and counterfactual explanations) to make model behavior understandable, and robust model architectures that mitigate vulnerabilities to adversarial attacks and distribution shifts. TAI is essential for responsible AI deployment across sectors: it advances scientific practice through improved methodologies and supports practical applications by fostering user trust and promoting ethical, equitable outcomes.
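To make one of the techniques named above concrete, the following minimal Python sketch implements the classic Laplace mechanism for epsilon-differential privacy: a numeric statistic is released with noise calibrated to its sensitivity, so that any single individual's record has only a bounded effect on the output. The dataset, age bounds, and epsilon value are hypothetical choices for illustration and are not drawn from any of the papers listed below.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy statistic satisfying epsilon-differential privacy.

    Adds Laplace noise with scale sensitivity/epsilon, the standard
    mechanism for protecting individual records in aggregate queries.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical example: privately release the mean age of 1000 people.
ages = np.random.randint(18, 90, size=1000)
true_mean = ages.mean()

# Sensitivity of the mean with ages clipped to [18, 90]:
# one person can shift it by at most (90 - 18) / n.
sensitivity = (90 - 18) / len(ages)

private_mean = laplace_mechanism(true_mean, sensitivity, epsilon=0.5)
print(f"true mean: {true_mean:.2f}, private release: {private_mean:.2f}")
```

Smaller epsilon values give stronger privacy at the cost of noisier releases; in practice the noise scale is set by an explicit privacy budget rather than the fixed value used here.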
Papers
Ethical AI Governance: Methods for Evaluating Trustworthy AI
Louise McCormack, Malika Bendechache
Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems
Farzaneh Dehghani, Mahsa Dibaji, Fahim Anzum, Lily Dey, Alican Basdemir, Sayeh Bayat, Jean-Christophe Boucher, Steve Drew, Sarah Elaine Eaton, Richard Frayne, Gouri Ginde, Ashley Harris, Yani Ioannou, Catherine Lebel, John Lysack, Leslie Salgado Arzuaga, Emma Stanley, Roberto Souza, Ronnie Souza, Lana Wells, Tyler Williamson, Matthias Wilms, Zaman Wahid, Mark Ungrin, Marina Gavrilova, Mariana Bento
First Analysis of the EU Artificial Intelligence Act: Towards a Global Standard for Trustworthy AI?
Marion Ho-Dac (UA, CDEP)
Deceptive AI systems that give explanations are more convincing than honest AI systems and can amplify belief in misinformation
Valdemar Danry, Pat Pataranutaporn, Matthew Groh, Ziv Epstein, Pattie Maes