Trustworthy Artificial Intelligence
Trustworthy Artificial Intelligence (TAI) focuses on developing AI systems that are reliable, safe, and aligned with human values, addressing concerns about bias, transparency, and security. Current research emphasizes techniques such as differential privacy for data protection, explainable AI methods (including logic-based approaches and counterfactual explanations) to enhance understanding, and robust model architectures that mitigate vulnerabilities to adversarial attacks and distribution shifts. TAI is crucial for responsible AI deployment across sectors: it advances the scientific community through improved methodologies and benefits practical applications by fostering user trust and supporting ethical, equitable outcomes.
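Of the techniques mentioned above, differential privacy is the most mechanically concrete. A minimal sketch of its classic building block, the Laplace mechanism, is shown below; the function name and parameters are illustrative, not drawn from any of the listed papers.

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of true_value.

    Adds Laplace noise with scale sensitivity/epsilon, which satisfies
    epsilon-differential privacy for a query with the given L1 sensitivity.
    """
    scale = sensitivity / epsilon
    # Sample from Laplace(0, scale) via inverse transform sampling.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: privatize a count query (L1 sensitivity 1) with epsilon = 0.5.
private_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; the noisy answer is unbiased, so repeated queries average back toward the true value, which is why deployed systems also track a cumulative privacy budget.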
Papers
First Analysis of the EU Artificial Intelligence Act: Towards a Global Standard for Trustworthy AI?
Marion Ho-Dac (UA, CDEP)
Deceptive AI systems that give explanations are more convincing than honest AI systems and can amplify belief in misinformation
Valdemar Danry, Pat Pataranutaporn, Matthew Groh, Ziv Epstein, Pattie Maes