Trustworthy Artificial Intelligence
Trustworthy Artificial Intelligence (TAI) focuses on developing AI systems that are reliable, safe, and aligned with human values, addressing concerns about bias, transparency, and security. Current research emphasizes differential privacy for data protection, explainable AI methods (including logic-based approaches and counterfactual explanations) to enhance understanding, and robust model architectures that resist adversarial attacks and distribution shifts. TAI is crucial for responsible AI deployment across sectors: it advances scientific methodology while fostering user trust and supporting ethical, equitable outcomes in practice.
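Of the techniques above, differential privacy has the most compact formal core: a query result is released with noise calibrated to the query's sensitivity and a privacy budget epsilon. The sketch below shows the standard Laplace mechanism; the function name and parameters are illustrative and not drawn from any paper listed here.

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise of scale sensitivity/epsilon.

    For a query whose L1 sensitivity (max change from adding or removing
    one record) is `sensitivity`, this satisfies epsilon-differential privacy.
    """
    scale = sensitivity / epsilon
    # Inverse-transform sampling from Laplace(0, scale).
    u = random.random() - 0.5
    return true_value - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# Example: privately release a count. Counting queries have L1 sensitivity 1,
# since one record changes the count by at most 1. Smaller epsilon means
# stronger privacy but noisier answers.
true_count = 42
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

The key design point is that the noise scale depends only on the query's sensitivity and epsilon, never on the data itself, which is what makes the privacy guarantee hold for every possible dataset.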
Papers
Reliability, Resilience and Human Factors Engineering for Trustworthy AI Systems
Saurabh Mishra, Anand Rao, Ramayya Krishnan, Bilal Ayyub, Amin Aria, Enrico Zio
Building Trustworthy AI: Transparent AI Systems via Large Language Models, Ontologies, and Logical Reasoning (TranspNet)
Fadi Al Machot, Martin Thomas Horsch, Habib Ullah