Trustworthy Framework
Trustworthy frameworks are being developed to address the limitations of AI systems, particularly their reliability, explainability, and fairness across diverse applications. Current research focuses on integrating techniques such as self-supervised learning, LLM-friendly knowledge representations (e.g., Condition Graphs), and dual-system architectures that combine human expertise with automated reasoning to improve model transparency and generalizability. These advances aim to make AI more trustworthy in critical domains such as medical image analysis, fake news detection, and data-driven decision-making, ultimately fostering greater confidence in, and more responsible deployment of, AI technologies.
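As a rough illustration of what an LLM-friendly knowledge representation might look like, the sketch below encodes conditions and their relations as a small graph and serializes it into plain text that a language model can condition on and cite in its explanation. The names and structure here (ConditionNode, ConditionGraph, to_prompt) are illustrative assumptions, not the published Condition Graph design.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified condition-graph representation; the field names and
# relation format are assumptions for illustration only.

@dataclass
class ConditionNode:
    name: str                                            # e.g., a finding or claim attribute
    evidence: list[str] = field(default_factory=list)    # supporting observations

@dataclass
class ConditionGraph:
    nodes: dict[str, ConditionNode] = field(default_factory=dict)
    edges: list[tuple[str, str, str]] = field(default_factory=list)  # (src, relation, dst)

    def add_node(self, node: ConditionNode) -> None:
        self.nodes[node.name] = node

    def add_edge(self, src: str, relation: str, dst: str) -> None:
        self.edges.append((src, relation, dst))

    def to_prompt(self) -> str:
        """Serialize the graph into plain text for inclusion in an LLM prompt."""
        lines = ["Known conditions and their relations:"]
        for src, rel, dst in self.edges:
            lines.append(f"- {src} --{rel}--> {dst}")
        for node in self.nodes.values():
            if node.evidence:
                lines.append(f"- Evidence for {node.name}: {'; '.join(node.evidence)}")
        return "\n".join(lines)


# Example: a tiny graph a fake-news verifier could reference when explaining a decision.
graph = ConditionGraph()
graph.add_node(ConditionNode("claim_unverified_source", ["no citation found"]))
graph.add_node(ConditionNode("possible_fake_news"))
graph.add_edge("claim_unverified_source", "supports", "possible_fake_news")
print(graph.to_prompt())
```

Making the graph explicit in the prompt, rather than relying on the model's latent knowledge, is one way such representations can support more transparent and auditable reasoning.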