Trustworthy Computing
Trustworthy computing focuses on developing and deploying computational systems, particularly those built on AI models such as Large Language Models (LLMs), that are reliable, safe, and ethically sound. Current research emphasizes differential privacy in federated learning to protect sensitive training data, methods for evaluating model trustworthiness (e.g., TrustScore) and mitigating bias, and robust software engineering practices for foundation-model-based systems ("FMware"). This work is crucial for building public confidence in AI and for ensuring responsible innovation across diverse applications, from autonomous vehicles to healthcare.
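To make the privacy technique mentioned above concrete, here is a minimal sketch of differentially private aggregation in federated learning: each client's model update is clipped to bound its sensitivity, and calibrated Gaussian noise is added to the aggregate before averaging. The function names (`dp_federated_average`, `clip_update`) and parameters (`clip_norm`, `noise_multiplier`) are illustrative assumptions, not the API of any specific paper or library.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip a client's update to bound its L2 sensitivity."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Aggregate client updates under the Gaussian mechanism.

    Clipping bounds each client's contribution; Gaussian noise with
    std = noise_multiplier * clip_norm is added to the sum, so no single
    client's data can be reliably inferred from the averaged model update.
    """
    rng = rng or np.random.default_rng()
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Example: five clients each send a gradient-like update for a 4-parameter model.
rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(5)]
print(dp_federated_average(updates, rng=rng))
```

In practice the noise scale is chosen to meet a target (epsilon, delta) privacy budget tracked across training rounds; this sketch only illustrates the per-round clip-and-noise mechanics.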