Trustworthy Computing

Trustworthy computing focuses on developing and deploying computational systems, particularly those built on AI models such as Large Language Models (LLMs), that are reliable, safe, and ethically sound. Current research emphasizes differential privacy in federated learning to protect sensitive training data, methods for evaluating model trustworthiness (e.g., TrustScore) and mitigating bias, and robust software engineering practices for AI-driven systems ("FMware"). This work is crucial for building public confidence in AI and for ensuring responsible innovation across diverse applications, from autonomous vehicles to healthcare.
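
To make the differential-privacy idea concrete, below is a minimal sketch of differentially private aggregation of client updates in federated learning, using per-client clipping plus the Gaussian mechanism. The function names, the clipping norm, and the noise multiplier are illustrative assumptions, not taken from any specific paper surveyed here.

```python
# Minimal sketch: DP aggregation of federated-learning client updates.
# clip_norm and noise_multiplier are assumed example hyperparameters.
import numpy as np


def clip_update(update: np.ndarray, clip_norm: float) -> np.ndarray:
    """Scale a client's model update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))


def dp_aggregate(updates: list[np.ndarray],
                 clip_norm: float = 1.0,
                 noise_multiplier: float = 1.1,
                 rng: np.random.Generator | None = None) -> np.ndarray:
    """Average clipped client updates and add calibrated Gaussian noise.

    Clipping bounds each client's contribution, so the L2 sensitivity of
    the sum is clip_norm; the noise standard deviation is scaled to that
    sensitivity. Dividing by the number of clients yields the noisy mean
    the server would apply to the global model.
    """
    rng = rng or np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in updates]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=clipped[0].shape)
    return noisy_sum / len(updates)


# Toy usage: three clients send synthetic 4-dimensional updates.
rng = np.random.default_rng(42)
client_updates = [rng.normal(size=4) for _ in range(3)]
print(dp_aggregate(client_updates))
```

In a real deployment the noise multiplier would be chosen from a privacy accountant to meet a target (epsilon, delta) budget; the fixed value above is only for illustration.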

Papers