Trust Inference
Trust inference research aims to computationally model and predict trust across human-human, human-machine, and machine-machine interactions. Current work focuses on developing robust algorithms, including Bayesian methods, graph neural networks, and adaptive trust models, to assess trustworthiness in diverse contexts such as multi-agent systems, social networks, and machine learning models. This research is crucial for enhancing the security and reliability of autonomous systems, improving decision-making in complex environments, and mitigating the risks of increasingly prevalent AI technologies. The ultimate goal is to build more trustworthy and dependable systems across a wide range of applications.
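
To make the Bayesian family of methods concrete, the sketch below implements a Beta reputation update in Python, in the spirit of Jøsang and Ismail's Beta reputation system. The class name, parameters, and example numbers are illustrative assumptions, not the method of any specific paper.

from dataclasses import dataclass

@dataclass
class BetaTrust:
    # Bayesian trust estimate with a Beta(alpha, beta) posterior.
    # alpha counts positive outcomes, beta negative ones; Beta(1, 1)
    # is the uniform prior, i.e. no evidence yet.
    alpha: float = 1.0
    beta: float = 1.0

    def update(self, positive: bool) -> None:
        # Treat each observed interaction as one piece of Bayesian evidence.
        if positive:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def expected_trust(self) -> float:
        # Posterior mean: predicted probability the next interaction succeeds.
        return self.alpha / (self.alpha + self.beta)

    def uncertainty(self) -> float:
        # Subjective-logic uncertainty 2 / (r + s + 2), where r and s are
        # the positive and negative counts; it shrinks as evidence grows.
        return 2.0 / (self.alpha + self.beta)


# Example: an agent observes 8 successful and 2 failed interactions.
trust = BetaTrust()
for outcome in [True] * 8 + [False] * 2:
    trust.update(outcome)
print(f"trust = {trust.expected_trust():.2f}, "
      f"uncertainty = {trust.uncertainty():.2f}")  # trust = 0.75, uncertainty = 0.17

One appeal of this formulation is that trust and confidence are decoupled: two agents can share an expected trust of 0.75, yet the one with hundreds of observed interactions carries far less uncertainty than the one with ten.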