Trust Inference

Trust inference research aims to computationally model and predict trust, encompassing human-human, human-machine, and machine-machine interactions. Current research focuses on developing robust algorithms, including Bayesian methods, graph neural networks, and adaptive trust models, to assess trustworthiness in contexts such as multi-agent systems, social networks, and machine learning models. This work is crucial for enhancing the security and reliability of autonomous systems, improving decision-making in complex environments, and mitigating risks associated with increasingly prevalent AI technologies. The ultimate goal is to build more trustworthy and dependable systems across a wide range of applications.
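
To make the Bayesian methods mentioned above more concrete, here is a minimal sketch of one common formulation, a Beta-reputation trust update, where trust in an agent is the posterior probability that it behaves cooperatively. The class and parameter names are illustrative assumptions, not taken from any specific paper.

```python
from dataclasses import dataclass


@dataclass
class BetaTrust:
    """Trust in an agent modeled as a Beta(alpha, beta) distribution
    over the probability that the agent behaves cooperatively."""
    alpha: float = 1.0  # pseudo-count of positive (trustworthy) interactions
    beta: float = 1.0   # pseudo-count of negative (untrustworthy) interactions

    def update(self, positive: bool) -> None:
        # Bayesian update: each observed interaction increments one count.
        if positive:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def expected_trust(self) -> float:
        # Posterior mean of the Beta distribution.
        return self.alpha / (self.alpha + self.beta)


if __name__ == "__main__":
    trust = BetaTrust()
    # Hypothetical interaction history: True = cooperative, False = defecting.
    for outcome in [True, True, False, True, True]:
        trust.update(outcome)
    print(f"Expected trustworthiness: {trust.expected_trust:.2f}")  # ~0.71
```

Graph neural network and adaptive approaches extend this idea by propagating such trust estimates over interaction graphs or by reweighting evidence as conditions change.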

Papers