Belief State
Belief state research focuses on modeling and understanding how agents (humans or AI) form, update, and use beliefs about the world, often in complex, partially observable environments. Current research emphasizes developing algorithms and models, such as those based on Bayesian networks, deep learning, and belief-map-assisted training, to accurately represent and reason over belief states, particularly in multi-agent systems and human-AI collaboration. This work is significant for improving AI decision-making under uncertainty, enhancing human-AI teaming, and providing insights into human cognition and social dynamics, including the spread of misinformation.
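For illustration, here is a minimal sketch of the core idea behind a belief state: a probability distribution over hidden world states that an agent updates from noisy observations via Bayes' rule, as in a POMDP or hidden Markov model. The two-state "door" example, the transition and observation probabilities, and the function names are assumptions made for this sketch, not taken from the papers listed below.

```python
import numpy as np

# Hypothetical two-state world: the agent cannot observe the true state directly.
STATES = ["door_open", "door_closed"]

# P(next_state | current_state): rows = current state, columns = next state.
TRANSITION = np.array([
    [0.9, 0.1],   # door_open   -> mostly stays open
    [0.2, 0.8],   # door_closed -> mostly stays closed
])

# P(observation | state): columns index the observations ["see_open", "see_closed"].
OBSERVATION = np.array([
    [0.8, 0.2],   # if door_open, the sensor usually reports "see_open"
    [0.3, 0.7],   # if door_closed, the sensor usually reports "see_closed"
])

def update_belief(belief: np.ndarray, obs_index: int) -> np.ndarray:
    """One Bayesian filter step: predict with the transition model, then
    weight by the observation likelihood and renormalize."""
    predicted = TRANSITION.T @ belief                 # prediction step (prior for this step)
    unnormalized = OBSERVATION[:, obs_index] * predicted
    return unnormalized / unnormalized.sum()          # posterior belief over hidden states

if __name__ == "__main__":
    belief = np.array([0.5, 0.5])                     # start maximally uncertain
    for obs in [0, 0, 1]:                             # observe: see_open, see_open, see_closed
        belief = update_belief(belief, obs)
        print(dict(zip(STATES, belief.round(3))))
```

The same predict-then-correct structure underlies richer belief models (e.g., learned belief maps or neural belief trackers); only the representation of the distribution and the update operators change.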
Papers
eSPARQL: Representing and Reconciling Agnostic and Atheistic Beliefs in RDF-star Knowledge Graphs
Xinyi Pan, Daniel Hernández, Philipp Seifer, Ralf Lämmel, Steffen Staab
Deceptive AI systems that give explanations are more convincing than honest AI systems and can amplify belief in misinformation
Valdemar Danry, Pat Pataranutaporn, Matthew Groh, Ziv Epstein, Pattie Maes