Belief State
Belief state research focuses on modeling and understanding how agents (humans or AI) form, update, and use beliefs about the world, often in complex, partially observable environments. Current work develops algorithms and models, such as Bayesian networks, deep learning methods, and belief-map-assisted training, to represent and reason about belief states accurately, particularly in multi-agent systems and human-AI collaboration. This research matters for improving AI decision-making under uncertainty, strengthening human-AI teaming, and providing insight into human cognition and social dynamics, including the spread of misinformation.
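To make the idea of maintaining a belief state concrete, the sketch below shows the standard discrete Bayes-filter update used in partially observable settings: the agent predicts the next hidden state with a transition model, then corrects that prediction with the likelihood of the new observation and renormalizes. The two-state example, the transition and observation matrices, and the observation sequence are illustrative assumptions, not taken from any specific paper summarized here.

```python
import numpy as np

# Hypothetical two-state example: the agent is in "state 0" or "state 1".
# T[s, s'] = P(s' | s) under a fixed action; O[s', o] = P(o | s').
T = np.array([[0.7, 0.3],
              [0.2, 0.8]])
O = np.array([[0.9, 0.1],
              [0.2, 0.8]])


def update_belief(belief, obs, T, O):
    """One Bayes-filter step: predict with the transition model,
    then weight by the observation likelihood and renormalize."""
    predicted = T.T @ belief            # P(s') = sum_s P(s' | s) * b(s)
    unnormalized = O[:, obs] * predicted
    return unnormalized / unnormalized.sum()


belief = np.array([0.5, 0.5])           # uniform prior over the two states
for obs in [0, 0, 1]:                   # a short observation sequence
    belief = update_belief(belief, obs, T, O)
    print(belief)
```

The same predict-correct structure underlies more elaborate belief-state models; deep learning approaches typically replace the explicit transition and observation matrices with learned networks while keeping the belief vector (or an embedding of it) as the quantity that is updated.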