Belief Model
Belief models aim to represent and reason about an agent's (human or artificial) understanding of the world, encompassing both what the agent holds to be certain and what remains uncertain. Current research focuses on developing robust belief representations for large language models, improving belief tracking in dynamic environments (e.g., robotics), and incorporating belief formation into models of human and social behavior, often employing techniques such as Bayesian methods, Random Finite Sets, and neural networks. These advances have implications for improving AI systems' decision-making under uncertainty, enhancing human-robot interaction, and deepening our understanding of cognitive processes.
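As a concrete illustration of Bayesian belief tracking in a dynamic environment, the sketch below implements a minimal discrete Bayes filter: the belief is a probability distribution over a finite set of world states, alternately propagated through a transition model and conditioned on an observation. The two-state "door" world, its transition matrix, and the sensor likelihoods are illustrative assumptions, not drawn from any specific paper.

```python
import numpy as np

def predict(belief, transition):
    """Propagate the belief through the state-transition model.
    transition[i, j] = P(next state j | current state i)."""
    return belief @ transition

def update(belief, likelihood):
    """Condition the belief on an observation.
    likelihood[i] = P(observation | state i)."""
    posterior = belief * likelihood
    return posterior / posterior.sum()  # renormalize to a distribution

# Hypothetical two-state world: a door is either open or closed.
belief = np.array([0.5, 0.5])            # uniform prior over [open, closed]
transition = np.array([[0.9, 0.1],       # open tends to stay open
                       [0.2, 0.8]])      # closed tends to stay closed
sensor_says_open = np.array([0.8, 0.3])  # P(reading "open" | state)

belief = predict(belief, transition)     # time passes, world may change
belief = update(belief, sensor_says_open)  # incorporate the sensor reading
print(belief)                            # posterior [P(open), P(closed)]
```

The same predict/update cycle underlies Kalman filters, particle filters, and POMDP belief updates; what varies across the methods surveyed above is how the belief itself is represented (parametric distributions, sample sets, Random Finite Sets, or learned neural embeddings).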