Belief Model

Belief models aim to represent and reason about an agent's understanding of the world, whether that agent is human or artificial, encompassing both its certainties and its uncertainties. Current research focuses on developing robust belief representations for large language models, improving belief tracking in dynamic environments (e.g., robotics), and incorporating belief formation into models of human and social behavior, often employing techniques such as Bayesian methods, Random Finite Sets, and neural networks. These advances have implications for improving AI systems' decision-making under uncertainty, enhancing human-robot interaction, and providing deeper insights into cognitive processes.
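
To make the Bayesian belief-tracking idea concrete, here is a minimal sketch of a discrete Bayes filter: a belief (a probability distribution over states) is propagated through a transition model and then conditioned on an observation. The `predict`/`update` function names, the three-room example, and all numerical values are illustrative assumptions, not drawn from any particular paper surveyed here.

```python
# Minimal sketch of Bayesian belief tracking over a discrete state space.
# The transition and observation models below are illustrative assumptions.
import numpy as np

def predict(belief, transition):
    """Propagate the belief through the state-transition model.

    belief:     (n,) prior probability over n discrete states
    transition: (n, n) matrix, transition[i, j] = P(next=j | current=i)
    """
    return belief @ transition

def update(belief, likelihood):
    """Condition the belief on an observation via Bayes' rule.

    likelihood: (n,) vector, likelihood[j] = P(observation | state=j)
    """
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Example: an agent tracks which of three rooms it occupies.
belief = np.array([1/3, 1/3, 1/3])            # uniform prior
transition = np.array([[0.8, 0.2, 0.0],       # mostly stay, occasionally move
                       [0.1, 0.8, 0.1],
                       [0.0, 0.2, 0.8]])
likelihood = np.array([0.1, 0.7, 0.2])        # sensor reading favors room 1

belief = update(predict(belief, transition), likelihood)
print(belief)  # posterior belief after one predict/update cycle
```

The same predict/update structure underlies the more elaborate approaches mentioned above (e.g., Random Finite Set filters and learned neural belief trackers), which replace the discrete tables with richer state and observation models.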

Papers