Belief Representation

Belief representation in artificial intelligence concerns how systems, particularly large language models (LLMs), internally represent and manipulate beliefs about the world. Current research emphasizes formal frameworks and evaluation criteria for assessing the accuracy, coherence, and usability of these internal representations, often drawing on concepts from epistemic logic and decision theory. This work matters both for improving the reliability and trustworthiness of AI systems and for advancing our fundamental understanding of knowledge representation and reasoning. Ongoing efforts explore a range of model architectures, including those based on epistemic logic programs and model transformations, to better capture how beliefs are revised and maintained in dynamic environments.
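
As a concrete illustration of one common evaluation style in this area, the sketch below trains a linear probe to test whether hidden activations linearly encode a "belief" (truth-value) signal. It is only a minimal, hedged example: the activations are synthetic stand-ins generated with an assumed belief_direction variable, and the dimensions and names are illustrative, not taken from any particular paper; in practice the features would be an LLM's hidden states for true versus false statements.

```python
# Minimal sketch: probe synthetic "hidden states" for a linearly decodable
# belief/truth signal. All data here is simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_statements, hidden_dim = 1000, 128

# Hypothetical "belief direction": true statements shift activations along it.
belief_direction = rng.normal(size=hidden_dim)
labels = rng.integers(0, 2, size=n_statements)              # 1 = true, 0 = false
activations = rng.normal(size=(n_statements, hidden_dim))   # base hidden states
activations += np.outer(labels - 0.5, belief_direction)     # inject the signal

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0
)

# If the probe generalizes to held-out statements, the representation carries
# a linearly decodable belief signal -- one criterion used in this literature.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out probe accuracy: {probe.score(X_test, y_test):.2f}")
```

High held-out accuracy on such a probe is usually read as evidence that the representation encodes the relevant belief information, though it does not by itself show the model uses that information when generating text.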

Papers