False Belief

False belief tasks assess the ability to understand that others hold beliefs different from one's own and from reality—a crucial aspect of Theory of Mind (ToM). Current research focuses on whether large language models (LLMs) genuinely possess ToM or merely exploit statistical correlations in their training data, investigating this through variations of classic false-belief tests and through analysis of internal model components. Studies comparing LLMs' performance on these tasks with that of children reveal varying degrees of success depending on model architecture and training, highlighting the difficulty of attributing human-like cognitive abilities to AI. These findings matter both for understanding the capabilities and limitations of advanced AI systems and for advancing our theoretical understanding of ToM itself.
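To make the task structure concrete, the following is a minimal sketch of a classic Sally-Anne style false-belief item paired with a true-belief control, as used in many of these studies. The prompts, the `score` helper, and the condition names are illustrative assumptions, not taken from any specific paper; a real evaluation would send these prompts to an actual LLM and score its free-text answer.

```python
# Sketch of a Sally-Anne style false-belief evaluation item.
# Prompts and scoring rule are illustrative, not from a specific study.

FALSE_BELIEF = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble to the box. "
    "Sally returns. Where will Sally look for her marble?"
)

TRUE_BELIEF_CONTROL = (
    "Sally puts her marble in the basket and stays in the room. "
    "Anne moves the marble to the box while Sally watches. "
    "Where will Sally look for her marble?"
)

def score(condition: str, answer: str) -> bool:
    """Return True if the answer is belief-consistent for the condition.

    In the false-belief condition, Sally should search the basket
    (her outdated belief); in the control, the box (reality).
    """
    expected = "basket" if condition == "false_belief" else "box"
    return expected in answer.lower()

# Example: a belief-consistent answer passes the false-belief item.
print(score("false_belief", "Sally will look in the basket."))
```

The control condition matters: a model that always names the marble's true location would pass the control but fail the false-belief item, which is exactly the contrast these tasks exploit.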

Papers