Human Mind
Research on the human mind currently focuses on complex cognitive processes such as Theory of Mind (ToM), the ability to attribute mental states to others, and on the implications of ToM for human-AI interaction and collaboration. This work involves developing and evaluating computational models, often built on large language models (LLMs) and multimodal architectures, to predict and simulate human behavior in social and collaborative contexts. Key areas of investigation include improving the accuracy and efficiency of these models, particularly under uncertainty and noisy data, and examining the ethical implications of increasingly sophisticated AI systems that can infer and respond to human mental states. These advances have significant implications for human-computer interaction, for more effective assistive technologies, and for our understanding of the human mind itself.
Papers
Large Model Strategic Thinking, Small Model Efficiency: Transferring Theory of Mind in Large Language Models
Nunzio Lore, Sepehr Ilami, Babak Heydari
Evaluating and Enhancing LLMs Agent based on Theory of Mind in Guandan: A Multi-Player Cooperative Game under Imperfect Information
Yauwai Yim, Chunkit Chan, Tianyu Shi, Zheye Deng, Wei Fan, Tianshi Zheng, Yangqiu Song
Hypothetical Minds: Scaffolding Theory of Mind for Multi-Agent Tasks with Large Language Models
Logan Cross, Violet Xiang, Agam Bhatia, Daniel LK Yamins, Nick Haber
Explicit Modelling of Theory of Mind for Belief Prediction in Nonverbal Social Interactions
Matteo Bortoletto, Constantin Ruhdorfer, Lei Shi, Andreas Bulling
Dissecting the Ullman Variations with a SCALPEL: Why do LLMs fail at Trivial Alterations to the False Belief Task?
Zhiqiang Pi, Annapurna Vadaparty, Benjamin K. Bergen, Cameron R. Jones
Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning
Lynn Chua, Badih Ghazi, Yangsibo Huang, Pritish Kamath, Ravi Kumar, Daogao Liu, Pasin Manurangsi, Amer Sinha, Chiyuan Zhang