Real Human
Research on "Real Human" focuses on understanding and replicating human capabilities, particularly in perception, cognition, and social interaction, using artificial intelligence models. Current efforts concentrate on developing and evaluating large language models (LLMs) and large vision-language models (LVLMs), often built on architectures such as transformers and diffusion models, to measure AI performance against human baselines on tasks ranging from visual perception and emotion recognition to complex decision-making and social interaction. These studies aim to better align AI systems with human behavior and understanding, with implications for fields such as human-computer interaction, robotics, and the social sciences.
Papers
Learning to Learn: How to Continuously Teach Humans and Machines
Parantak Singh, You Li, Ankur Sikarwar, Weixian Lei, Daniel Gao, Morgan Bruce Talbot, Ying Sun, Mike Zheng Shou, Gabriel Kreiman, Mengmi Zhang
Fine-tuning language models to find agreement among humans with diverse preferences
Michiel A. Bakker, Martin J. Chadwick, Hannah R. Sheahan, Michael Henry Tessler, Lucy Campbell-Gillingham, Jan Balaguer, Nat McAleese, Amelia Glaese, John Aslanides, Matthew M. Botvinick, Christopher Summerfield