Real Human
Research on "Real Human" focuses on understanding and replicating human capabilities, particularly in perception, cognition, and social interaction, using artificial intelligence models. Current efforts concentrate on developing and evaluating large language models (LLMs) and large vision-language models (LVLMs), often built on transformer and diffusion architectures, and on benchmarking AI performance against humans in tasks ranging from visual perception and emotion recognition to complex decision-making and social interaction. These studies aim to better align AI systems with human behavior and understanding, with implications for human-computer interaction, robotics, and the social sciences.
Papers
Compositional learning of functions in humans and machines
Yanli Zhou, Brenden M. Lake, Adina Williams
Does AI help humans make better decisions? A statistical evaluation framework for experimental and observational studies
Eli Ben-Michael, D. James Greiner, Melody Huang, Kosuke Imai, Zhichao Jiang, Sooahn Shin
SculptDiff: Learning Robotic Clay Sculpting from Humans with Goal Conditioned Diffusion Policy
Alison Bartsch, Arvind Car, Charlotte Avra, Amir Barati Farimani
Belief Aided Navigation using Bayesian Reinforcement Learning for Avoiding Humans in Blind Spots
Jinyeob Kim, Daewon Kwak, Hyunwoo Rim, Donghan Kim