Real Human
Research on "Real Human" focuses on understanding and replicating human capabilities, particularly perception, cognition, and social interaction, using artificial intelligence models. Current efforts center on developing and evaluating large language models (LLMs) and large vision-language models (LVLMs), often built on transformer and diffusion architectures, and on benchmarking their performance against human baselines in tasks ranging from visual perception and emotion recognition to complex decision-making and social interaction. These studies aim to align AI systems more closely with human behavior and understanding, with applications in human-computer interaction, robotics, and the social sciences.
Papers
Aligning Artificial Intelligence with Humans through Public Policy
John Nay, James Daily
Learn to Predict How Humans Manipulate Large-sized Objects from Interactive Motions
Weilin Wan, Lei Yang, Lingjie Liu, Zhuoying Zhang, Ruixing Jia, Yi-King Choi, Jia Pan, Christian Theobalt, Taku Komura, Wenping Wang
SATBench: Benchmarking the speed-accuracy tradeoff in object recognition by humans and dynamic neural networks
Ajay Subramanian, Sara Price, Omkar Kumbhar, Elena Sizikova, Najib J. Majaj, Denis G. Pelli
Virtual Correspondence: Humans as a Cue for Extreme-View Geometry
Wei-Chiu Ma, Anqi Joyce Yang, Shenlong Wang, Raquel Urtasun, Antonio Torralba