Real Human
Research on "Real Human" focuses on understanding and replicating human capabilities, particularly in perception, cognition, and social interaction, using artificial intelligence models. Current efforts concentrate on developing and evaluating large language models (LLMs) and large vision-language models (LVLMs), often incorporating architectures like transformers and diffusion models, to benchmark AI performance against human benchmarks in tasks ranging from visual perception and emotion recognition to complex decision-making and social interaction. These studies aim to improve AI systems' alignment with human behavior and understanding, ultimately impacting fields like human-computer interaction, robotics, and social sciences.
Papers
Limits of Large Language Models in Debating Humans
James Flamino, Mohammed Shahid Modi, Boleslaw K. Szymanski, Brendan Cross, Colton Mikolajczyk
Comparing Abstraction in Humans and Large Language Models Using Multimodal Serial Reproduction
Sreejan Kumar, Raja Marjieh, Byron Zhang, Declan Campbell, Michael Y. Hu, Umang Bhatt, Brenden Lake, Thomas L. Griffiths
From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations
Evonne Ng, Javier Romero, Timur Bagautdinov, Shaojie Bai, Trevor Darrell, Angjoo Kanazawa, Alexander Richard
Can AI Be as Creative as Humans?
Haonan Wang, James Zou, Michael Mozer, Anirudh Goyal, Alex Lamb, Linjun Zhang, Weijie J Su, Zhun Deng, Michael Qizhe Xie, Hannah Brown, Kenji Kawaguchi
WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion
Soyong Shin, Juyong Kim, Eni Halilaj, Michael J. Black
Holoported Characters: Real-time Free-viewpoint Rendering of Humans from Sparse RGB Cameras
Ashwath Shetty, Marc Habermann, Guoxing Sun, Diogo Luvizon, Vladislav Golyanik, Christian Theobalt
Humans vs Large Language Models: Judgmental Forecasting in an Era of Advanced AI
Mahdi Abolghasemi, Odkhishig Ganbold, Kristian Rotaru