Real Human
Research on "Real Human" focuses on understanding and replicating human capabilities, particularly in perception, cognition, and social interaction, using artificial intelligence models. Current efforts center on developing and evaluating large language models (LLMs) and large vision-language models (LVLMs), often built on transformer and diffusion architectures, and on benchmarking AI performance against human baselines in tasks ranging from visual perception and emotion recognition to complex decision-making and social interaction. These studies aim to better align AI systems with human behavior and understanding, with implications for fields such as human-computer interaction, robotics, and the social sciences.
Papers
Learning to Assist Humans without Inferring Rewards
Vivek Myers, Evan Ellis, Sergey Levine, Benjamin Eysenbach, Anca Dragan
Evaluating Creative Short Story Generation in Humans and Large Language Models
Mete Ismayilzada, Claire Stevenson, Lonneke van der Plas
Traffic and Safety Rule Compliance of Humans in Diverse Driving Situations
Michael Kurenkov, Sajad Marvi, Julian Schmidt, Christoph B. Rist, Alessandro Canevaro, Hang Yu, Julian Jordan, Georg Schildbach, Abhinav Valada
Do LLMs write like humans? Variation in grammatical and rhetorical styles
Alex Reinhart, David West Brown, Ben Markey, Michael Laudenbach, Kachatad Pantusen, Ronald Yurko, Gordon Weinberg
User-centric evaluation of explainability of AI with and for humans: a comprehensive empirical study
Szymon Bobek, Paloma Korycińska, Monika Krakowska, Maciej Mozolewski, Dorota Rak, Magdalena Zych, Magdalena Wójcik, Grzegorz J. Nalepa
How Aligned are Generative Models to Humans in High-Stakes Decision-Making?
Sarah Tan, Keri Mallari, Julius Adebayo, Albert Gordo, Martin T. Wells, Kori Inkpen
Can LVLMs Describe Videos like Humans? A Five-in-One Video Annotations Benchmark for Better Human-Machine Comparison
Shiyu Hu, Xuchen Li, Xuzhao Li, Jing Zhang, Yipei Wang, Xin Zhao, Kang Hao Cheong
Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents
Boyu Gou, Ruohan Wang, Boyuan Zheng, Yanan Xie, Cheng Chang, Yiheng Shu, Huan Sun, Yu Su
The LLM Effect: Are Humans Truly Using LLMs, or Are They Being Influenced By Them Instead?
Alexander S. Choi, Syeda Sabrina Akter, JP Singh, Antonios Anastasopoulos
Cross-lingual Speech Emotion Recognition: Humans vs. Self-Supervised Models
Zhichen Han, Tianqi Geng, Hui Feng, Jiahong Yuan, Korin Richmond, Yuanchao Li
TalkinNeRF: Animatable Neural Fields for Full-Body Talking Humans
Aggelina Chatziagapi, Bindita Chaudhuri, Amit Kumar, Rakesh Ranjan, Dimitris Samaras, Nikolaos Sarafianos