Human Perception
Human perception research investigates how humans interpret sensory information, aiming to understand the mechanisms underlying our experience of the world and how these differ from machine perception. Current research focuses on aligning machine learning models with human perceptual judgments across modalities (vision, audio, language), often employing large language models and deep neural networks to analyze and predict human responses to stimuli, including AI-generated content. This work is crucial for improving AI systems, enhancing human-computer interaction, and mitigating algorithmic bias by grounding these systems in a more accurate understanding of human cognitive processes.
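As a concrete illustration of "aligning models with human perceptual judgments": a common way to quantify such alignment is rank correlation between human ratings and model scores over a shared set of stimuli. The sketch below is illustrative only; the data values and the framing (human realism ratings vs. model scores) are hypothetical, and it implements Spearman correlation from scratch rather than relying on any particular paper's protocol.

```python
def ranks(xs):
    """Average (1-based) ranks of xs, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        # Find the run of tied values starting at position i.
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    ra, rb = ranks(a), ranks(b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# Hypothetical data: human realism ratings (1-5) and a model's scores (0-1)
# for the same five stimuli. A value near 1.0 indicates strong alignment.
human = [4.5, 2.0, 3.5, 1.0, 5.0]
model = [0.9, 0.3, 0.6, 0.2, 0.8]
print(spearman(human, model))  # → 0.9
```

In practice, perceptual-alignment studies aggregate many raters per stimulus and often report Spearman's rho or Kendall's tau precisely because human ratings are ordinal, making rank-based measures more appropriate than raw Pearson correlation.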
Papers
Towards Geographic Inclusion in the Evaluation of Text-to-Image Models
Melissa Hall, Samuel J. Bell, Candace Ross, Adina Williams, Michal Drozdzal, Adriana Romero Soriano
POV Learning: Individual Alignment of Multimodal Models using Human Perception
Simon Werner, Katharina Christ, Laura Bernardy, Marion G. Müller, Achim Rettinger
Unmasking Illusions: Understanding Human Perception of Audiovisual Deepfakes
Ammarah Hashmi, Sahibzada Adil Shahzad, Chia-Wen Lin, Yu Tsao, Hsin-Min Wang