Human Language
Human language research aims to understand how humans process, produce, and learn language, focusing on both its cognitive and computational aspects. Current research heavily utilizes large language models (LLMs) and vision-language models (VLMs), applying them to tasks like word sense disambiguation, cross-modal reasoning, and the analysis of language in diverse contexts such as online communities and medical images. These advancements are improving machine translation, text-to-speech synthesis, and other applications while also providing new tools for investigating fundamental questions about human cognition and language acquisition.
Papers
Health Disparities through Generative AI Models: A Comparison Study Using a Domain-Specific Large Language Model
Yohn Jairo Parra Bautista, Vinicius Lima, Carlos Theran, Richard Alo
How the Language of Interaction Affects the User Perception of a Robot
Barbara Sienkiewicz, Gabriela Sejnova, Paul Gajewski, Michal Vavrecka, Bipin Indurkhya
Counting the Bugs in ChatGPT's Wugs: A Multilingual Investigation into the Morphological Capabilities of a Large Language Model
Leonie Weissweiler, Valentin Hofmann, Anjali Kantharuban, Anna Cai, Ritam Dutt, Amey Hengle, Anubha Kabra, Atharva Kulkarni, Abhishek Vijayakumar, Haofei Yu, Hinrich Schütze, Kemal Oflazer, David R. Mortensen
Knolling Bot: Learning Robotic Object Arrangement from Tidy Demonstrations
Yuhang Hu, Zhizhuo Zhang, Xinyue Zhu, Ruibo Liu, Philippe Wyder, Hod Lipson
Acoustic and Linguistic Representations for Speech Continuous Emotion Recognition in Call Center Conversations
Manon Macary, Marie Tahon, Yannick Estève, Daniel Luzzati