Human Language
Human language research aims to understand how humans process, produce, and learn language, addressing both its cognitive and computational aspects. Current work draws heavily on large language models (LLMs) and vision-language models (VLMs), applying them to tasks such as word sense disambiguation, cross-modal reasoning, and the analysis of language in diverse contexts ranging from online communities to medical images. These advances are improving machine translation, text-to-speech synthesis, and other applications, while also providing new tools for investigating fundamental questions about human cognition and language acquisition.
Papers
Language Models Understand Us, Poorly
Jared Moore
Language Does More Than Describe: On The Lack Of Figurative Speech in Text-To-Image Models
Ricardo Kleinlein, Cristina Luna-Jiménez, Fernando Fernández-Martínez
Towards a neural architecture of language: Deep learning versus logistics of access in neural architectures for compositional processing
Frank van der Velde
Assessing Digital Language Support on a Global Scale
Gary F. Simons, Abbey L. Thomas, Chad K. White
RepsNet: Combining Vision with Language for Automated Medical Reports
Ajay Kumar Tanwani, Joelle Barral, Daniel Freedman
Style Matters! Investigating Linguistic Style in Online Communities
Osama Khalid, Padmini Srinivasan