Human Language
Human language research aims to understand how humans process, produce, and learn language, focusing on both its cognitive and computational aspects. Current work relies heavily on large language models (LLMs) and vision-language models (VLMs), applying them to tasks such as word sense disambiguation, cross-modal reasoning, and the analysis of language in diverse contexts, from online communities to medical imaging. These advances improve machine translation, text-to-speech synthesis, and other applications, while also providing new tools for investigating fundamental questions about human cognition and language acquisition.
Papers
Correlation Does Not Imply Compensation: Complexity and Irregularity in the Lexicon
Amanda Doucette, Ryan Cotterell, Morgan Sonderegger, Timothy J. O'Donnell
Accelerating evolutionary exploration through language model-based transfer learning
Maximilian Reissmann, Yuan Fang, Andrew S. H. Ooi, Richard D. Sandberg
LOLAMEME: Logic, Language, Memory, Mechanistic Framework
Jay Desai, Xiaobo Guo, Srinivasan H. Sengamedu
Amortizing intractable inference in diffusion models for vision, language, and control
Siddarth Venkatraman, Moksh Jain, Luca Scimeca, Minsu Kim, Marcin Sendera, Mohsin Hasan, Luke Rowe, Sarthak Mittal, Pablo Lemos, Emmanuel Bengio, Alexandre Adam, Jarrid Rector-Brooks, Yoshua Bengio, Glen Berseth, Nikolay Malkin
Image captioning in different languages
Emiel van Miltenburg
Learning the Language of Protein Structure
Benoit Gaujac, Jérémie Donà, Liviu Copoiu, Timothy Atkinson, Thomas Pierrot, Thomas D. Barrett
From Frege to chatGPT: Compositionality in language, cognition, and deep neural networks
Jacob Russin, Sam Whitman McGrath, Danielle J. Williams, Lotem Elber-Dorozko