Human Language
Human language research aims to understand how humans process, produce, and learn language, focusing on both its cognitive and computational aspects. Current work draws heavily on large language models (LLMs) and vision-language models (VLMs), applying them to tasks such as word sense disambiguation, cross-modal reasoning, and the analysis of language in diverse contexts, from online communities to medical images. These advances are improving machine translation, text-to-speech synthesis, and other applications, while also providing new tools for investigating fundamental questions about human cognition and language acquisition.
Papers
Historical patterns of rice farming explain modern-day language use in China and Japan more than modernization and urbanization
Sharath Chandra Guntuku, Thomas Talhelm, Garrick Sherman, Angel Fan, Salvatore Giorgi, Liuqing Wei, Lyle H. Ungar
Explaining Vision and Language through Graphs of Events in Space and Time
Mihai Masala, Nicolae Cudlenco, Traian Rebedea, Marius Leordeanu
Learning to Model the World with Language
Jessy Lin, Yuqing Du, Olivia Watkins, Danijar Hafner, Pieter Abbeel, Dan Klein, Anca Dragan
Deep Dive into the Language of International Relations: NLP-based Analysis of UNESCO's Summary Records
Joanna Wojciechowska, Mateusz Sypniewski, Maria Śmigielska, Igor Kamiński, Emilia Wiśnios, Hanna Schreiber, Bartosz Pieliński
A Sentence is Worth a Thousand Pictures: Can Large Language Models Understand Hum4n L4ngu4ge and the W0rld behind W0rds?
Evelina Leivada, Gary Marcus, Fritz Günther, Elliot Murphy
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon