Human Language
Human language research aims to understand how humans process, produce, and learn language, focusing on both its cognitive and computational aspects. Current work relies heavily on large language models (LLMs) and vision-language models (VLMs), applying them to tasks such as word sense disambiguation, cross-modal reasoning, and the analysis of language across diverse contexts, from online communities to medical imaging. These advances improve machine translation, text-to-speech synthesis, and other applications, while also providing new tools for investigating fundamental questions about human cognition and language acquisition.
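As a concrete illustration of one task mentioned above, the sketch below shows how contextual embeddings from a pretrained language model can separate word senses, the core idea behind LLM-based word sense disambiguation. This is a minimal, hedged example, not taken from any of the listed papers; the model name, example sentences, and the helper function word_embedding are illustrative assumptions.

```python
# Minimal sketch: word sense disambiguation via contextual embeddings.
# Assumes the Hugging Face transformers library; model and sentences are illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of the first occurrence of `word`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)
    word_id = tokenizer.convert_tokens_to_ids(word)
    position = (inputs["input_ids"][0] == word_id).nonzero()[0].item()
    return hidden[position]

# The same surface form "bank" in a river sense vs. two financial senses.
# Higher cosine similarity indicates closer senses in embedding space.
river = word_embedding("They sat on the bank of the river.", "bank")
money = word_embedding("She deposited cash at the bank.", "bank")
money2 = word_embedding("The bank approved the loan.", "bank")

cos = torch.nn.functional.cosine_similarity
print("river vs. money :", cos(river, money, dim=0).item())
print("money vs. money2:", cos(money, money2, dim=0).item())
```

In this sketch, the two financial uses of "bank" should score higher similarity to each other than to the river use, illustrating how contextual representations support sense distinctions without explicit sense labels.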
Papers
Distributed agency in second language learning and teaching through generative AI
Robert Godwin-Jones
Development of Compositionality and Generalization through Interactive Learning of Language and Action of Robots
Prasanna Vijayaraghavan, Jeffrey Frederic Queisser, Sergio Verduzco Flores, Jun Tani
Context-Aware Integration of Language and Visual References for Natural Language Tracking
Yanyan Shao, Shuting He, Qi Ye, Yuchao Feng, Wenhan Luo, Jiming Chen
LITA: Language Instructed Temporal-Localization Assistant
De-An Huang, Shijia Liao, Subhashree Radhakrishnan, Hongxu Yin, Pavlo Molchanov, Zhiding Yu, Jan Kautz
Language Plays a Pivotal Role in the Object-Attribute Compositional Generalization of CLIP
Reza Abbasi, Mohammad Samiei, Mohammad Hossein Rohban, Mahdieh Soleymani Baghshah