Human Language
Human language research aims to understand how humans process, produce, and learn language, focusing on both its cognitive and computational aspects. Current research heavily utilizes large language models (LLMs) and vision-language models (VLMs), applying them to tasks like word sense disambiguation, cross-modal reasoning, and the analysis of language in diverse contexts such as online communities and medical images. These advancements are improving machine translation, text-to-speech synthesis, and other applications while also providing new tools for investigating fundamental questions about human cognition and language acquisition.
Papers
Identifying Quantum Mechanical Statistics in Italian Corpora
Diederik Aerts, Jonito Aerts Arguëlles, Lester Beltran, Massimiliano Sassoli de Bianchi, Sandro Sozzo
Searching for Structure: Investigating Emergent Communication with Large Language Models
Tom Kouwenhoven, Max Peeperkorn, Tessa Verhoef
RFL: Simplifying Chemical Structure Recognition with Ring-Free Language
Qikai Chang, Mingjun Chen, Changpeng Pi, Pengfei Hu, Zhenrong Zhang, Jiefeng Ma, Jun Du, Baocai Yin, Jinshui Hu
Driving with InternVL: Outstanding Champion in the Track on Driving with Language of the Autonomous Grand Challenge at CVPR 2024
Jiahan Li, Zhiqi Li, Tong Lu