Human Language
Human language research aims to understand how humans process, produce, and learn language, focusing on both its cognitive and computational aspects. Current work relies heavily on large language models (LLMs) and vision-language models (VLMs), applying them to tasks such as word sense disambiguation, cross-modal reasoning, and the analysis of language in diverse settings, from online communities to medical imaging. These advances are improving machine translation, text-to-speech synthesis, and other applications, while also providing new tools for investigating fundamental questions about human cognition and language acquisition.
Papers
ImpScore: A Learnable Metric For Quantifying The Implicitness Level of Language
Yuxin Wang, Xiaomeng Zhu, Weimin Lyu, Saeed Hassanpour, Soroush Vosoughi
Analyzing The Language of Visual Tokens
David M. Chan, Rodolfo Corona, Joonyong Park, Cheol Jun Cho, Yutong Bai, Trevor Darrell
One fish, two fish, but not the whole sea: Alignment reduces language models' conceptual diversity
Sonia K. Murthy, Tomer Ullman, Jennifer Hu
A Capabilities Approach to Studying Bias and Harm in Language Technologies
Hellina Hailu Nigatu, Zeerak Talat
Crystal: Illuminating LLM Abilities on Language and Code
Tianhua Tao, Junbo Li, Bowen Tan, Hongyi Wang, William Marshall, Bhargav M Kanakiya, Joel Hestness, Natalia Vassilieva, Zhiqiang Shen, Eric P. Xing, Zhengzhong Liu
From Imitation to Introspection: Probing Self-Consciousness in Language Models
Sirui Chen, Shu Yu, Shengjie Zhao, Chaochao Lu
Understanding Players as if They Are Talking to the Game in a Customized Language: A Pilot Study
Tianze Wang, Maryam Honari-Jahromi, Styliani Katsarou, Olga Mikheeva, Theodoros Panagiotakopoulos, Oleg Smirnov, Lele Cao, Sahar Asadi