Compositional Language

Compositional language research investigates how the meaning of complex expressions is built from the meanings of individual words and the grammatical structures that combine them, with the goal of understanding and replicating this fundamental property of human language in artificial systems. Current research focuses on enhancing the compositional abilities of large language models (LLMs) and other deep neural networks (DNNs) through techniques such as iterated learning, architectural modifications (e.g., incorporating inductive biases), and improved training methods. This work addresses a core challenge in artificial intelligence, namely achieving robust generalization and flexible reasoning, with implications for natural language processing, computer vision, and robotic systems. Furthermore, understanding compositional mechanisms in artificial systems offers valuable insights into the cognitive processes underlying human language comprehension and generation.
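
To make the iterated-learning idea mentioned above concrete, the sketch below simulates generations of toy learners, each acquiring a meaning-to-signal mapping from a limited sample of the previous generation's productions. It is a minimal conceptual illustration, not the setup of any particular paper: the meaning space, the split-in-half decoding heuristic, and helpers such as `learn` and `produce` are assumptions of this toy, and real work in this area applies the paradigm to LLMs and other DNNs rather than lookup-table agents.

```python
"""Toy iterated-learning simulation (a conceptual sketch, not any specific
paper's method). Each generation learns a meaning->signal mapping from a
limited sample of the previous generation's output; combined with a learner
biased toward attribute-wise decomposition, the transmission bottleneck
pushes the language toward a stable compositional code."""

import random
from collections import Counter

COLORS = ["red", "blue", "green", "yellow"]
SHAPES = ["circle", "square", "star", "cross"]
MEANINGS = [(c, s) for c in COLORS for s in SHAPES]
SYLLABLES = ["ka", "mo", "ti", "zu", "ne", "pa", "lo", "wi"]


def random_word():
    # A word is two distinct syllables, e.g. "kamo".
    return "".join(random.sample(SYLLABLES, 2))


def learn(observations):
    """Induce one word per attribute value from observed (meaning, signal) pairs.

    The learner assumes signals decompose as '<color-word><shape-word>' and
    picks the most frequent candidate word for each attribute value; this
    split-in-half heuristic is the toy's built-in inductive bias.
    """
    color_votes, shape_votes = {}, {}
    for (color, shape), signal in observations:
        half = len(signal) // 2
        color_votes.setdefault(color, Counter())[signal[:half]] += 1
        shape_votes.setdefault(shape, Counter())[signal[half:]] += 1
    lexicon = {}
    for value, votes in list(color_votes.items()) + list(shape_votes.items()):
        lexicon[value] = votes.most_common(1)[0][0]
    return lexicon


def produce(lexicon, meaning):
    """Compose a signal from per-attribute words, inventing words when needed."""
    color, shape = meaning
    return lexicon.setdefault(color, random_word()) + lexicon.setdefault(shape, random_word())


def iterate(generations=20, bottleneck=8, seed=0):
    random.seed(seed)
    # Generation 0: an unstructured (holistic) language of random signals.
    language = {m: random_word() + random_word() for m in MEANINGS}
    for _ in range(generations):
        sample = random.sample(MEANINGS, bottleneck)    # transmission bottleneck
        observations = [(m, language[m]) for m in sample]
        lexicon = learn(observations)                   # next learner's grammar
        language = {m: produce(lexicon, m) for m in MEANINGS}
    return language


if __name__ == "__main__":
    for meaning, signal in sorted(iterate().items()):
        print(meaning, "->", signal)
```

After a few generations the printed language reuses the same word for each attribute value across meanings, including meanings the learner never observed, which is the kind of systematic generalization that compositional-language research aims to obtain in neural models.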

Papers