Systematic Compositionality

Systematic compositionality, the ability to combine learned elements into novel, complex outputs, is a hallmark of human cognition that current large language models (LLMs) exhibit only weakly. Research centers on this "compositionality gap": the observation that models such as Transformers often answer each sub-problem of a compositional question correctly yet fail to compose those answers into a correct overall answer. To narrow the gap, work explores specialized training methods (e.g., mutual exclusivity training, primitive augmentation) and prompting strategies (e.g., chain of thought, self-ask; see the sketch below). Together, these efforts aim to close the distance between LLMs' strong memorization abilities and their comparatively weak multi-step reasoning, advancing both our understanding of intelligence and the capabilities of AI systems.
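
To make the prompting strategies concrete, here is a minimal sketch of the self-ask pattern (Press et al., 2022), in which the model is prompted to decompose a compositional question into explicit follow-up questions before composing a final answer. This is an illustrative sketch, not a reference implementation: `ask_model` is a hypothetical stand-in that replays canned text so the loop can be run end to end; in practice it would wrap a real LLM completion call.

```python
# Minimal self-ask sketch. `ask_model` is a hypothetical stand-in for an
# LLM completion call; here it replays canned text so the control flow
# runs end to end. Replace it with a real client for actual use.

SELF_ASK_HEADER = """\
Question: Who lived longer, Theodor Haecker or Harry Vaughan Watkins?
Are follow up questions needed here: Yes.
Follow up: How old was Theodor Haecker when he died?
Intermediate answer: Theodor Haecker was 65 years old when he died.
Follow up: How old was Harry Vaughan Watkins when he died?
Intermediate answer: Harry Vaughan Watkins was 69 years old when he died.
So the final answer is: Harry Vaughan Watkins

Question: {question}
Are follow up questions needed here:"""

# Canned model output, for demonstration only.
_CANNED = [
    " Yes.\n"
    "Follow up: Who directed the film Jaws?\n"
    "Intermediate answer: Steven Spielberg directed Jaws.\n"
    "Follow up: What is Steven Spielberg's nationality?\n"
    "Intermediate answer: Steven Spielberg is American.\n"
    "So the final answer is: American"
]


def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real completion request."""
    return _CANNED.pop(0)


def self_ask(question: str, max_steps: int = 5) -> str:
    """Elicit explicit sub-questions and intermediate answers, then
    return the composed final answer."""
    prompt = SELF_ASK_HEADER.format(question=question)
    for _ in range(max_steps):
        continuation = ask_model(prompt)
        prompt += continuation
        if "So the final answer is:" in continuation:
            return continuation.rsplit("So the final answer is:", 1)[-1].strip()
    return "no final answer within the step budget"


if __name__ == "__main__":
    print(self_ask("What is the nationality of the director of Jaws?"))
    # -> American
```

The one-shot exemplar in the prompt is the canonical example from the self-ask paper; the decompose-then-compose structure it induces is exactly the step that compositionality-gap measurements show vanilla prompting tends to skip.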

Papers