Compositionality Benchmark

Compositionality benchmarks evaluate the ability of artificial intelligence models, particularly large language and vision-language models, to combine known concepts into novel, meaningful expressions, mirroring a core aspect of human intelligence. Current research focuses on building benchmarks that span diverse tasks (e.g., text-to-image synthesis, machine translation, robotic manipulation) and on measuring how different model architectures, including contrastive learning approaches and multimodal attention mechanisms, perform on them. These efforts aim to improve how models handle compositional language and visual structure, making clear where current systems fall short and guiding the development of more robust, reliable, and human-like AI.
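
Many vision-language compositionality benchmarks follow a common recipe: pair each image with a correct caption and a minimally edited "hard negative" caption that reuses the same words in a different arrangement, then check whether the model's image-text similarity prefers the correct caption. The sketch below illustrates that scoring loop in generic terms; the `CompositionExample` structure, the stubbed `similarity` function, and the example captions are illustrative assumptions rather than the protocol of any specific benchmark.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class CompositionExample:
    """One benchmark item: an image paired with a correct caption and a
    hard-negative caption built by recombining the same words."""
    image_id: str
    caption: str            # e.g. "a red cube on a blue sphere"
    swapped_caption: str    # e.g. "a blue cube on a red sphere"


def compositional_accuracy(examples: list[CompositionExample],
                           similarity: Callable[[str, str], float]) -> float:
    """Fraction of items where the model scores the correct caption
    above the attribute-swapped distractor for the same image."""
    correct = sum(
        similarity(ex.image_id, ex.caption)
        > similarity(ex.image_id, ex.swapped_caption)
        for ex in examples
    )
    return correct / len(examples)


if __name__ == "__main__":
    # Stub similarity scores for illustration only; a real evaluation would
    # call an image-text model (e.g. a CLIP-style encoder) here.
    fake_scores = {
        ("img_001", "a red cube on a blue sphere"): 0.31,
        ("img_001", "a blue cube on a red sphere"): 0.29,
    }
    sim = lambda image_id, caption: fake_scores.get((image_id, caption), 0.0)

    items = [CompositionExample("img_001",
                                "a red cube on a blue sphere",
                                "a blue cube on a red sphere")]
    print(f"compositional accuracy: {compositional_accuracy(items, sim):.2f}")
```

Scoring by pairwise preference rather than absolute similarity is a deliberate choice in this sketch: it isolates whether the model is sensitive to word order and attribute binding, independent of how well-calibrated its raw similarity scores are.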

Papers