Paper ID: 2311.11477
What's left can't be right -- The remaining positional incompetence of contrastive vision-language models
Nils Hoehing, Ellen Rushe, Anthony Ventresque
Contrastive vision-language models like CLIP have been found to lack spatial understanding capabilities. In this paper we discuss the possible causes of this phenomenon by analysing both datasets and embedding space. Focusing on simple left-right positional relations, we show that this behaviour is entirely predictable, even with large-scale datasets; we then demonstrate that these relations can be taught using synthetic data and show that this approach generalises well to natural images, improving performance on left-right relations in Visual Genome Relations.
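The kind of left-right probe the abstract describes can be illustrated with a minimal sketch: scoring an image against two captions that differ only in the positional word. The snippet below is not the authors' evaluation protocol; it assumes the Hugging Face transformers CLIP API, the "openai/clip-vit-base-patch32" checkpoint, and a hypothetical image file, purely for illustration.

```python
# Minimal sketch: probing a CLIP model's left-right sensitivity by comparing
# its scores for two captions that differ only in the positional relation.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cat_left_of_dog.jpg")  # hypothetical test image
captions = [
    "a cat to the left of a dog",
    "a cat to the right of a dog",
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)

# A spatially competent model should assign clearly more probability to the
# caption with the correct relation; the paper reports that contrastive
# vision-language models often fail to separate the two.
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.3f}  {caption}")
```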
Submitted: Nov 20, 2023