Paper ID: 2306.08221

Contrastive Loss is All You Need to Recover Analogies as Parallel Lines

Narutatsu Ri, Fei-Tzin Lee, Nakul Verma

While static word embedding models are known to represent linguistic analogies as parallel lines in high-dimensional space, the underlying mechanism by which these geometric structures arise remains obscure. We find that an elementary contrastive-style method applied to distributional information performs competitively with popular word embedding models on analogy recovery tasks, while achieving dramatic speedups in training time. Further, we demonstrate that a contrastive loss is sufficient to create these parallel structures in word embeddings, and establish a precise relationship between the co-occurrence statistics and the geometric structure of the resulting word embeddings.
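The abstract gives no implementation details, so the sketch below is purely illustrative, not the authors' method: it trains embeddings with a skip-gram-with-negative-sampling (SGNS) style contrastive loss over (center, context) co-occurrence pairs, a standard stand-in for "a contrastive-style method over distributional information," and then measures how parallel two analogy offsets are. The toy corpus, hyperparameters, and loss choice are all assumptions.

```python
# Hypothetical sketch: an SGNS-style contrastive loss over co-occurrence
# pairs. Corpus and hyperparameters are made up for illustration.
import numpy as np

corpus = [
    "king is to queen as man is to woman".split(),
    "paris is to france as tokyo is to japan".split(),
] * 200  # repeat the toy sentences so SGD sees enough pairs

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

dim, lr, window, neg_k = 16, 0.05, 2, 5
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(len(vocab), dim))  # center vectors
C = rng.normal(scale=0.1, size=(len(vocab), dim))  # context vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i == j:
                continue
            c, ctx = idx[w], idx[sent[j]]
            # Positive pair: minimize -log sigmoid(W[c] . C[ctx]),
            # pulling the center vector toward the observed context.
            s = sigmoid(W[c] @ C[ctx]) - 1.0
            gw, gc = s * C[ctx], s * W[c]
            W[c] -= lr * gw
            C[ctx] -= lr * gc
            # Negative pairs: minimize -log sigmoid(-W[c] . C[n]),
            # pushing the center vector away from random contexts.
            for n in rng.integers(0, len(vocab), size=neg_k):
                s = sigmoid(W[c] @ C[n])
                gw, gn = s * C[n], s * W[c]
                W[c] -= lr * gw
                C[n] -= lr * gn

# Check the parallel-lines structure: on a large real corpus, offsets
# like (queen - king) and (woman - man) become nearly parallel; on this
# toy corpus the cosine below is illustrative only.
def vec(w):
    return W[idx[w]]

a = vec("queen") - vec("king")
b = vec("woman") - vec("man")
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"cosine(queen-king, woman-man) = {cos:.3f}")
```

A cosine near 1 between the two offset vectors is what "analogies as parallel lines" means operationally; the paper's claim is that a contrastive loss of this general shape, driven by co-occurrence statistics, is sufficient to produce it.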

Submitted: Jun 14, 2023