Paper ID: 2402.19406

On the Scaling Laws of Geographical Representation in Language Models

Nathan Godey, Éric de la Clergerie, Benoît Sagot

Language models have long been shown to embed geographical information in their hidden representations. This line of work has recently been revisited by extending the result to Large Language Models (LLMs). In this paper, we propose to bridge the gap between the well-established and the more recent literature by observing how geographical knowledge evolves as language models are scaled. We show that geographical knowledge is observable even in tiny models, and that it scales consistently as model size increases. Notably, we observe that larger language models cannot mitigate the geographical bias that is inherent to the training data.
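The abstract refers to probing hidden representations for geographical information. The following is a minimal sketch (not the authors' code) of one common way to do this: fit a linear (ridge) probe from a model's hidden states for place names to latitude/longitude coordinates. The model name, layer choice, pooling strategy, and the tiny `places` list are illustrative assumptions; a real study would use a much larger gazetteer and a held-out evaluation set.

```python
# Sketch of a linear geographical probe over a language model's hidden states.
# All names below (model, places) are assumptions for illustration only.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from transformers import AutoModel, AutoTokenizer

model_name = "EleutherAI/pythia-160m"  # assumption: any causal LM could be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

# Toy probing set: (place name, latitude, longitude).
places = [
    ("Paris", 48.86, 2.35),
    ("Tokyo", 35.68, 139.69),
    ("Nairobi", -1.29, 36.82),
    ("Lima", -12.05, -77.04),
    ("Sydney", -33.87, 151.21),
    ("Toronto", 43.65, -79.38),
]

def embed(name: str) -> np.ndarray:
    """Mean-pool the last hidden layer over the place name's tokens."""
    inputs = tokenizer(name, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0).numpy()

X = np.stack([embed(name) for name, _, _ in places])
y = np.array([[lat, lon] for _, lat, lon in places])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0
)
probe = Ridge(alpha=1.0).fit(X_train, y_train)
pred = probe.predict(X_test)
print("Mean absolute error (degrees):", np.abs(pred - y_test).mean())
```

Repeating such a probe across model sizes is one way to study how the recoverable geographical signal scales; the code above only illustrates the probing step itself.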

Submitted: Feb 29, 2024