Paper ID: 2412.07947
GPT-2 Through the Lens of Vector Symbolic Architectures
Johannes Knittel, Tushaar Gangavarapu, Hendrik Strobelt, Hanspeter Pfister
Understanding the general principles behind transformer models remains a complex endeavor. Experiments with probing and disentangling features using sparse autoencoders (SAEs) suggest that these models may manage linear features embedded as directions in the residual stream. This paper explores the resemblance between the decoder-only transformer architecture and vector symbolic architectures (VSAs) and presents experiments indicating that GPT-2 computes and communicates between layers using mechanisms similar to VSA bundling and binding operations on nearly orthogonal vectors. It further shows that these principles help explain a significant portion of the model's actual neural weights.
Submitted: Dec 10, 2024
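
The abstract's two key VSA operations can be illustrated with a minimal sketch (not taken from the paper): bundling superposes vectors by addition so each constituent remains detectable, and binding combines two vectors into one that resembles neither but can be unbound again. The element-wise multiplication used for binding below is one common VSA choice (MAP-style); the paper's exact formulation, and the use of GPT-2's residual-stream width as the dimension, are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 768  # illustrative dimension (GPT-2 small residual-stream width)

def rand_vec():
    # Random bipolar (+1/-1) vector; in high dimensions such vectors are nearly orthogonal.
    return rng.choice([-1.0, 1.0], size=d)

def cos(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a, b, c = rand_vec(), rand_vec(), rand_vec()

# Bundling: superpose vectors by addition; each constituent stays recoverable.
bundle = a + b
print(cos(bundle, a))  # ~0.7  -> a is "in" the bundle
print(cos(bundle, c))  # ~0.0  -> c is not

# Binding: element-wise multiplication yields a vector dissimilar to both inputs;
# multiplying again by the key (its own inverse for bipolar vectors) recovers the value.
key, value = rand_vec(), rand_vec()
bound = key * value
print(cos(bound, value))        # ~0.0  -> the bound vector hides the value
print(cos(bound * key, value))  # ~1.0  -> unbinding with the key recovers it
```

Because nearly orthogonal vectors interfere only weakly, many such bundled and bound items can coexist in a single high-dimensional vector, which is the property the paper relates to how information might be packed into and read out of the residual stream.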