Paper ID: 2410.12924 • Published Oct 16, 2024
Interpreting token compositionality in LLMs: A robustness analysis
Understanding the internal mechanisms of large language models (LLMs) is
integral to enhancing their reliability, interpretability, and inference
processes. We present Constituent-Aware Pooling (CAP), a methodology designed
to analyse how LLMs process compositional linguistic structures. Grounded in
principles of compositionality, mechanistic interpretability, and information
theory, CAP systematically intervenes in model activations through
constituent-based pooling at various model levels. Our experiments on inverse
definition modelling, hypernym prediction, and synonym prediction reveal critical insights
into transformers' limitations in handling compositional abstractions. No
specific layer integrates tokens into unified semantic representations based on
their constituent parts. Instead, we observe fragmented information processing
that intensifies with model size: larger models are more disrupted by these
interventions and exhibit greater information dispersion. This
fragmentation likely stems from transformers' training objectives and
architectural design, preventing systematic and cohesive representations. Our
findings highlight fundamental limitations in current transformer architectures
regarding compositional semantics processing and model interpretability,
underscoring the critical need for novel approaches in LLM design to address
these challenges.
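The core intervention described above can be illustrated with a minimal sketch. This is not the authors' implementation; the span boundaries, pooling function (mean), and array shapes here are illustrative assumptions. The idea is to replace each constituent's token-level activations with a single pooled vector and observe how model behaviour degrades:

```python
import numpy as np

def constituent_aware_pooling(hidden_states, spans):
    """Replace each constituent span's token activations with their mean.

    hidden_states: array of shape (seq_len, d_model), the activations at
                   one model layer (hypothetical shapes for illustration).
    spans: list of (start, end) pairs (end exclusive), each covering the
           tokens of one syntactic constituent.
    """
    pooled = hidden_states.copy()
    for start, end in spans:
        # Mean-pool the constituent's token activations ...
        mean_vec = hidden_states[start:end].mean(axis=0)
        # ... and broadcast the pooled vector back to every position in
        # the span, forcing a single constituent-level representation.
        pooled[start:end] = mean_vec
    return pooled

# Toy example: 5 tokens with 3-dim activations; tokens 1-2 form one
# constituent whose activations get merged.
h = np.arange(15, dtype=float).reshape(5, 3)
out = constituent_aware_pooling(h, [(1, 3)])
```

Running such an intervention at different layers and comparing task performance (e.g. on hypernym prediction) before and after pooling is one way to probe whether any layer holds a unified, constituent-level representation.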