Paper ID: 2408.09523
A Unified Framework for Interpretable Transformers Using PDEs and Information Theory
Yukun Zhang
This paper presents a unified theoretical framework for understanding Transformer architectures by integrating Partial Differential Equations (PDEs), Neural Information Flow Theory, and Information Bottleneck Theory. We model the information dynamics of a Transformer as a continuous PDE process comprising diffusion, self-attention, and nonlinear residual components. Comprehensive experiments across image and text modalities show that the PDE model captures key aspects of Transformer behavior, matching Transformer attention distributions with cosine similarity above 0.98 at every layer. While the model excels at replicating general information-flow patterns, it falls short of fully capturing complex nonlinear transformations. This work provides theoretical insight into Transformer mechanisms and offers a foundation for future optimization of deep learning architecture design. We discuss the implications of our findings and potential applications to model interpretability and efficiency, and we outline directions for enhancing PDE models to better mimic the intricate behaviors observed in Transformers, paving the way for more transparent and optimized AI systems.
Submitted: Aug 18, 2024
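The abstract does not spell out the exact PDE, so the following is a minimal sketch, assuming a forward-Euler discretization of an equation of the form dX/dt = alpha * Laplacian(X) + Attention(X) + f(X), with the Laplacian taken along the sequence axis, standard scaled dot-product attention, and a tanh stand-in for the nonlinear residual term. All function names, parameters, and the placeholder reference attention below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pde_step(X, Wq, Wk, Wv, alpha=0.1, dt=0.1):
    """One forward-Euler step of a hypothesized information-flow PDE:
    dX/dt = alpha * Laplacian(X) + Attention(X) + tanh(X).
    X: (seq_len, d_model) token representations at pseudo-time t."""
    # Discrete Laplacian along the sequence axis (diffusion term).
    lap = np.roll(X, 1, axis=0) - 2 * X + np.roll(X, -1, axis=0)
    # Scaled dot-product self-attention term.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(X.shape[1]))
    attn = A @ V
    # Simple nonlinear residual term (tanh used as a stand-in).
    residual = np.tanh(X)
    return X + dt * (alpha * lap + attn + residual), A

def cosine_similarity(a, b):
    # Flatten both maps and compare them as vectors.
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy usage: evolve random token states for a few pseudo-time steps and
# compare the PDE-induced attention map against a reference attention
# matrix (here a random placeholder; in the paper's setting this would
# come from a trained Transformer layer).
rng = np.random.default_rng(0)
seq_len, d = 16, 32
X = rng.normal(size=(seq_len, d))
Wq, Wk, Wv = (rng.normal(scale=d**-0.5, size=(d, d)) for _ in range(3))
for _ in range(6):  # six pseudo-time steps, loosely analogous to six layers
    X, A = pde_step(X, Wq, Wk, Wv)
reference_attention = softmax(rng.normal(size=(seq_len, seq_len)))
print("cosine similarity:", cosine_similarity(A, reference_attention))
```

The final comparison mirrors the evaluation described in the abstract: the attention-like map produced by the PDE dynamics is scored against a Transformer layer's attention distribution with cosine similarity, layer by layer.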