Mechanistic Interpretability
Mechanistic interpretability aims to understand how neural networks, particularly large language models (LLMs) and image models, perform computations by reverse-engineering their internal mechanisms. Current research focuses on identifying and characterizing "circuits" (minimal subnetworks responsible for specific tasks) within transformer architectures, often using techniques such as sparse autoencoders and activation patching to analyze neuron activations and attention mechanisms. This work is important for improving model reliability, safety, and trustworthiness, and for gaining fundamental insight into how artificial intelligence systems compute and how that computation relates to human cognition. A minimal sketch of the sparse-autoencoder technique mentioned here is shown below.
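As a rough illustration of the sparse-autoencoder approach referenced above, the sketch below decomposes model activations into an overcomplete set of sparse features by training a one-layer autoencoder with an L1 sparsity penalty. This is a minimal sketch, not the method of any listed paper: PyTorch is assumed, and the names (SparseAutoencoder, sae_loss), the dimensions, and the l1_coeff value are illustrative assumptions.

```python
# Minimal sparse-autoencoder sketch for decomposing cached model activations.
# Assumes PyTorch; sizes and the L1 coefficient are illustrative, not from the papers.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    """Map d_model-dimensional activations to n_features sparse codes and back."""

    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # non-negative, sparse codes
        reconstruction = self.decoder(features)            # map back to activation space
        return reconstruction, features


def sae_loss(reconstruction, activations, features, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty that pushes most features to zero.
    mse = torch.mean((reconstruction - activations) ** 2)
    sparsity = l1_coeff * features.abs().mean()
    return mse + sparsity


if __name__ == "__main__":
    d_model, n_features = 512, 4096                 # hypothetical sizes
    sae = SparseAutoencoder(d_model, n_features)
    acts = torch.randn(64, d_model)                 # stand-in for cached activations
    recon, feats = sae(acts)
    loss = sae_loss(recon, acts, feats)
    loss.backward()
    print(f"loss={loss.item():.4f}, "
          f"mean active features={(feats > 0).float().sum(dim=-1).mean().item():.1f}")
```

In practice the autoencoder would be trained on activations collected from a fixed layer of the model of interest, and the learned feature directions would then be inspected (for example, by looking at the inputs that most strongly activate each feature) to attach human-interpretable meaning to them.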
Papers
Mechanistic interpretability of large language models with applications to the financial services industry
Ashkan Golgoon, Khashayar Filom, Arjun Ravi Kannan
Learning biologically relevant features in a pathology foundation model using sparse autoencoders
Nhat Minh Le, Ciyue Shen, Neel Patel, Chintan Shah, Darpan Sanghavi, Blake Martin, Alfred Eng, Daniel Shenker, Harshith Padigela, Raymond Biju, Syed Ashar Javed, Jennifer Hipp, John Abel, Harsha Pokkalla, Sean Grullon, Dinkar Juyal