Differentiation and Specialization of Attention Heads via the Refined Local Learning Coefficient

Authors

George Wang (Timaeus), Jesse Hoogland (Timaeus), Stan van Wingerden (Timaeus), Zach Furman (Timaeus), Daniel Murfet (University of Melbourne)

Published

Oct 04, 2024

Links

Read paper

Abstract

We introduce refined variants of the Local Learning Coefficient (LLC), a measure of model complexity grounded in singular learning theory, to study the development of internal structure in transformer language models during training. By applying these refined LLCs (rLLCs) to individual components of a two-layer attention-only transformer, we gain novel insights into the progressive differentiation and specialization of attention heads. Our methodology reveals how attention heads differentiate into distinct functional roles over the course of training, analyzes the types of data these heads specialize to process, and discovers a previously unidentified multigram circuit. These findings demonstrate that rLLCs provide a principled, quantitative toolkit for developmental interpretability, which aims to understand models through their evolution across the learning process. More broadly, this work takes a step towards establishing the correspondence between data distributional structure, geometric properties of the loss landscape, learning dynamics, and emergent computational structures in neural networks.
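As a rough illustration of the kind of estimator involved, below is a minimal PyTorch sketch of a weight-refined LLC estimate obtained by SGLD sampling around the trained weights w*, where only one component's parameters (e.g. a single attention head) are perturbed and the rest stay frozen. The function name, hyperparameters, and the substring-based selection of a component's parameters are illustrative assumptions, not the paper's exact implementation; details such as burn-in, multiple chains, and diagnostics are omitted for brevity.

```python
# Minimal sketch (assumed interface, not the authors' exact estimator) of a
# weight-refined LLC: SGLD samples a localized posterior around w*, perturbing
# only the parameters of one component, and compares the average sampled loss
# to the loss at w*.
import math
import torch


def refined_llc(model, loss_fn, loader, component, n_data,
                steps=300, lr=1e-5, gamma=100.0, beta=None):
    # Inverse temperature beta = 1 / log n, as in standard LLC estimation.
    beta = beta if beta is not None else 1.0 / math.log(n_data)

    # Snapshot w* and pick which tensors are sampled (hypothetical convention:
    # parameter names containing the `component` substring).
    star = {name: p.detach().clone() for name, p in model.named_parameters()}
    sampled = {name for name, _ in model.named_parameters() if component in name}

    def full_loss():
        # Empirical loss L_n(w) over the whole loader, without gradients.
        with torch.no_grad():
            total, count = 0.0, 0
            for x, y in loader:
                total += loss_fn(model(x), y).item() * len(x)
                count += len(x)
        return total / count

    loss_star = full_loss()  # L_n(w*)

    running, seen = 0.0, 0
    data_iter = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(data_iter)
        except StopIteration:
            data_iter = iter(loader)
            x, y = next(data_iter)

        model.zero_grad()
        batch_loss = loss_fn(model(x), y)
        batch_loss.backward()

        with torch.no_grad():
            for name, p in model.named_parameters():
                if name not in sampled:
                    continue  # weight refinement: everything else stays at w*
                # SGLD drift: tempered loss gradient plus localizing pull to w*.
                drift = n_data * beta * p.grad + gamma * (p - star[name])
                p.add_(-0.5 * lr * drift
                       + math.sqrt(lr) * torch.randn_like(p))

        running += batch_loss.item()
        seen += 1

    # lambda_hat = n * beta * (E_posterior[L_n(w)] - L_n(w*))
    return n_data * beta * (running / seen - loss_star)
```

A data-refined variant would keep the same sampler but evaluate the loss only on a restricted subset of the training distribution, so that the estimate reflects complexity with respect to that kind of data.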