Blog
A collection of posts by various people associated with Developmental Interpretability, some written before the agenda was conceived.
You're Measuring Model Complexity Wrong
Jesse Hoogland & Stan van Wingerden
2023-10-11
DSLT 4. Phase Transitions in Neural Networks
Liam Carroll
2023-06-24
DSLT 3. Neural Networks are Singular
Liam Carroll
2023-06-20
DSLT 2. Why Neural Networks obey Occam's Razor
Liam Carroll
2023-06-18
DSLT 0. Distilling Singular Learning Theory
Liam Carroll
2023-06-16
DSLT 1. The RLCT Measures the Effective Dimension of Neural Networks
Liam Carroll
2023-06-16
Approximation is expensive, but the lunch is cheap
Jesse Hoogland
2023-04-19
Empirical risk minimization is fundamentally confused
Jesse Hoogland
2023-03-22
The shallow reality of 'deep learning theory'
Jesse Hoogland
2023-02-22
Gradient surfing: the hidden role of regularization
Jesse Hoogland
2023-02-06
Interview with Daniel Murfet on Universal Phenomena in Learning Machines
Alexander Gietelink Oldenziel
2023-02-06
Spooky action at a distance in the loss landscape
Jesse Hoogland
2023-01-28
Neural networks generalize because of this one weird trick
Jesse Hoogland
2023-01-18