Extracting Concepts from GPT-4
We used new scalable methods to decompose GPT-4’s internal representations into 16 million oft-interpretable patterns.
We currently don't understand how to make sense of the neural activity within language models. Today, we are sharing improved methods for finding a large number of "features"—patterns of activity that we hope are human interpretable. Our methods scale better than existing work, and we use them to find 16 million features in GPT-4. We are sharing a paper, code, and feature visualizations with the research community to foster further exploration.
The challenge of interpreting neural networks
We don't really understand the inner workings of neural networks the way we understand most other human creations. For example, engineers can directly design, assess, and fix cars based on the specifications of their components, ensuring safety and performance. Neural networks, by contrast, are not designed directly; we instead design the algorithms that train them. The resulting networks are not well understood and cannot be easily decomposed into identifiable parts. This means we cannot reason about AI safety the way we reason about something like car safety.
In order to understand and interpret neural networks, we first need to find useful building blocks for neural computations. Unfortunately, the activations inside a language model fire in unpredictable patterns, seemingly representing many concepts simultaneously. They also fire densely: essentially every unit is active on every input. Real-world concepts, by contrast, are sparse; in any given context, only a small fraction of all concepts are relevant. This motivates the use of sparse autoencoders, a method for identifying a handful of "features" in the neural network that are important to producing any given output, akin to the small set of concepts a person might have in mind when reasoning about a situation. The features they learn display sparse activation patterns that naturally align with concepts easy for humans to understand, even without direct incentives for interpretability.
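To make the idea concrete, here is a minimal sketch in standard PyTorch of the textbook formulation (illustrative only, not our exact training recipe): the encoder maps a dense activation vector into a much larger feature space, the features are kept non-negative and encouraged to be mostly zero, and the decoder tries to reconstruct the original activations from them.

```python
# Minimal sparse autoencoder over model activations (textbook formulation).
# All names and hyperparameters here are illustrative.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)   # activations -> feature space
        self.decoder = nn.Linear(n_features, d_model)   # feature space -> reconstruction

    def forward(self, activations: torch.Tensor):
        # Each feature is a non-negative scalar; most should be ~0 on any given input.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return reconstruction, features

def sae_loss(activations, reconstruction, features, l1_coeff=1e-3):
    # Reconstruction error keeps the features faithful to the model's activity;
    # the L1 penalty pushes most feature activations to zero (sparsity).
    mse = (reconstruction - activations).pow(2).mean()
    sparsity = features.abs().mean()
    return mse + l1_coeff * sparsity
```

Trained on a large number of activation vectors with a loss like the one above, each learned feature direction tends to fire on only a small, thematically coherent set of inputs.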
However, there are still serious challenges to training sparse autoencoders. Large language models represent a huge number of concepts, and our autoencoders may need to be correspondingly huge to get close to full coverage of the concepts in a frontier model. Learning a large number of sparse features is challenging, and past work has not been shown to scale well.
Our research progress: large scale autoencoder training
We developed new state-of-the-art methods that allow us to scale our sparse autoencoders to tens of millions of features on frontier AI models. Our approach demonstrates smooth and predictable scaling, with better returns to scale than prior techniques. We also introduce several new metrics for evaluating feature quality.
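One concrete way to control sparsity directly when scaling to many features is a TopK constraint: keep only the k largest feature pre-activations for each input and zero out the rest, so every input activates exactly k features. The snippet below is a hedged illustration of that mechanism, not the full recipe from the paper.

```python
import torch

def topk_features(pre_activations: torch.Tensor, k: int) -> torch.Tensor:
    # Keep the k largest pre-activations per example and zero out the rest,
    # giving direct control over how many features fire on each input.
    values, indices = torch.topk(pre_activations, k, dim=-1)
    sparse = torch.zeros_like(pre_activations)
    sparse.scatter_(-1, indices, torch.relu(values))
    return sparse
```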
We used our recipe to train a variety of autoencoders on GPT-2 small and GPT-4 activations, including a 16 million feature autoencoder on GPT-4. To check the interpretability of features, we visualize a given feature by showing documents where it activates. Here are some interpretable features we found:
GPT-4 feature: phrases relating to things (especially humans) being flawed
View the full visualization. We found many other interesting features, which you can browse here.
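As a rough sketch of how such visualizations can be produced (the `get_layer_activations` helper and the `sae` module below are hypothetical stand-ins for extracting activations at the trained layer and encoding them into features), one can rank token positions across documents by how strongly a chosen feature fires and display short text windows around the strongest positions.

```python
import torch

def top_activating_contexts(sae, docs, tokenizer, get_layer_activations,
                            feature_idx, window=10, top_n=5):
    """Rank token positions by how strongly one feature fires, and return
    short text windows around the strongest positions."""
    hits = []
    for doc in docs:
        tokens = tokenizer(doc, return_tensors="pt")["input_ids"][0]
        acts = get_layer_activations(tokens)    # [seq_len, d_model], hypothetical helper
        _, features = sae(acts)                 # [seq_len, n_features]
        strengths = features[:, feature_idx]
        pos = int(torch.argmax(strengths))
        context = tokenizer.decode(tokens[max(0, pos - window): pos + window])
        hits.append((float(strengths[pos]), context))
    hits.sort(key=lambda x: x[0], reverse=True)
    return hits[:top_n]
```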
Limitations
We are excited for interpretability to eventually increase model trustworthiness and steerability. However, this is still early work with many limitations:
As in previous work, many of the discovered features are still difficult to interpret: some activate with no clear pattern, and others exhibit spurious activations unrelated to the concept they usually seem to encode. Furthermore, we don't yet have good ways to check the validity of interpretations.
The sparse autoencoder does not capture all of the behavior of the original model. Currently, passing GPT-4's activations through the sparse autoencoder results in language-modeling performance equivalent to a model trained with roughly 10x less compute (a sketch of this kind of measurement appears below). To fully map the concepts in frontier LLMs, we may need to scale to billions or trillions of features, which would be challenging even with our improved scaling techniques.
Sparse autoencoders can find features at one point in the model, but that’s only one step towards interpreting the model. Much further work is required to understand how the model computes those features and how those features are used downstream in the rest of the model.
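The fidelity gap mentioned above can be measured by splicing the autoencoder's reconstruction back into the model's forward pass and comparing language-modeling loss with and without the splice. The sketch below assumes a Hugging Face-style GPT-2 model; the module path and hook point are illustrative, not the exact evaluation pipeline we used.

```python
import torch

@torch.no_grad()
def spliced_loss(model, sae, input_ids, layer_index):
    """Replace one layer's output with the SAE reconstruction and compare
    language-modeling loss against the unmodified model."""
    def splice_hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        reconstruction, _ = sae(hidden)
        if isinstance(output, tuple):
            return (reconstruction,) + output[1:]
        return reconstruction

    layer = model.transformer.h[layer_index]      # GPT-2-style module path (assumption)
    handle = layer.register_forward_hook(splice_hook)
    try:
        loss = model(input_ids, labels=input_ids).loss   # loss with the splice
    finally:
        handle.remove()
    baseline = model(input_ids, labels=input_ids).loss   # loss without the splice
    return float(loss), float(baseline)
```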
Looking ahead, and open sourcing our research
While sparse autoencoder research is exciting, there is a long road ahead with many unresolved challenges. In the short term, we hope the features we've found can be practically useful for monitoring and steering language model behaviors and plan to test this in our frontier models. Ultimately, we hope that one day, interpretability can provide us with new ways to reason about model safety and robustness, and significantly increase our trust in powerful AI models by giving strong assurances about their behavior.
Today, we are sharing a paper detailing our experiments and methods, which we hope will make it easier for researchers to train autoencoders at scale. We are releasing a full suite of autoencoders for GPT-2 small, along with code for using them, and the feature visualizer to get a sense of what the GPT-2 and GPT-4 features may correspond to.