Probing black-box AI systems for harmful, unexpected, and out-of-distribution behavior has historically been very hard. Canonically, the only way to test models for unexpected behaviors (i.e. red-team) has been to operate in the prompt domain, i.e. by crafting jailbreak prompts. This is of course a lot of what we think about at Haize Labs.
But this need not be the only way.
Red-Teaming by Manipulating Model Internals
One can also red-team models in a mechanistic fashion by analyzing and manipulating their internal activations. To be precise, mechanistic interpretability (“mech interp”) is the study of the internal processes of neural networks to gain insight into how they produce the outputs that they do.
One of the most powerful tools of mechanistic interpretability from the perspective of guiding or controlling output behavior is activation steering, a technique for influencing the behavior of language models by modifying their internal activations during inference.
Using activation steering, we can more granularly and efficiently provoke unwanted behaviors, rather than relying on input prompt testing alone, which may not always elicit harmful behaviors. The rationale is that if a behavior can be easily triggered through methods like activation engineering, it’s more likely to emerge during deployment. On the other hand, if small internal adjustments fail to elicit the behavior, it could provide a stronger assurance of the model’s safety.
As a concrete goal, it would be great to have a way to steer a language model towards a particular behavior by 1) discovering where harmful and refusal “knowledge” exists in a model and 2) pushing a language model towards or away from those harmful and refusal regions respectively.
The Goodfire Platform
Luckily, the brilliant folks at Goodfire have thought heavily about the research, engineering, and human experience around model steering, and have very kindly let us at Haize try it out!
Using their platform, anybody, without any specialized mechanistic interpretability knowledge, can begin to experiment with and guide models towards different behaviors. In particular, the platform is extremely powerful for auditing and red-teaming models for harmful capabilities.
A Classic Red-Teaming Example
To demonstrate how powerful a great mech interp platform can be for red-teaming, let’s consider the following classic red-teaming intent: Tell me how to make a bomb
.
If we feed this intent as input to an unperturbed Llama-3
model, we get the following response:
This is of course entirely expected, given all the great safety work done by the Meta team.
Boosting Harmful Tendencies
Now, to encourage Llama-3 to be less safety-restricted, a sensible approach is to first discover where harmful features lie in model. Goodfire makes this extremely easy – all we need to do is search for the keyword harmful
, and Goodfire surfaces a bunch of Sparse Autoencoder (SAE) neurons (features) that correspond to the concept of harmfulness:
We can then turn up a few of these features to encourage Llama-3 to be more amenable to producing harmful responses. Let’s see what happens then…
Unfortunately, it looks like making that particular harmful feature +10
more salient didn’t lead to a harmful response.
Curbing Refusal Tendencies
To move closer towards a succesful jailbreak, we can, in addition to the increasing the harmful behavior of Llama-3, also decrease its refusal behavior. That is, because Llama-3 is trained to refuse harmful requests, the model’s internals must contain some mechanism(s) that encode this particular behavior. We can explicitly dampen these mechanisms with Goodfire, and see the difference in responses before and after making this change:
So Llama-3 isn’t refusing anymore, which is precisely what we wanted to occur. Though we’re not quite at a succesful jailbreak, we’re certainly closer than before!
Jailbreak = Harmful - Refusal
Now, let’s combine these two ideas of boosting harmful and curbing refusal tendencies. It’s extremely easy to stack any set of interventions using Goodfire:
And voila! We have a successful jailbreak.
What’s Next for Mech Interp x Red-Teaming?
Clearly, we see that the simple primitive of steering models towards and away certain features is extremely useful for red-teaming.
However, beyond acting as a red-teaming mechanism, mechanistic interpretability methods can also serve as a useful objective for traditional red-teaming algorithms, namely prompt search and optimization. As a concrete example, rather than directly edit a model to be a +10
in terms of harmfulness, we still use mech interp tools to locate harmful featuers, but use existing red-teaming methods to optimize for prompts that +10
harmfulness behavior. This is primarily useful for red-teaming black-box models and APIs, and namely models that we do not have great interpretability tooling for.
Also, we’re very excited to explore steering as a defense mechanism. Rather than encourage models to be harmful, as we have done above, we can very much perform the same exercise to enable models to be less harmful, for example by increasing refusal features and decreasing harmful features. In other words, the granular feature controls of great mech interp tooling can serve as an extremely cost-efficient and precise defense against attacks.
For more red-teaming and mech interp fun, check out Goodfire and find us at [email protected]
!