Welcome to Haize Labs’ Blog ๐Ÿ•Š๏ธ

Our thoughts on the state of AI safety.

Haize Labs and AI21 Labs: Setting New Standards for Ethical AI in Business

Haize Labs, a leader in AI safety and alignment, has partnered with AI21 Labs to align the Jamba large language model (LLM) with the ethical and operational needs of businesses. This collaboration highlights Haize Labsโ€™ innovative automated red-teaming platform, which was instrumental in ensuring Jamba adheres to rigorous safety, transparency, and reliability standards. Figure 1: Haize Labs and AI21 Labs partnered to enable trust and safety for enterprise LLM applications. You can find the full details in our report here....

December 9, 2024

Automated Multi-Turn Red-Teaming with Cascade

Current and upcoming use cases of strong AI systems, for the most part, wonโ€™t be quick one-off sessions. Software engineers using code assistants might iterate on a tricky new feature in a large codebase over a lengthy AI-assisted edit conversation. Business professionals might chat with the same internal document for many queries in a row, performing extensive summarization, search, and data analysis. General users might employ computer use agents for long chains of everyday tasks, even giving input context that encodes some of their personal details....

October 27, 2024

Leveraging Mechanistic Interpretability for Red-Teaming: Haize Labs x Goodfire

Probing black-box AI systems for harmful, unexpected, and out-of-distribution behavior has historically been very hard. Canonically, the only way to test models for unexpected behaviors (i.e. red-team) has been to operate in the prompt domain, i.e. by crafting jailbreak prompts. This is of course a lot of what we think about at Haize Labs. But this need not be the only way. Red-Teaming by Manipulating Model Internals One can also red-team models in a mechanistic fashion by analyzing and manipulating their internal activations....

September 25, 2024

Simple, Safe, and Secure RAG: Haize Labs x MongoDB

RAG is a powerful and popular approach to ground GenAI responses in external knowledge. It has the potential to enable truly useful tools for high-stakes enterprise use cases, especially when paired with a powerful vector store solution like MongoDB Atlas. However, RAG apps may not be trustworthy and reliable out-of-the-box. In particular, they lack two things: Role-based access control (RBAC) when performing retrieval over sensitive enterprise documents Mechanisms to defend against malicious instructions (e....

September 15, 2024

Endless Jailbreaks with Bijection Learning: a Powerful, Scale-Agnostic Attack Method

Lately, weโ€™ve been working on understanding the impact of model capabilities on model safety. Models are becoming more and more capable. Recently released models may be better aligned with human preferences through more sophisticated safety guardrails; however, when jailbroken, these models can incorporate deep world knowledge and complex reasoning into unsafe responses, leading to more severe misuse. We believe that more powerful models come with new, emergent vulnerabilities not present in small models....

August 26, 2024

Red-Teaming Language Models with DSPy

At Haize Labs, we spend a lot of time thinking about automated red-teaming. At its core, this is really an autoprompting problem: how does one search the combinatorially infinite space of language for an adversarial prompt? If you want to skip this exposition and go straight to the code, check out our GitHub Repo. Enter DSPy One way to go about this problem is via DSPy, a new framework out of Stanford NLP used for structuring (i....

April 9, 2024

Making a SOTA Adversarial Attack on LLMs 38x Faster

Last summer, a group of CMU researchers developed one of the first automated red-teaming methods for LLMs [1]. Their approach, the Greedy Coordinate Gradient (GCG) algorithm, was able to jailbreak white-box models like Vicuna convincingly and models like Llama 2 to a partial degree. Interestingly, the attacks generated by GCG also transferred to black-box models, implying the existence of a shared, underlying set of common vulnerabilities across models. GCG is a perfectly good initial attack algorithm, but one common complaint is its lack of efficiency....

March 28, 2024

Practical Mechanisms for Preventing Harmful LLM Generations

So letโ€™s say that we have procured a set of adversarial inputs that successfully elicit harmful behaviors from a language model โ€“ provided by Haize Labs or some other party. This is great, but now what do we do to defend against and prevent these attacks in practice? Background: Standard Safety Fine Tuning The standard procedure at places like OpenAI [1], Anthropic [2], and Meta [3], and from various open-source efforts [4] is to โ€œbake inโ€ defensive behavior in response to some adversarial attack....

March 24, 2024

Red-Teaming Resistance Leaderboard

Last week, Haize Labs released the Red-Teaming Resistance Leaderboard in collaboration with the amazing Hugging Face team! The Red-Teaming Resistance Leaderboard! Why a New Leaderboard? While there has been no shortage of great work in the recent automated red-teaming literature, we felt that many of these attacks were extremely contrived and unlikely to appear in-the-wild in a way that would realistically and negatively impact language models. Moreover, the majority of these attacks were easily marred by simple and lightweight classifier-based defenses....

February 24, 2024

Content Moderation APIs are Really, Really Bad

With an ever-growing collection of text content on the web, and ever-growing LLM-powered systems that automatically interact with such data, scalable content moderation emerges as a significant challenge. One seemingly reasonable solution is from OpenAI. Since a tenet of their mission is to ensure AI benefits all of humanity, careful development and deployment of their models is top of mind. As part of this effort, OpenAI offers the free Moderation API....

January 10, 2024