Ada uses GPT-4 to deliver a new customer service standard
Ada is fueling a $100B shift in customer service spend, and at the forefront of this transition is their AI-native customer service automation platform. Founded in 2016, Ada is now valued at $1.2B with a total of $200M in funding; customers include Verizon, YETI, Canva, and Square.
Ada isn’t new to AI; they’ve been an AI-native platform since inception. The first generation of the product was built using custom Natural Language Processing (NLP) models that were developed and trained in-house. But the team noticed a gap between how many customer questions their platform could handle and how many were actually being resolved satisfactorily.
Developing a new customer service metric: resolution rate
The industry-standard metric for customer service, containment rate, measures the share of customer inquiries that are handled entirely by a chatbot and never reach a human agent. Increasing the containment rate should, in theory, reduce operational expenses in customer service. But the team at Ada saw a problem with this metric.
“Many solutions, including Ada, could easily deliver 80–100% containment rates, but if you actually opened up those conversation transcripts and read the experiences customers had, they were quite poor,” Gozzo said.
By focusing their efforts on how well conversations were actually resolved, the team at Ada knew they could set a new standard in the customer service industry.
Initially using GPT-4 combined with their historical data, Ada built a new evaluation framework that automatically assesses how well conversations were resolved. Ada’s system rates each conversation on whether the customer received relevant, accurate, and safe replies without human intervention. “In our testing, our system achieved 80–90% agreement with a human reading the conversation,” Gozzo said.
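In practice, this kind of grading is often implemented as an LLM-as-judge check over conversation transcripts. The sketch below shows one minimal way to do that with the OpenAI Python SDK; the rubric, the gpt-4o model choice, and the RESOLVED/UNRESOLVED grading scheme are illustrative assumptions, not Ada’s actual framework.

```python
# Minimal sketch of LLM-graded "resolution" scoring (illustrative, not Ada's code).
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "You are grading a customer service conversation. Answer RESOLVED only if the "
    "customer received a relevant, accurate, and safe answer without being handed "
    "off to a human agent. Otherwise answer UNRESOLVED."
)

def is_resolved(transcript: str) -> bool:
    """Ask a GPT-4-class model to judge whether a transcript was truly resolved."""
    response = client.chat.completions.create(
        model="gpt-4o",   # stand-in for whichever GPT-4-class model does the grading
        temperature=0,
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("RESOLVED")

def resolution_rate(transcripts: list[str]) -> float:
    """Share of conversations the grader judges as resolved. Containment rate, by
    contrast, would only check whether a human agent ever entered the conversation."""
    graded = [is_resolved(t) for t in transcripts]
    return sum(graded) / len(graded)
```

Agreement with human raters (the 80–90% figure Gozzo cites) can then be measured by having people label a sample of the same transcripts and comparing the two sets of judgments.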
Battle-testing models and choosing OpenAI
Resolution rate gave Ada a north star for their product. In addition to using OpenAI’s API for evaluation, they decided to work with OpenAI to build the new generation of their AI Agent.
Ada had used OpenAI’s models before, mostly to generate training data for their NLP models. When putting OpenAI’s API to the test for their new use cases, Ada noted several differentiators:
Quality of inference: Ada has built a synthetic test framework that simulates hundreds of thousands of end customers talking to their system, so they can stress-test models on scalability and quality; a sketch of this kind of harness follows this list. “We evaluate every major model on a regular basis, within a few weeks of them being released, and so far no one has bested OpenAI on our evaluation set,” Gozzo said.
Latency: To ensure the best performance for Ada’s voice use cases, low-latency models are key. “For real-time reasoning, we’ve seen a big leap in latency and overall quality with GPT-4o,” Gozzo said.
Fine-tuning: Ada uses OpenAI’s fine-tuning API to produce a confidence score for how likely a given response is to contain hallucinations, and uses that score in the rest of the toolchain to minimize hallucinations in the system; a sketch of this kind of confidence gate also appears below. “We're also excited about the possibility over time of using fine tuning to further improve our system's ability to reason, and the potential for the technique to allow for smaller, less expensive models to mimic the performance of frontier models,” Gozzo said.
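To make the stress-testing point concrete, here is a minimal sketch of a simulation harness of the kind described above: one model role-plays an end customer, the agent under test responds, and the finished transcript can be graded with the resolution check shown earlier. The persona handling, the six-turn limit, and the agent_reply callable are assumptions for illustration; Ada’s actual framework isn’t public.

```python
# Illustrative synthetic stress-test harness: an LLM plays the customer, the
# production agent (wrapped in `agent_reply`) answers, and the transcript is kept
# for grading. Not Ada's framework; names and limits are made up for this sketch.
from openai import OpenAI

client = OpenAI()

def simulate_customer_turn(persona: str, transcript: list[dict]) -> str:
    """Generate the next customer message for a given persona. Roles are flipped
    so the simulator 'speaks' as the customer rather than the agent."""
    flipped = [
        {"role": "assistant" if m["role"] == "user" else "user", "content": m["content"]}
        for m in transcript
    ]
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": f"Role-play this customer: {persona}. Stay in character and "
                        "write one short message at a time."},
            *flipped,
        ],
    )
    return response.choices[0].message.content

def run_simulation(persona: str, agent_reply, max_turns: int = 6) -> list[dict]:
    """Alternate synthetic customer turns with the agent under test; `agent_reply`
    is whatever callable wraps the production agent."""
    transcript: list[dict] = []
    for _ in range(max_turns):
        customer = simulate_customer_turn(persona, transcript)
        transcript.append({"role": "user", "content": customer})
        transcript.append({"role": "assistant", "content": agent_reply(transcript)})
    return transcript
```

Running thousands of personas through a harness like this, then scoring the transcripts with the same grader, is one way to compare models on both quality and scale before anything ships.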
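The fine-tuning point can be sketched the same way: a fine-tuned grader model scores how well a candidate reply is supported by its sources, and replies below a threshold are regenerated or escalated. The model identifier, scoring prompt, and 0.8 threshold below are placeholders; how Ada’s score actually flows through their toolchain isn’t public.

```python
# Illustrative confidence gate built on a fine-tuned grader (placeholders throughout).
from openai import OpenAI

client = OpenAI()

# Hypothetical fine-tuned model trained on (sources, reply, grounded-or-not) examples.
GRADER_MODEL = "ft:gpt-4o-mini-2024-07-18:example-org:hallucination-grader:placeholder"

def hallucination_confidence(reply: str, sources: str) -> float:
    """Ask the fine-tuned grader how confident it is that `reply` is grounded in
    `sources`; the grader is prompted to answer with a single number from 0 to 1."""
    response = client.chat.completions.create(
        model=GRADER_MODEL,
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Score from 0 to 1 how well the reply is supported by the "
                        "sources. Respond with the number only."},
            {"role": "user", "content": f"Sources:\n{sources}\n\nReply:\n{reply}"},
        ],
    )
    return float(response.choices[0].message.content.strip())

def safe_to_send(reply: str, sources: str, threshold: float = 0.8) -> bool:
    """Downstream gate: only send replies the grader scores above the threshold;
    low-confidence replies can be regenerated or handed to a human."""
    return hallucination_confidence(reply, sources) >= threshold
```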
Doubling resolution rates with OpenAI
Today, Ada’s AI Agent uses a multi-agent configuration, with a central planning agent and a collection of subagents, all powered by OpenAI’s API. “Any time you communicate with a business that’s using Ada, your question is going to be fed through multiple turns of OpenAI's models, understanding it, reflecting on it, invoking tools, and bringing in knowledge until the answer is generated,” Gozzo explained.
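A planner-plus-subagents loop of this shape can be sketched with the OpenAI SDK’s tool calling: the planning model decides which subagent to invoke on each turn, reads the results back, and keeps going until it can answer. The subagents, tool schemas, and turn limit below are invented for illustration and are not Ada’s configuration.

```python
# Illustrative planner + subagents loop using OpenAI tool calling (not Ada's setup).
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical subagents the planner can delegate to.
def search_knowledge(query: str) -> str:
    return "…relevant help-center passages for: " + query      # e.g. retrieval-backed

def lookup_billing(account_id: str) -> str:
    return "…billing details for account " + account_id        # e.g. API-backed

SUBAGENTS = {"search_knowledge": search_knowledge, "lookup_billing": lookup_billing}

TOOLS = [
    {"type": "function", "function": {
        "name": "search_knowledge",
        "description": "Search the help center for relevant passages.",
        "parameters": {"type": "object",
                       "properties": {"query": {"type": "string"}},
                       "required": ["query"]}}},
    {"type": "function", "function": {
        "name": "lookup_billing",
        "description": "Fetch billing details for a customer account.",
        "parameters": {"type": "object",
                       "properties": {"account_id": {"type": "string"}},
                       "required": ["account_id"]}}},
]

def answer(question: str, max_turns: int = 5) -> str:
    """Let the planning model call subagents over multiple turns until it answers."""
    messages = [
        {"role": "system", "content": "You are a customer service planner. Use the "
                                      "tools to gather what you need, then answer."},
        {"role": "user", "content": question},
    ]
    for _ in range(max_turns):
        response = client.chat.completions.create(
            model="gpt-4o", messages=messages, tools=TOOLS
        )
        message = response.choices[0].message
        if not message.tool_calls:                 # planner is done gathering context
            return message.content
        messages.append(message)                   # keep the planner's tool requests
        for call in message.tool_calls:            # dispatch each subagent call
            result = SUBAGENTS[call.function.name](**json.loads(call.function.arguments))
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    return "I'll connect you with a human agent."  # fallback if no answer in time
```

Each pass through the loop is one of the “multiple turns” Gozzo describes: the model reflects on the question, invokes tools, and folds knowledge back in before producing the final reply.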
The previous version of Ada’s product had a 70% containment rate, but only a 30% resolution rate. Customers Ada has brought to the new system see similar containment results, but typically reach a resolution rate of up to 60%, with the highest-performing customers north of 80%.
“From one technology to the next, we’ve doubled the amount of conversations we can automatically resolve with a great experience,” Gozzo said. For Ada’s customers, this metric has a massive downstream impact on ROI, including full-time equivalent (FTE) savings, customer satisfaction, retention, and new signups.
100% resolution is achievable with AI
Ada is so confident in their ability to resolve customer queries that they’ve set their sights on a 100% resolution rate, which would have seemed like a fantasy just a few years ago. “Now, 100% resolution is a when, not an if,” Gozzo said.
The market has shifted, too—Ada doesn’t encounter much skepticism when they talk about automated resolution. “Enterprises are increasingly savvier,” Gozzo said. “They’re really thinking about AI strategies, and championing bringing technology like this into their operations.” Customers can onboard, measure, and coach Ada’s LLM-based AI Agents just like a human agent. Ada’s goals for the next 12 months are to dramatically increase the transparency of their product and the control levers they offer customers.
The sense of possibility is mutual. “One thing that stands out at OpenAI compared to other folks that we work with is a sheer sense of excitement and curiosity,” said Gozzo. “It feels like everybody on the team is really passionate about this. Beyond the business relationship, it’s about how we’re building something together, and that’s really refreshing.”