Sierra says conversational AI will destroy apps and websites


When I interviewed Bret Taylor and Clay Bavor last week about their new AI startup, I inadvertently insulted them. Their new company, Sierra, is developing AI-powered agents to “improve the customer experience” for big companies. Its initial customers include WeightWatchers, Sonos, SiriusXM, and OluKai (a “Hawaiian-inspired” clothing company). Sierra's eventual market is any company that interacts with its customers, which is a huge opportunity. Their plan strikes me as confirmation of the widely voiced prediction that 2024 will be the year when the AI models that have been blowing our minds for the past year turn into real products. So when I greeted these co-founders, whom I've known for years, I remarked that their company sounded “pretty boring.”

Was it wrong to say this? “I don't know if that's a compliment or a criticism or just a fact,” says Taylor, who left his job as co-CEO of Salesforce to start Sierra. I assured him that I meant it as the latter. “It's not like you're building AI girlfriends,” I noted.

It is significant that two of Silicon Valley's more visionary leaders are building an AI startup not to chase the silly trophy of superintelligence but to use recent AI advances to give a future to non-tech, mainstream corporations. Their experience puts them in the company of the industry's most accomplished builders: Taylor was a lead developer of Google Maps in the aughts, and Bavor led Google's VR efforts. They are eager to assure me that their hearts are still in moonshot mode. Both believe that conversational AI is as significant an advance as graphical user interfaces or smartphones, and will have at least as much impact on our lives. Sierra focuses on a specific, enterprise-y aspect of this. “In the future, a company's AI agent—essentially the AI version of that company—will be as important as its website,” Taylor says. “This will completely change the way companies exist digitally.”

To build its bots in a way that accomplishes that task effectively, pleasantly, and safely, Sierra had to craft some innovations that will advance AI agent technology in general. To deal with perhaps the most worrisome issue – hallucinations that could misinform customers – Sierra runs several different AI models together, with one model acting as an “observer” to make sure the agent isn't straying into risky territory. When something with real consequences is about to happen, Sierra uses a strength-in-numbers approach. “If you chat with a WeightWatchers agent and write a message, there are four or five different large language models applied to decide what to do,” says Taylor.
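Sierra hasn't published its architecture, but the pattern Taylor describes – several models proposing an action and a separate observer model vetoing anything off-policy – can be sketched roughly like this. Everything here (the model names, the action vocabulary, the stub functions standing in for real LLM calls) is invented for illustration:

```python
# Hypothetical sketch of a multi-model "observer" pattern: several models
# propose an action, a majority vote picks one, and a separate checker
# model gets the final say before anything consequential happens.
# All names and behaviors are illustrative, not Sierra's actual system.
from collections import Counter

def call_model(model: str, message: str) -> str:
    """Stand-in for a real LLM API call; returns a proposed action."""
    # Toy behavior: every model proposes the same safe action.
    return "offer_pause_membership"

def observer_approves(action: str) -> bool:
    """Stand-in for the observer model that blocks hallucinated
    or off-policy actions before they reach the customer."""
    allowed = {"offer_pause_membership", "answer_faq", "escalate_to_human"}
    return action in allowed

def decide(message: str,
           models=("model_a", "model_b", "model_c", "model_d")) -> str:
    # Strength in numbers: poll several models, take the majority vote.
    proposals = [call_model(m, message) for m in models]
    action, _ = Counter(proposals).most_common(1)[0]
    # The observer vetoes anything it doesn't recognize as safe.
    return action if observer_approves(action) else "escalate_to_human"

print(decide("I want to cancel my membership"))
```

The key design choice in this kind of setup is that no single model's output ever acts directly on the customer: disagreement or an unrecognized action falls back to a human.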

Because of the power, vast knowledge, and nuanced understanding of today's large language models, these digital agents can absorb a company's values and processes much like a human being – and perhaps better than some disgruntled worker in a North Dakota boiler room. The training process is more akin to onboarding an employee than feeding rules into a system. What's more, these bots are capable enough to be given some agency in responding to a customer's needs. “We found that many of our customers had one policy, and then they had another policy behind that policy, which really makes a difference,” Bavor says. Sierra's agents are sophisticated enough to know this – and smart enough not to spill the secret right away, giving customers the special deal only when they push. Sierra's goal is to shift automated customer interactions from hellish to delightful.
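The “policy behind the policy” Bavor describes can be sketched as a simple tiered escalation, where the agent reveals a fallback offer only after the customer pushes back. The tiers, prices, and function names below are all hypothetical, made up purely to illustrate the idea:

```python
# Hypothetical sketch of tiered policies: the standard offer comes first,
# and the fallback policy surfaces only if the customer pushes back.
# Tiers and wording are invented for illustration.
POLICY_TIERS = [
    "Our standard plan is $20/month.",
    "I can offer you a 20% loyalty discount.",  # the policy behind the policy
]

def respond(pushback_count: int) -> str:
    # Don't spill the secret right away: move up one tier per pushback,
    # and never past the last (best) tier.
    tier = min(pushback_count, len(POLICY_TIERS) - 1)
    return POLICY_TIERS[tier]

print(respond(0))  # standard offer
print(respond(3))  # customer kept pushing: best available deal
```

In a real agent the "pushback" signal would itself come from a language model classifying the customer's reply, not from a counter, but the escalation logic is the same.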


It was music to the ears of WeightWatchers, one of Sierra's first customers. When Taylor and Bavor told CEO Sima Sistani that AI agents could be real and trustworthy, she was surprised. But she told me the clincher was when the co-founders said that conversational AI could provide “empathy at scale.” She was in, and now WeightWatchers is using Sierra-built agents for its customer interactions.

Well, but empathy? Merriam-Webster defines it as “the action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts, and experience of another.” I asked Sistani whether it might be a contradiction to say that a bot could be empathetic. After a pause during which I could almost hear the wheels turning in her mind, she haltingly replied: “It's interesting when you put it that way, but we live in a 2D world. Algorithms are helping us determine the next connection we see and the relationship we form. As a society we have moved beyond that.” She rejects the assumption that interactions with bots can't be authentic. Of course IRL is ideal, she hastens to say, and agents are a complement to real life, not a substitute. But she won't back down from the claim of empathy.

When I press her for examples, Sistani tells me of a conversation in which a WW member said she had to cancel her membership because of difficulties she was facing. The AI agent responded tenderly: “I'm so sorry to hear that… Those difficulties can be so challenging… Let me help you with that.” And then, like a fairy godmother, the agent helped her explore her options. “We're very clear that it's a virtual assistant,” Sistani says. “But if we weren't, I don't think you could tell the difference.”