The trouble with great chatbots

Customers love the support they get from most AI chatbots—until they find out they're talking to a chatbot. How can companies solve this dilemma?

Dan Tynan

The Works Contributor

Aug 15, 2024 · 6 min read

As organizations experiment with how to get the most out of their AI investments, one use case is clearly ahead of the pack: customer support. 

Nearly 9 out of 10 executives polled by IBM believe their companies will be using generative AI to interact with customers in the next two years. Investment in gen AI support solutions is growing at around 25% per year and is on track to exceed $3 billion by 2033.

Automating different facets of customer support doesn’t just reduce costs; it has the potential to deliver a superior customer experience. The most advanced AI chatbots, in fact, can even seem more empathic—more human, if you will—than flesh-and-blood service agents. 

Case in point: a recent study by researchers at the University of Southern California found that AI chatbots were better at detecting customers’ emotions than human support staff. By offering more emotional support (and fewer helpful-but-less-empathetic suggestions), the bots made customers feel heard in a way that the service reps didn’t. According to the study, customers “reported a higher sense of hope, reduced distress, and decreased discomfort from reading a response actually generated by AI.”

But there's an enormous fly in this ointment: When the test subjects were told that the sympathetic support rep they were chatting with was AI, their ratings flipped: They felt less heard by the robots.


It's not an isolated result. A team of European researchers reached a similar conclusion: Even when bots provide service of identical quality, customers' perception of the experience is negative, in part because they feel like companies are deploying bots to cut costs, not to improve service. And 64% of consumers polled by Gartner say they don't want companies to use bots for customer service.

This puts support operations managers in a pickle. How do you take advantage of the efficiencies of generative AI without alienating your customers? The answer may lie in understanding precisely where AI reaches its limits and when it's time for people to take the wheel.

The AI-agent handoff

Frustration with automated support systems predates gen AI. Nearly 80% of consumers admit to screaming "Representative!" or "Agent!" at interactive voice response (IVR) systems, according to the 2024 Achieving Customer Amazement (ACA) survey.

Other studies show that when customers are already unhappy about a product or service, dealing with a bot that's pretending to be a person only makes them angrier.

But as technology improves and more intelligent chatbots replace their primitive predecessors, people will get more comfortable with automated customer service, notes Shep Hyken, a bestselling author and speaker on customer experience who published the ACA study.

"It's all about finding the right balance between technology and humans," he says. "You don't really need an empathic bot to tell you if your shipment will arrive tomorrow or your check cleared. Just have the bot give the answers and thank them."


How can service managers determine the ideal handoff point? The answer may lie in creating different service tiers—some strictly handled by AI, others by a tandem of people and machines—and routing customers based on the complexity of their problems, the importance of the customer relationship, or their level of frustration.

Simple intent-based questions such as “Where are you located?” or “What hours are you open?” should be left to bots, while queries involving sensitive customer data should be handled by humans, advises Kelwin Fernandes, CEO of NILG.AI, an AI consultancy that has built customer support assistants for multiple clients.

"We always suggest keeping a human in the loop for high-ticket items where a customer is buying an experience in addition to a product," he adds.
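The tiered routing Fernandes describes can be sketched in code. This is a minimal illustration, not any vendor's product: the intent labels, dollar threshold, frustration score, and tier names are all hypothetical assumptions, and the intent and sentiment classifiers are assumed to run upstream.

```python
# Hypothetical tier routing for support tickets, per the advice above.
# Intent labels, thresholds, and tier names are illustrative assumptions.

SELF_SERVICE_INTENTS = {"store_hours", "store_location", "order_status"}
SENSITIVE_INTENTS = {"account_data", "billing_dispute", "loan_details"}

def route_ticket(intent: str, order_value: float, frustration: float) -> str:
    """Return which service tier should handle a ticket.

    intent       -- label from an upstream intent classifier (assumed)
    order_value  -- monetary value of the purchase, in dollars
    frustration  -- 0.0-1.0 score from a sentiment model (assumed)
    """
    if intent in SENSITIVE_INTENTS or frustration > 0.7:
        return "human"                 # sensitive data or an upset customer
    if order_value > 1000:
        return "human_with_copilot"    # high-ticket item: human plus AI assistant
    if intent in SELF_SERVICE_INTENTS:
        return "bot"                   # simple FAQ-style questions
    return "bot_with_escalation"       # bot answers first, human standing by

print(route_ticket("store_hours", 20.0, 0.1))    # → bot
print(route_ticket("loan_details", 500.0, 0.2))  # → human
```

The point of the sketch is the ordering: sensitivity and frustration override everything else, relationship value comes next, and only clearly simple intents go to the bot alone.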

A multitiered solution can be both more cost-effective for companies and better for customers. For example, several years ago, financial services CRM platform Total Expert was drowning in support tickets. Customer service reps often needed more than a week to answer simple customer questions about mortgages and other financial products. 

After implementing the Freshdesk service platform, the company created a two-tiered support system accessible via a web portal. Lower-tier customers could receive answers from an enhanced knowledge base or ask questions directly of Freddy AI, an intelligent digital assistant. High-priority customers were routed directly to a human agent assisted by Freddy Copilot, a gen AI app that gave them a complete view of each customer's history and helped shape the tone and content of their responses.

Today, 40% of the company's tickets are handled by AI chatbots, and the average interaction time is under 20 minutes. This combination of AI plus humans has slashed Total Expert's support costs by more than $120,000 each year.


Full disclosure, or not?

These are early days of gen AI bots, and companies are still grappling with lots of questions. A key issue is how to escalate from AI to a human when the chatbot isn't providing satisfactory answers, notes Dave Stubberfield, co-author of “Supercharging the Customer Experience: How Organizational Alignment Drives Performance.” 

Stubberfield says he's seen early versions of bots that can detect when customers are unhappy and automatically route them to human agents, but companies still have to figure out how to manage these handoffs.

"If the chatbot says: 'We're not getting anywhere here—I'm going to transfer you to someone' and you end up waiting in another queue for five minutes, you won't be very happy," he adds. "That's when a lot of people will realize, 'Oh, I've been chatting with a bot this whole time.'"
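One way to avoid the cold transfer Stubberfield warns about is to carry the conversation's context across with the handoff. The sketch below is a hypothetical illustration of that idea; the threshold values, field names, and escalation logic are assumptions, not a description of any real platform.

```python
# Hypothetical frustration-triggered handoff that carries the chat transcript
# with it, so the human agent doesn't restart the conversation from zero.
# All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass, field

FRUSTRATION_THRESHOLD = 0.7   # assumed cutoff from a sentiment model
MAX_FAILED_ANSWERS = 2        # escalate after this many unresolved turns

@dataclass
class Conversation:
    transcript: list = field(default_factory=list)
    failed_answers: int = 0

def should_escalate(convo: Conversation, frustration: float) -> bool:
    """Escalate when the bot is stuck or the customer is clearly upset."""
    return (convo.failed_answers >= MAX_FAILED_ANSWERS
            or frustration >= FRUSTRATION_THRESHOLD)

def handoff(convo: Conversation) -> dict:
    """Package everything the human agent needs, instead of a cold transfer."""
    return {
        "transcript": convo.transcript,  # full history, so nothing is repeated
        "priority": "high",              # skip the generic five-minute queue
        "reason": "bot_escalation",
    }

convo = Conversation(
    transcript=["Where is my refund?", "I couldn't find that order."],
    failed_answers=2,
)
if should_escalate(convo, frustration=0.4):
    ticket = handoff(convo)
```

The design choice worth noting is that the escalated ticket jumps the queue and includes the transcript: the customer neither waits again nor repeats themselves, which is precisely where the handoffs described above break down.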


Then there's the question of disclosure. Given that many people react negatively when they know they're being helped by a robot (even a competent one), should companies even tell them? 

Many organizations may not have a choice. The European Union's Artificial Intelligence Act requires companies to inform people when they are interacting with AI. In the U.S., only Utah has passed a law requiring companies to disclose their use of AI chatbots, but more regulations are likely. 

"Generally speaking, the use of AI needs to be clarified," says Christos Makridis, an associate professor at Arizona State University who has co-authored multiple papers on ethical applications of AI. "If you're going to be talking with a chatbot, that should be clear. But in reality, some companies that use AI don't share that information." 

Makridis adds that the recent downward trend in customer satisfaction scores may be due in part to poor experiences that consumers often have with chatbots, combined with the difficulty of reaching human agents who can help. 

A better way to avoid bot blowback is to create service bots that people like, argues a quartet of European AI researchers in a recent Harvard Business Review article. Their prescriptions include taking advantage of large language models' ability to mimic natural language, focusing on tasks where bots clearly outperform humans, and sharing the economic benefits of service automation by providing incentives, such as discounts or preferential treatment.

"Customers will benefit from the faster service and more seamless interactions AI chatbots will provide," says Emmanuel Probst, global lead for brand growth and thought leadership at Ipsos North America. "But keep in mind that no AI bot actually understands emotions. Whether you want to talk to one depends on your expectations for the experience and how mission-critical it is for you."