Business Information

Technology insights for the data-driven enterprise

Chatbot technology raises ethical questions

New AI platforms have given life to chatbots that decipher customer intents so well, chances are you've carried on an email conversation with one and were never the wiser.

In the course of reporting on the use of artificial intelligence in business applications, I visited Conversica's website to request a live demo of its chatbot technology. I filled out the online form, and a friendly sales assistant named Rachel Brooks followed up via email to provide me with more information. When I didn't respond, persistent as salespeople can be, she sent another email:

Subject: Rachel Brooks with Conversica following up

Hi Bridget,

I want to follow up on the email I sent you to see if we could answer any questions you might have about our artificial intelligence lead engagement software. Would you be interested in setting up a call for more information?

Thank you very much,

Rachel Brooks | Sales Assistant

I mentioned to Rachel that I'd spoken with the company's head of product marketing, Gary Gerber, and Rachel and I moved on with our lives; mine real, hers virtual.

"As you suspect, Rachel is, in reality, my Conversica automated assistant," Gerber explained. "She made my sales team more efficient; my marketing team more effective; and, most importantly, made sure that you, the customer, had a great experience. Everybody wins. What more can one ask from automation, right?"

Wait, that wasn't a real person?

Chatbots like "Rachel Brooks" are being put to work in more and more companies as they seek ways to free up customer service agents as well as marketing and sales staff to focus on more complicated tasks. And use of artificial intelligence (AI) technology in these and other applications is only expected to increase; Gartner predicts more than 85% of customer interactions will be managed without a human by 2020.

The rising use of chatbot technology makes sense; it works well, and it's available 24/7. Still, when I found out I'd been interacting with a bot -- all the while thinking she was a living, breathing person -- I felt duped. And I'm not alone. One tech industry analyst who suspected he was having an email conversation with a bot because "the speed of response and use of language suggested inhuman efficiency" wondered whether legislation needs to be drafted to force companies to identify bots. "If only so I knew when to rant and when to persuade people," Simon Bramfitt wrote on Twitter.

That's a good point, but the idea of legislation gives some industry insiders pause. David Vandegrift, a venture capitalist who invests in AI technology startups and hosts the Chicago Artificial Intelligence Practitioners and Investors Meetup group, resists the idea of AI legislation because it could negatively impact research, especially around the way people act toward AI. He pointed out that various forms of "AI" have long been used in customer service. If you call your cable provider or bank, your call is routed using an interactive voice response (IVR) system.

"That [IVR] experience is unnatural and frustrating, and there's no confusion about it being a bot," Vandegrift said. "But when we get to a point where the AI experience is so natural you never realize it's a bot, I don't think people will mind."

He sees how customers may feel tricked when they first realize they've been interacting with a chatbot instead of a human. But after the initial reaction, he believes sentiment will move the other way and people may prefer chatbots over live agents. After all, interactions with a human can be pretty painful as well, he added.

Don't con the consumer

That said, consumers expect a level of trust, and when they find out a company is using a machine to interact with them, they could feel betrayed and may even turn against the brand, Vandegrift acknowledged, adding that companies "may want to be transparent."

I asked Conversica's Gerber if he sees an ethical issue with giving customers the false impression they're interacting with a real person. "Perhaps there is an ethical question there, but people are getting the information they need. And in the end, they do get to talk to a person," Gerber said.

Further justifying it, he added that virtual assistants sometimes do a better job than live agents. One customer who didn't receive a prompt follow-up phone call from a sales manager recently suggested perhaps the virtual assistant he interacted with should be the manager, according to Gerber. "There could be a time when people prefer talking to a bot because they'll get what they need," he said.

To be fair, companies aren't trying to use chatbots to fool customers.

Consider ATMs. When they emerged, many balked at the idea of feeding their money into a machine and preferred to interact with a bank teller. Now, ATMs are the norm, and, unlike tellers, they're available day and night to handle transactions.

Susan Zaney, former vice president of marketing at KnowledgeVision, an online business presentation platform provider, relied on Conversica's chatbot technology to follow up on leads. The company's virtual assistant, named "Caitlyn Kelly," interacts with customers using natural language so convincingly that customers apologize to Caitlyn for not getting back to her right away, Zaney said. "We even created a phone line and voicemail for her because we had so many people call asking for Caitlyn Kelly," she noted. "Someone suggested we create a LinkedIn profile for her -- but that's where I drew the line." (Messages left for Caitlyn get a response from a live sales representative, she said.)

Avatars in 3D

While many companies take a similar tack -- naming their chatbots and presenting them as human service agents -- others portray chatbots as digital avatars, and it's clear those aren't real people, explained Jordi Torras, founder and CEO of Inbenta, which uses its own natural language processing (NLP) software to provide AI enterprise search and chatbots. "We have cool 3D avatars, and some of our customers name them, but they look digital; they aren't pretending they are human," Torras said. He added that chatbots should identify as such when asked, and he doesn't think chatbots should be presented as people.
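Torras's rule -- a chatbot should identify itself as a bot when asked -- is simple to build into a conversation loop. The sketch below is purely illustrative; the trigger phrases, reply text and `handle_normally` placeholder are my own assumptions, not Inbenta's implementation.

```python
# Illustrative sketch of a chatbot that discloses its nature when asked.
# Trigger phrases and wording are assumptions, not any vendor's actual code.

DISCLOSURE_TRIGGERS = ("are you a bot", "are you human", "are you a real person")

def handle_normally(message: str) -> str:
    # Stand-in for the bot's usual NLP pipeline.
    return f"Let me look into that: {message}"

def reply(message: str) -> str:
    """Disclose bot status when the user asks; otherwise answer normally."""
    normalized = message.lower().strip("?!. ")
    if any(trigger in normalized for trigger in DISCLOSURE_TRIGGERS):
        return "I'm a virtual assistant, not a human. How can I help?"
    return handle_normally(message)
```

Keyword matching like this is crude; a production system would route the question through its intent classifier, but the policy is the same: one intent is reserved for "are you a bot?", and its answer is always honest.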

Nor are companies trying to use chatbot technology to build the kind of personal relationships that live marketing and salespeople create. A chatbot's job is to act as an intermediary to provide assistance or hand customers off to the appropriate person, Zaney said. "If you've wanted a meeting with us, and we ignored you before," she explained, "getting a call after an interaction with Caitlyn means you got what you needed."

And over time, as interacting with bots becomes the norm, consumers will stop making the human assumption and learn to distance themselves emotionally from computer-generated conversation platforms, Inbenta's Torras said.

That's not to say chatbots will become the default method for all customer follow-ups. Even Inbenta still relies on live sales reps to follow up on business inquiries. As one of the company's business development managers who followed up on my recent inquiry said, "while the bots are very useful, some things still [work] well with a human touch."

Whether or not you like the idea of interacting with a chatbot, it's a vast improvement over the frustrating IVR systems we've become accustomed to. Inbenta claims its NLP technology resolves 90% to 97% of customer service questions. When the technology can't answer a question, the system automatically routes the customer, along with a transcript of the bot's conversation, to a live agent, Torras said.
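The handoff pattern Torras describes can be sketched in a few lines: if the bot isn't confident it can answer, the customer escalates to a live agent together with the full transcript. Everything below is a hypothetical illustration; the threshold, the toy confidence function and the data shapes are my assumptions, not Inbenta's design.

```python
# Hypothetical sketch of bot-to-agent escalation with transcript handoff.
# Threshold and confidence scoring are illustrative assumptions.

from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff for "the bot can answer this"

@dataclass
class Session:
    transcript: list = field(default_factory=list)  # (speaker, text) pairs

def answer_confidence(question: str) -> float:
    # Stand-in for a real NLP model's confidence score.
    return 0.9 if "hours" in question.lower() else 0.2

def handle(session: Session, question: str):
    """Answer if confident; otherwise escalate with the whole transcript."""
    session.transcript.append(("customer", question))
    if answer_confidence(question) >= CONFIDENCE_THRESHOLD:
        reply = "We're open 9 to 5, Monday through Friday."
        session.transcript.append(("bot", reply))
        return ("bot", reply)
    # Escalate: the live agent receives the full transcript for context,
    # so the customer doesn't have to repeat themselves.
    return ("agent", list(session.transcript))
```

The key design choice is that escalation carries state: the agent picks up mid-conversation instead of starting cold, which is exactly what legacy IVR systems fail to do.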

And the technology is only going to get better in years to come. "Conversations with [chatbots] will become even more natural," he predicted, "and they will be able to solve problems in a way that is superior to what people can do for you."

Gartner expects all business applications to include AI by 2020, with technologies that include natural language capabilities, deep neural networks and conversational capabilities. So it's entirely likely that you will carry on email conversations over the next few years with sales assistants who aren't people at all and never be the wiser. Legally, that's not a problem. Ethically? You be the judge.
