Consumer Attitudes Mixed on AI in Customer Service—Pega Research

While people are increasingly comfortable using AI, they still prefer human interaction in some situations and have concerns about the technology’s propensity to behave amorally.


Consumers have confidence in artificial intelligence (AI) as a tool for transforming their customer experiences but have growing concerns over AI’s increased prevalence in other disciplines, according to new research from Pegasystems Inc. (Pega, Cambridge, Mass.), a low-code platform provider serving major enterprises globally. The study, conducted by research firm Savanta (London) and presented at PegaWorld iNspire, the vendor’s annual conference in Las Vegas, surveyed 5,000 consumers worldwide on their views about AI, its continued evolution, and the ways in which they interact with the technology. The results included responses from the United States, the United Kingdom, France, Australia, and Japan.

The study found a general acceptance of AI in areas relating to customer experience, with two-thirds (67%) of respondents agreeing that AI has the potential to improve the customer service of businesses they interact with, and more than half (54%) saying companies using AI will be more likely to offer better benefits to customers than businesses that do not. Nearly half of respondents (47%) indicated they are comfortable interacting with well-tested AI services from businesses, and nearly two-thirds (64%) said they expect most major departments within organizations will be run using AI and automation within the next ten years.

Despite this, the research also highlighted a major lack of trust in AI in several areas, including:

A preference for people: Despite demonstrating an appetite for the use of AI in customer engagement, 71 percent of respondents said they would still prefer to interact with a human being rather than with AI. More than two-thirds (68%) also said they would trust a human bank employee more than an AI solution to make an objective, unbiased decision about whether to grant a loan, while nearly three-quarters (74%) admitted they would trust a medical diagnosis from a human doctor more than one made by an AI with a better track record of being right but unable to demonstrate or explain how it arrived at its decision. Meanwhile, although 51 percent said they think an autonomous car is capable of making a more ethical decision than a human driver when trying to avoid a crash, 65 percent agreed that AI should not be allowed to overrule a human driver in such a situation.

The rise of the machines: The vast majority (86%) of respondents said they feel AI is capable of evolving to behave amorally, with more than a quarter (27%) saying they think this has already happened. Almost half (48%) said it was likely that generative AI will eventually become sentient or self-conscious. Almost a third (30%) said they were concerned about AI enslaving humanity, a small increase from the 27 percent who said the same in a similar study conducted in 2019. Only 16 percent said they had no concerns over AI whatsoever.

Reality check: While the study points to increased general awareness of AI as a tool for everyday use, with more than half of respondents saying they think the technology is now responsible for producing more than half of the photos (55%) and videos (55%) they consume, concerns are building over how hard it is to tell what’s real from what’s fake, with the majority indicating some level of difficulty in determining whether content has been generated by humans or AI. Nearly two-thirds (63%) indicated they couldn’t tell whether a long-form article had been generated by AI or a human, while similar numbers said the same about photos (59%) and videos (58%). More than half (56%) said it was difficult to tell whether AI had generated the TV reports they consume.

“As applications like Midjourney and ChatGPT bring AI to the masses, it’s no surprise that we’re seeing a degree of conflict,” asserts Rob Walker, general manager, 1:1 customer engagement, Pega. “Let’s not forget, many people are already accepting the benefits this technology can bring; after all, asking Alexa or Siri a question is nothing new for most consumers. However, it’s also perhaps inevitable that as the spotlight on this technology intensifies, so does the level of fear and uncertainty around some of the more science-fiction influenced ‘doomsday scenarios’ surrounding it. As these concerns grow, the need for those organizations to demonstrate greater transparency in the outcomes these AI systems produce, and to perform ethical bias tests to check how the technology ‘behaves’ at all times, becomes clear.”

Recommendation: Use AI to Augment Human Skills

“What we’re seeing is that while people seem more comfortable than ever using AI, they’d rather hold it at arm’s length when it comes to dealing with big, impactful decisions,” Walker stresses. “Consumers are still expressing a strong desire to retain human interactions as a key part of the way they interact with organizations. What this tells us is that people are important, and that consumers want human beings in the loop at all times. The best way to embrace technologies like AI is to use them to supplement and augment existing human skills. Businesses that can do this effectively will be able to reap the benefits, keep their customers happy, and maximize their productivity.”


Anthony R. O’Donnell // Anthony O'Donnell is Executive Editor of Insurance Innovation Reporter. For nearly two decades, he has been an observer and commentator on the use of information technology in the insurance industry, following industry trends and writing about the use of IT across all sectors of the insurance industry. He can be reached at AnthODonnell@IIReporter.com or (503) 936-2803.
