Customer service is in the hands of artificial intelligence: the difficult balance that chatbots must strike

If chatbots seem too intelligent, consumers will greet them with suspicion; but if they are too inept, the experience will put consumers off as well.

More and more companies are putting customer service functions in the hands of artificial intelligence. The trend accelerated during the pandemic, when many consumers stayed home, shopped more online, and relied far more on remote customer service, overloading companies' support departments. Faced with that workload, many companies gave more weight to their bots and other artificial intelligence (AI)-based services.

But leaving customer service, either mainly or in part, to artificial intelligence requires many new factors to be taken into account. There is the question of demographics and whether your consumers will be tech-savvy enough to interact with a chatbot rather than a real person. There is also the issue of comfort. Are your buyers comfortable talking to a robot?

Here, questions of age are not necessarily so relevant, since comfort or discomfort can be shaped by many factors. The very nature of the technology puts some consumers off. In addition, you also have to take into account how consumers react to what those chatbots are like.

As The Wall Street Journal has just pointed out, based on data from a set of studies conducted by researchers at Stanford University, companies face something of a double-edged sword. They can’t make their chatbots too smart, because that will scare consumers away.

At the same time, however, they can’t make them too basic either, because if they do, users won’t put in the effort to use them. They must find the balance, the middle ground, but that is not always easy.

What the researchers discovered
In two of the studies, participants had to interact with chatbots that were described to them in different terms, each suggesting a different degree of "personality" sophistication. The profiles ranged from "trained professionals" to inexperienced teenagers or even young children, with other profiles in between.

Afterwards, the study participants had to say whether they would use that agent again. The data made it quite clear that users gave higher scores to chatbots that were not described as supremely confident but did convey some warmth. In other words, they preferred a chatbot with the personality of a young person just starting out over that of an executive who knows everything.

A third study looked at which chatbots generated the most rejection. In general, the chances of someone using a chatbot decrease if the chatbot is perceived to be incompetent.

What should chatbots be like, then, to work for companies? The researchers believe they should be presented in a friendly manner, but without overwhelming promises of efficiency. If companies make big promises, consumers will be ruthless when the chatbot doesn't get it right, but they will accept the situation if the chatbot somehow acknowledges that it is still learning.