
Blog: The Moral Dilemma of AI in Customer Engagement and Feedback


Artificial Intelligence (AI) is no longer just something we hear about in science fiction. In the last ten years, the field has made so much progress that we now live in a world where the technologies all around us make use of AI. We use AI every day: it’s not only in our smartphones, laptops, and cars, it’s everywhere.

The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages, covers some of its most common uses. Over the last few years, AI has entered the consciousness of every industry, and businesses of all shapes and sizes are considering it to solve real business problems. This article focuses on the ethical aspects of how some of these businesses use AI, specifically in the domain of customer engagement, such as collecting feedback through help lines.

One of the first things we would like to examine in more depth, when it comes to the moral dilemma of using AI in companies, is whether the customer or end user exposed to this technology even knows that what they are interacting with is not human.

It is unethical for companies to promise human-level engagement and then replace it with automation and AI bots. Put more precisely, it should be considered unethical because delivering a service or good different from what a business promised is morally wrong.

However, if customers are actually talking to AI and are satisfied, even though they did not know they were not talking to a human, should that be considered unethical? If customers are tricked into talking to AI, thinking it is human, but receive more attentive and personal care that solves their problems, is that better, and can it slide across the moral line to be considered ethical? Can AI, in some cases, provide more attentive care than humans for engagement and feedback?

Artificial intelligence can monitor and analyze millions of data points, finding patterns and discrepancies across engagement scores and trends over time, as well as disparate sources of data. Further, the system learns over time from the customer interactions making more personalized and accurate predictions, so customers can have more effective and personal conversations about the future, rather than the past.

For the remainder of this article we will assume that the cost of paying a human wages and training them is considerably higher than training an AI to get the job done, which is not an unfair assumption given the rise of cheap cloud services. According to one report, chatbots currently account for business cost savings of $20 million globally. Findings from analysis firm Juniper Research show that chatbots are expected to trim business costs by more than $8 billion per year by 2022. This would in turn mean that a company can choose to serve more customers at a given time and be more efficient by using AI over humans, because costs would go down.

Now to the real question: is that really best for both parties, the company and the end user? Just because chatbots save money for the business and in some cases provide better service than humans, is it okay to replace human interaction with AI? We really don’t know, and this article aims to answer that, find the most ethical and moral way of implementing AI technology in customer engagement, and identify any other key stakeholders through analysis.

According to a study conducted by the Capgemini Research Institute, an increasing number of consumers are what we call “AI-aware”: close to three-quarters (73%) say they are aware of having interactions enabled by artificial intelligence. Examples include chatbots for customer service, facial recognition for consumer identification, and voice conversation via a smart speaker or smartphone. The study went further: 69% of these AI-aware consumers were satisfied with their AI-enabled interactions. Figure A visualizes the benefits experienced by customers who knew they were talking to AI and why they preferred it to dealing with a human.

Let us now try to define this problem using ethical frameworks. The stakeholders who are the subject of this evaluation actually number more than two: the company implementing AI to achieve these specific tasks, the end users who are supposed to receive the service from these companies, and lastly the people who would be doing this job if it were not done by AI. Therefore, there are three groups of stakeholders. The framework that fits this dilemma is the utilitarian approach, which defines the most ethical action as the one that provides the greatest amount of good for the largest number of people, or, in more dire circumstances, the least amount of harm.

The utilitarian approach can only be applied once we have a more detailed overview of the problem, so let’s get to exactly that. The Capgemini Research Institute’s study also addresses how consumers want to know when they are talking to an AI-enabled system and not a human. It found that two-thirds of consumers (66%) would like to be made aware when companies are enabling interactions via AI. This is especially true in the financial services sector, where over 71% of consumers would like to be informed. “I think you always need to be told,” said a US focus group participant.

A German focus group participant added, “Organizations should be clear whether it’s a computer or a real person that we are interacting with. Otherwise there’s no trust if you think you were speaking to a real person the whole time or, if you found out later, then you feel foolish.” What does this mean? It means that two out of three people from the study did not like not knowing whether they were talking to a human or an AI. They want to be told when they are talking to a non-human, because otherwise they feel they are being tricked, and their interaction might differ completely when talking to an AI instead of a human. Therefore, this goes in the cons column for using AI without telling end users they are engaging with one, as the end-user stakeholders clearly do not like it.

Another study analyzed how communication changes when people communicate with artificial intelligence, such as a chatbot, as opposed to with another human. The method compared 100 human–human instant messaging conversations with 100 human–chatbot conversations, measuring variables such as words per message, words per conversation, messages per conversation, word uniqueness, and use of profanity, shorthand, and emoticons.
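To make those variables concrete, here is a minimal sketch of how such metrics could be computed; the function name, the uniqueness definition (a simple type-token ratio), and the toy conversations are illustrative assumptions, not taken from the study itself:

```python
def conversation_metrics(conversations):
    """Compute simple linguistic metrics over a list of conversations.

    Each conversation is a list of message strings.
    """
    all_words = []
    total_messages = 0
    for convo in conversations:
        total_messages += len(convo)
        for msg in convo:
            all_words.extend(msg.lower().split())
    n_convos = len(conversations)
    return {
        "messages_per_conversation": total_messages / n_convos,
        "words_per_message": len(all_words) / total_messages,
        "words_per_conversation": len(all_words) / n_convos,
        # "uniqueness" here: distinct words as a share of all words
        "word_uniqueness": len(set(all_words)) / len(all_words),
    }

# Toy comparison: a chatty human-style exchange vs. terse bot-style input
human_style = [["Hi, how are you doing today?", "I need help with my recent order."]]
bot_style = [["hi", "order help", "status", "thanks"]]
print(conversation_metrics(human_style))
print(conversation_metrics(bot_style))
```

Run over the two corpora in the study, metrics like these are what the multivariate analysis of variance discussed next would compare.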

A multivariate analysis of variance indicated that people communicated with the chatbot for longer durations (but with shorter messages) than they did with another human. Additionally, human–chatbot communication lacked much of the richness of vocabulary found in conversations among people, and exhibited greater profanity. What does this mean? Does it mean companies are spending more time on these engagements because they use AI instead of humans? If so, then our assumption that using AI lets us serve more end users at a time might not actually hold. Yes, AI could increase the number of end users served, but quality matters as well, and it seems to be dropping. The increase in quantity might be a pro, but the drop in quality is a big con when evaluating from the company’s standpoint as a key stakeholder.

Let us now narrow in on the role of emotions in these customer engagement situations and whether humans are better suited to the job because AI might not be able to detect or mimic emotions. This matters as we try to understand why the quality of interactions may have dropped when users were talking to AI instead of humans. If emotions are an important factor, and AI’s inability to show them is hurting engagement, then that is a con for the users as well. Research on the potential differences between computer-mediated communication and face-to-face interaction, with respect to the communication of emotion, concluded that emotions are abundant in computer-mediated communication and are definitely a key factor. AI might not be sensitive to urgent responses or expressions of emotion such as desperation or anger, which means a chatbot might not cater to your needs based on your emotions, something that can be crucial to providing the right response for a specific service.

There are also studies arguing that if companies choose AI, the aim of designing chatbots should be to build tools that help people, facilitate their work and their interaction with computers using natural language, but not to replace the human role totally or imitate human conversation perfectly. What does this mean? Judging from the other studies discussed in this article, they may be highlighting that simply using AI to replace humans completely might not work for customer engagement: there is a quality aspect of the service, tied to feelings and emotions, that AI does not yet have and that companies need in order to ensure proper results. There can also be unintended results, such as an AI trained to treat ending as many customer interactions as possible as success, in order to save resources, learning by mistake that one way to do that is to decrease quality.

Another framework that can be used to evaluate this ethical dilemma is ‘The Fairness or Justice Approach’, the idea that all equals should be treated equally. Today we use this idea to say that ethical actions treat all human beings equally, or, if unequally, then fairly, based on some defensible standard. We pay people more based on harder work or a greater contribution to an organization, and call that fair. But how can we establish equality between automation and humans in order to compare them? Where would the fairness approach lead us here? How can we evaluate whether AI is being treated fairly? Should it even be a key stakeholder? And how can we ensure equal treatment of the company and the end users when their desired outcomes differ so much that they contradict each other?

Now that we have analyzed the problem, let us dig deeper and come up with different alternatives that ensure efficient customer engagement while also making sure, using the utilitarian framework, that the greatest good is achieved by all parties. Some possible solutions are:

1) We could hire more humans, train them to talk to these customers, and eliminate AI altogether. However, that runs into cost: we are assuming throughout this article that an AI tool for talking to your customers is considerably cheaper than paying human employees. We cannot simply go with this decision, because the companies would be subjected to a less desirable outcome than the customers and the workers, as we would be reducing efficiency and increasing costs. This cannot be the greatest good achieved.

2) We could use AI for all sorts of customer engagement and simply tell customers that the company is using AI technology, so that there is no chance of deceit, and set standards for how good the AI needs to be to produce the outcomes customers desire. This could be a possible solution, but then people who do not like AI as a solution and need more personal, user-specific attention will be hampered. The people who would originally have handled user engagement instead of AI also lose their jobs. This, therefore, also cannot be the greatest good achieved.

3) We could have a mixture of AI and humans for user engagement, hitting a sweet spot between automation and personal care. Customers would be made aware when they are talking to AI and when they are talking to a human, so there is no deceit. People would also be given the chance to skip the automation process and speak specifically to a human if they so choose. Companies should also set standards to make sure the automation and AI component is good enough to provide the service. This way the company can automate and save resources while also keeping a human component, so workers stay employed and end users receive the best care. This is the best outcome, as it serves the greatest amount of good across all parties.
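The third option amounts to a simple routing policy: disclose AI up front, let customers opt out, and escalate to a human whenever the bot falls below a quality bar. A minimal sketch follows; everything here (the class, the threshold, and the placeholder confidence function) is an illustrative assumption, not a production design:

```python
from dataclasses import dataclass

@dataclass
class Request:
    text: str
    prefers_human: bool = False  # customer chose to skip automation

# Illustrative quality bar the AI component must clear to handle a request
BOT_CONFIDENCE_THRESHOLD = 0.8

def bot_confidence(text: str) -> float:
    """Placeholder for the chatbot's confidence on a request.

    A real system would use the model's own scoring; a keyword
    check stands in for it here.
    """
    return 0.9 if "order status" in text.lower() else 0.3

def route(request: Request) -> str:
    # No deceit: every AI interaction is disclosed to the customer.
    if request.prefers_human:
        return "human"  # customers may always opt out of automation
    if bot_confidence(request.text) >= BOT_CONFIDENCE_THRESHOLD:
        return "bot (disclosed as AI)"
    return "human"  # below the quality bar, escalate to a person

print(route(Request("Where is my order status?")))
print(route(Request("I want to dispute a charge.")))
print(route(Request("Anything at all", prefers_human=True)))
```

The design choice worth noting is that the human path is the default: the bot only handles a request when it both clears the standard and the customer has not opted out.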

Summing up, this article concludes that as much as AI is a new and innovative way of reaching efficiency, the most ethical approach to implementing it in companies for customer engagement should involve transparency about who customers are talking to, when interacting through a chatbot for instance.

There should be a mixture of both artificial and human intelligence in these setups. There should also be specific standards the AI is held against to ensure it is good enough to provide the service, just as people hired for jobs such as providing feedback or help for a service go through training and a selection process such as interviewing. Customers should also be given the chance to talk to humans if they wish. These conclusions were drawn after a detailed analysis that identified the key stakeholders, considered alternatives, and applied the utilitarian approach, and they would result in the greatest good being achieved by all parties involved.

Sources:
1. The Secret to Winning Customers’ Hearts with AI: Add Human Intelligence. Capgemini Research Institute, 2018. https://www.capgemini.com/wp-content/uploads/2018/07/AI-in-CX-Report_Digital.pdf

2. Hill, Jennifer, W. Randolph Ford, and Ingrid G. Farreras. “Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations.” Computers in Human Behavior 49 (2015): 245–250. https://www.sciencedirect.com/science/article/pii/S0747563215001247

3. Derks, Daantje, Agneta H. Fischer, and Arjan ER Bos. “The role of emotion in computer-mediated communication: A review.” Computers in human behavior 24.3 (2008): 766–785. https://doi.org/10.1016/j.chb.2007.04.004

4. Shawar, Bayan Abu, and Eric Atwell. “Chatbots: are they really useful?” LDV Forum, Vol. 22, No. 1, 2007. http://www.jlcl.org/2007_Heft1/Bayan_Abu-Shawar_and_Eric_Atwell.pdf

5. Chatbots expected to cut business costs by $8 billion by 2022. CNBC, 2017. https://www.cnbc.com/2017/05/09/chatbots-expected-to-cut-business-costs-by-8-billion-by-2022.html
