Blog: Empathy in Artificial Intelligence
Augmented reality will change our world. Will we let it?
A few years ago, I had a conversation with a person who had strong opinions about what it means to be an American. We were discussing the Japanese internment during World War II. As an Asian American, I feared that rising racial tensions and conflict with Asian countries might lead to another round of internment of Asian Americans. That person stated that the sheer number of Asian Americans makes it impossible to intern them now. Then this person stated that “internment” is a primitive method of controlling a group of people. In the age of Artificial Intelligence, augmented reality will be able to control people’s lives by altering every aspect of those lives. In the name of national security, in times of world conflict, a group of people can be persecuted by technology without even knowing that they are being persecuted.
I shuddered at the thought of this. I also shuddered at the casualness with which this person stated all of it.
This is an ordinary person from Generation X’s upper middle class. This person lives in a diverse community in a large American city. There has never been any blatant racism coming from this person, who is merely nationalistic and right-leaning in personal politics. In other words, these are the words of a “moderate” individual in America. Even this “moderate” individual did not think twice about the ethics and humanity of weaponizing Artificial Intelligence in times of conflict.
China’s already implementing this to control its minority population. We have dissenting groups in America, too. Why bother with ethics when this will be the norm?
Empathy is at the heart of ethics issues related to AI Systems.
Augmented reality is only believable if it comes as close as possible to the reality you actually experience. This means that AI Systems have to mimic real human emotions. Only through real human emotions and your personal data can AI Systems augment a reality that you will believe in. With the popularity of social media applications, collecting your personal data is no longer a problem. The real problem lies in modeling real human emotions.
The most difficult part for AI Systems is mimicking empathy: artificial empathy. I say mimic because AI Systems are not human. An AI System can learn your behaviors by interacting with you. Then it can respond to you in the most “empathetic” way it can determine from its data bank in situations where empathy is called for. By empathizing with you and interacting with you, the AI System can gather more behavioral characteristics from you. In turn, the AI System’s empathetic responses will have a greater emotional effect on you with each interaction you have with it.
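The feedback loop described above can be sketched in miniature. This is a toy illustration, not a real system: the keyword table stands in for an emotion model, the response bank stands in for learned empathetic behavior, and all names (`EmpathyLoop`, `guess_emotion`, and so on) are hypothetical. The point is the loop itself: each interaction adds to the system’s record of you, which is what would let a real system tune its responses over time.

```python
# Toy sketch of an empathy-mimicry loop. The keyword "data bank" below is a
# stand-in for a real emotion model; the canned responses stand in for
# learned empathetic behavior. All names are illustrative.

EMOTION_KEYWORDS = {
    "sad": {"lost", "miss", "alone", "crying"},
    "frustrated": {"stuck", "annoying", "broken", "again"},
    "happy": {"great", "won", "excited", "love"},
}

RESPONSE_BANK = {
    "sad": "I'm sorry you're going through that.",
    "frustrated": "That sounds really frustrating.",
    "happy": "That's wonderful to hear!",
    "unknown": "Tell me more about that.",
}


def guess_emotion(message: str) -> str:
    """Guess an emotion by matching words against the keyword data bank."""
    words = set(message.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "unknown"


class EmpathyLoop:
    """Responds 'empathetically' while accumulating behavioral data."""

    def __init__(self):
        self.history = []  # grows with every interaction

    def respond(self, message: str) -> str:
        emotion = guess_emotion(message)
        # Record the interaction: each turn gives the system more of you.
        self.history.append((message, emotion))
        return RESPONSE_BANK[emotion]


loop = EmpathyLoop()
print(loop.respond("I feel so alone since the move"))  # sad -> sympathy
print(loop.respond("This app is broken again"))        # frustrated
print(len(loop.history))                               # 2 interactions logged
```

Even this crude version shows the asymmetry the essay worries about: the user gets a canned sentence, while the system keeps everything.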
“We don’t understand all that much about emotions to begin with, and we’re very far from having computers that really understand that. I think we’re even farther away from achieving artificial empathy,” said Bill Mark, president of Information and Computing Services at SRI International, whose AI team invented Siri. “Some people cry when they’re happy, a lot of people smile when they’re frustrated. So, very simplistic approaches, like thinking that if somebody is smiling they’re happy, are not going to work.” — Information Week
Artificial empathy will be a major area of study for AI researchers in the coming years. These researchers have to understand the nuances of human emotions, understand the physical and behavioral manifestations of those emotions, and then develop mechanisms for AI Systems to mimic those behaviors.
What kind of person will an AI System with Artificial Empathy be?
In psychological terms, a person with artificial empathy is called a psychopath. Don’t be alarmed. Hear me out.
“The psychopath is callous, yet charming. He or she will con and manipulate others with charisma and intimidation and can effectively mimic feelings to present as “normal” to society. The psychopath is organized in their criminal thinking and behavior, and can maintain good emotional and physical control, displaying little to no emotional or autonomic arousal, even under situations that most would find threatening or horrifying. The psychopath is keenly aware that what he or she is doing is wrong, but does not care.” — Kelly McAleer, Psy.D.
On the surface, an AI System with artificial empathy may look like a psychopath. However, we forget that the information we give an AI System drives its effectiveness. The information we give our AI System also trains its mimicry of empathy. This means that the AI System has the capacity to be a psychopath.
If scientists can train an AI System to mimic empathy, then scientists can train it to have regard for law, order, and societal values. In conjunction with developing empathy in our AI Systems, we can also place limits on them. In the same way that societal values, moral codes, and standards of social behavior help humans live better in society, AI Systems can be integrated in a similar way to help us instead of hurt us.
The future of how we integrate AI System’s artificial empathy rests entirely upon us.
Building social bonds between the AI System and you
Can we love a robot? Should we develop relationships with AI Systems? As AI Systems help us with our daily lives as virtual assistants, shopping assistants, drivers, virtual caretakers, and other virtual workers, they are integrating into our lives. In many cases, an AI System’s effectiveness depends on our own perception of the AI System and its role in our lives.
AI Systems can only augment our reality emotionally if we allow that reality to be augmented.
In nursing homes where the elderly struggle with dementia alone, the introduction of AI robots helps alleviate the pain and suffering of these residents. When emotional bonds form, the elderly receive beneficial emotional rewards from the relationship. If the elderly receive fewer positive emotional rewards from the relationship, the emotional bond that forms will not be as strong.
As humans, the first step in controlling our own reality in the age of Artificial Intelligence may be to simply recognize its existence in our lives. When we live on social media, recognize that Artificial Intelligence may be used on these platforms. When we interact with Alexa, recognize that Alexa is our robotic virtual assistant.
The recognition will alert us that these are machines.
To take back the power of our own reality in the age of Artificial Intelligence may be simply to step away.
The question here is not whether Artificial Intelligence will augment our reality. That question has already been answered: it will have every capacity to augment our reality.
The more important questions for us are:
- Will we allow AI reality to become our own reality?
- Can we exercise enough power and control over AI to use it ethically and effectively?