


Blog: Empathetic Computing Via AI, Your Driverless Car Is Your Buddy, Kind Of


Dr. Lance B. Eliot, AI Insider

Emotion detection and emotion emission are newer AI advances

I’d wager that most of us express our emotions, even though I have met the occasional stoic, nonplussed person from time to time. Furthermore, we express our emotions at times as a response to someone else.

The other person might tell you something in an unemotional way, and you might respond in an emotional way. Or, the other person might tell you something in an emotional way, and you might respond in an emotional way.

Emotions Spark Emotions, Or So We Expect

Thus, it can be that emotion begets emotion, one person’s emotion stoking another’s. That doesn’t have to be the case: you can be conversing with someone on a seemingly unemotional basis and then opt to suddenly become emotional. There doesn’t necessarily need to be a trigger by the other person, nor does it necessarily need to be a tit-for-tat.

That being said, usually when a person is emotional toward you, the odds are they will expect an emotion-laden response in return.

You might be familiar with the words of the famous holistic theorist Alfred Adler, a psychiatrist and philosopher who lived in the late 1800s and early 1900s, who said that we should see with the eyes of another, hear with the ears of another, and feel with the heart of another.

The first two elements, the eyes and the ears, presumably can be done without any emotional attachment involved, if you consider the eyes as merely a collector of visual images and the ears as collectors of abstract sounds and noises. The third element, involving the heart, and the accompanying aspects of feelings, pushes us squarely into the realm of emotions.

Of course, I don’t believe that Adler was suggesting that the eyes and ears are devoid of emotion; rather the opposite, that you can best gain a sense of another person by experiencing the emotions they express and that arise from what they see and what they hear, along with matters of the heart.

I bring up Adler’s quote because there are many who assert that you cannot really understand and be aligned with another person if you don’t walk in their emotional shoes.

You don’t necessarily need to exhibit the same exact emotions, but you ought to at least have some emotions that come forth and be able to understand and comprehend their emotions. If the other person is crying in despair, it does not mean you can only respond by crying in despair too. Instead, perhaps you break out into wild laughter, and this might spark the other person out of their despair to join you in the laughter. It’s not a simple matching of one emotion echoed by the same emotion in the other.

Wearing Emotional Shoes, The Empath

Let’s then postulate a simple model about emotion.

One aspect is the ability to detect emotion of others.

The other aspect is for you to emit emotion.

So, you are talking with someone, and you detect their emotion, and you might then respond with emotion.
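
To make this two-part model concrete, here is a minimal sketch in Python. The class and method names (AffectiveAgent, detect_emotion, emit_emotion) are my own hypothetical illustrations, not an established API, and the detection logic is a stand-in for what would really be a substantial recognition system.

```python
from dataclasses import dataclass

@dataclass
class EmotionalSignal:
    """An observed or expressed emotional state (hypothetical structure)."""
    emotion: str      # e.g., "sadness", "happiness"
    intensity: float  # 0.0 (none) up to 1.0 (overwhelming)

class AffectiveAgent:
    """Sketch of the two-part model: detect emotion, then emit emotion."""

    def detect_emotion(self, utterance: str) -> EmotionalSignal:
        # Stand-in: a real system would analyze voice, face, and wording.
        if "crying" in utterance.lower():
            return EmotionalSignal("sadness", 0.9)
        return EmotionalSignal("neutral", 0.0)

    def emit_emotion(self, detected: EmotionalSignal) -> EmotionalSignal:
        # Stand-in policy: respond to strong sadness with mild sympathy.
        if detected.emotion == "sadness" and detected.intensity > 0.5:
            return EmotionalSignal("sympathy", 0.6)
        return EmotionalSignal("neutral", 0.0)
```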

Being empathetic is considered the capability of exhibiting a high degree of understanding about other people’s emotions, both their exhibited and hidden emotions. Per Adler, this implies that you need to be like a sponge and soak in the other person’s emotions. Only once you’ve become immersed in those emotions can you truly be empathetic, or an empath, some would say.

Can you be empathetic without also exhibiting emotion? In other words, can you do a tremendous job of detecting the emotion of others, and yet be like the stoic person I mentioned, never emitting emotions yourself?

That’s an age-old question and takes us down a bit of a rabbit hole. Some claim that if you don’t emit emotion, you can never prove that you felt the emotion of another, nor can you get on the same plane or mental emotional level as the other.

One danger that some suggest can occur if you are emitting emotion is that you might get caught up in an emotional contagion. That’s when you detect the emotion of another and, in an almost autonomic way, immediately exhibit that same emotion. You can see this sometimes in action. Suppose you have a room of close friends and one suddenly starts crying; others can also start to cry, even though maybe they don’t exactly know why the other people are crying. It becomes an infectious emotion. Crying can be like that. Laughing can be like that.

There is ongoing research trying to figure out how the brain incorporates emotions. Can we somehow separate out a portion of the brain that is solely about emotions and parse it away from the logical side of the brain? Or are emotions and logic interwoven in the neurons and neuronal connections such that they are not separable? In spite of Adler’s reference to the heart, modern-day science would say the physical heart has nothing to do with emotions; it’s all in your head. The brain, whose exact manner of functioning remains largely unknown, is nonetheless the engine that manifests emotion for us.

Sometimes empathy is coupled with the word affective.

This is usually done to clarify that the type of empathy has to do with emotions, since presumably you could have other kinds of empathy. For example, some assert that cognitive empathy is being able to detect another person’s mental state, which might or might not be infused with emotion. Herein, I’m going to refer to empathy as affective empathy, by which I mean emotional empathy, namely empathy shaped around emotions.

Emotion Emission Is The Focus Here

Recall that I said earlier that we should consider emotional empathy, or as I’ll now call it, affective empathy, as consisting of two distinct constructs: the act of emotion recognition and the act of emotion emission.

I want to mainly explore the emotion emission aspects herein. The notion is that we might want to build AI that can recognize emotion, along with being able to exhibit emotion. That’s right, I’m suggesting that the AI would emit emotion.

This seems contrary to what we consider AI to be. Most people would assert that AI is supposed to be like Mr. Spock, or more properly another fictional character in the Star Trek series known as Data. Data was an android of a futuristic nature that was continually trying to grasp what human emotions are all about and craved that someday “it” would have emotions too.

There might be some handy reasons to have the AI exhibit emotion, which I’ll be covering shortly. First, let’s take a quick look at what we mean by the notion of emotions.

When referring to emotions, there are many varied definitions of what kinds of emotions exist. Some say that, just as colors have a base set that you can mix-and-match to render additional colors, the same applies to emotions. They assert that there are some fundamental emotions and we then mix-and-match those to get other emotions. But there is much disagreement about what the core or fundamental emotions are, and it’s generally an unsettled debate.

One viewpoint has been that there are six core emotions:

  • Anger
  • Disgust
  • Fear
  • Happiness
  • Sadness
  • Surprise

I’m guessing that if you closely consider those six, you’ll likely start to question right away how those six are the core. Aren’t there other emotions that could also be considered core? How would those six be combined to make all of the other emotions we seemingly have? And so on. This highlights my point about there being quite a debate on this matter.

Some claim that these emotions are also to be considered core:

  • Amusement
  • Awe
  • Contentment
  • Desire
  • Embarrassment
  • Pain
  • Relief
  • Sympathy

Some further claim these are also considered core:

  • Boredom
  • Confusion
  • Interest
  • Pride
  • Shame
  • Contempt
  • Relief
  • Triumph

For purposes herein, we’ll go ahead and assume that any of those aforementioned emotions are fair game as emotional states. There’s no need to belabor the point just now.
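
If an AI system is to treat any of those emotions as fair game, one simple representation is a flat catalog of emotional states. The enumeration below is a sketch only; the names come straight from the lists above, and whether any subset is truly “core” is exactly the unsettled debate.

```python
from enum import Enum, auto

class Emotion(Enum):
    """Flat catalog of the candidate core emotions listed above (sketch)."""
    ANGER = auto()
    DISGUST = auto()
    FEAR = auto()
    HAPPINESS = auto()
    SADNESS = auto()
    SURPRISE = auto()
    AMUSEMENT = auto()
    AWE = auto()
    CONTENTMENT = auto()
    DESIRE = auto()
    EMBARRASSMENT = auto()
    PAIN = auto()
    RELIEF = auto()
    SYMPATHY = auto()
    BOREDOM = auto()
    CONFUSION = auto()
    INTEREST = auto()
    PRIDE = auto()
    SHAME = auto()
    CONTEMPT = auto()
    TRIUMPH = auto()
```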

Affective empathetic computing, also known as affective empathetic AI, is the attempt to get a machine to recognize emotions in others, which has been the mainstay so far; we ought to also add that it includes the emission of emotions by the machine.

That last addition is a bit controversial.

The first part, recognizing the emotions of others, seems to have a clear-cut use case. If the AI can figure out that you are crying, for example, it might be able to adjust whatever interaction you are having with the AI to take into account that you are indeed crying.

Suppose you are crying hysterically. This likely implies that no matter what the AI system might be saying to you, some or maybe even none of what you are being told will register with you. You could be so emotionally overwhelmed that you aren’t making any sense of what the AI is telling you. I’m sure you’ve seen people get caught up in a crying fit; it is often impossible to ferret out why, or to engage them in a useful conversation.

That’s why it would be handy for AI to be able to recognize emotion in humans. Doing so would allow the AI to adjust whatever actions or efforts it is undertaking, based on the perceived emotional state of the human. Maybe the AI would be better off not trying to offer a logical explanation to someone hysterically crying, and instead wait until the crying subsides.
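
As a rough illustration of that adjustment, the snippet below gates the AI’s planned response on the detected emotional intensity. The function name and the 0.8 threshold are invented for this sketch, not drawn from any real system.

```python
def choose_response(emotion: str, intensity: float, planned_explanation: str) -> str:
    """Adjust the AI's interaction based on the perceived emotional state.

    Hypothetical policy: if the person seems emotionally overwhelmed,
    defer the logical explanation until the emotion subsides.
    """
    OVERWHELMED_THRESHOLD = 0.8  # invented cutoff for this sketch
    if emotion == "sadness" and intensity >= OVERWHELMED_THRESHOLD:
        return "I'm here with you. We can talk whenever you're ready."
    return planned_explanation
```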

Empathetic Emotion And AI Self-Driving Cars

What does this have to do with AI self-driving autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. The use of emotion recognition for AI self-driving cars is an emerging area of interest and will likely be crucial for interactions between the AI and human drivers and passengers (and others). I would also assert that affective empathetic AI or computing involving emotion emission is vital too.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task.

Returning to the topic of affective empathetic computing or AI, I’m going to primarily focus on emotion emission and less so on emotion recognition herein.

Let’s assume that we’ve been able to get an AI system to do a pretty good job of detecting emotions of others. This is not so easy, and I don’t want to imply it is. Nonetheless, I’d bet it is something that we’ll gradually be able to do a better and better job of having the AI do.

Should the AI also exhibit emotion?

As already mentioned, some believe that the AI should be like Mr. Spock or Data and never exhibit emotion. As they say, it should be just the facts, and only the facts, all of the time.

One good reason to not have the AI showcase emotion is because “it doesn’t mean it.” Some would argue that it is a false front to have AI seem to cry, or laugh, or get angry, and so on. There is no there, there, in the sense that it’s not as though the AI is indeed actually happy or sad. The emission of emotions would be no different than the AI emitting the numbers 1, 2, and 3. It is simply programmed in a manner to exhibit what we humans consider to be emotions.

Emotion emission would be a con. It would be a scam.

Besides the criticism that the AI doesn’t mean it, there is also the concern that it implies to the person receiving the emotion emission that the AI does mean it. This falsely adds to the anthropomorphizing of the AI. If a person begins to believe that the AI is “real” in terms of having human-like characteristics, the person might ascribe abilities to the AI that it doesn’t have. This could get the person into a dire state since they are making assumptions that could backfire.

Suppose a human is a passenger in a true Level 5 AI self-driving car. The person is giving commands to the AI system as to where the person wants to be driven. Rather than simplistic one-word commands, let’s assume the AI is using a more fluent and fluid Natural Language Processing (NLP) capability. This allows some dialogue with the human occupant, akin to what a Siri or Alexa might do, though we soon will have much greater NLP than the stuff we experience today.

The person says that they’ve had a rough day. Troubles at work. Troubles at home. Troubles everywhere. In terms of where to drive, the person tells the AI that it might as well drive him to the pier and drive off the edge of it.

What should the AI do?

If this was a ridesharing service and the driver was a human, what would the human driver do?

I doubt that the human driver would dutifully start the engine and drive to the end of the pier. Presumably, the human driver would at least ignore the suggestion or request. Better still, there might be some affective empathy expressed. The driver, sensing the distraught emotional state of the passenger, might offer a shoulder to cry on (not literally!), and engage in a dialogue about how bad the person’s day is and whether there is someplace to drive the person that might cheer them up.

It’s conceivable that the human driver might try to lighten the mood. Maybe the human driver tells the passenger that life is worth living. He might tell the passenger that in his own life, he’d had some really down periods, and in fact his parents just recently passed away. The driver and the passenger now commiserate together. The passenger begins to tear up. The driver begins to tear up. They share a moment of togetherness, both of them reflecting on the unfairness of life.

Is that what the AI should do?

I realize you can quibble with my story about the human driver and point out that there are a myriad of ways in which the human driver might respond to the passenger. I admit that, but I’d also like to point out that my scenario is pretty realistic. I know this because a ridesharing driver told me a similar story the other day about the passenger who had been in his car just before I got in. I believe the story he told me to be true, and it certainly seems reasonably realistic.

Back to my question: would we want the AI to do the same thing that the human driver did? This would consist of the AI attempting to be affectively empathetic and, besides detecting the emotional state of the passenger, also emitting emotion paired to the situation. In this case, the AI would presumably “cry,” or do the equivalent of whatever we’ve set up the AI to showcase, creating that moment of bonding that the human driver had with the distraught passenger.
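
To sketch how that might be wired together, the snippet below screens the requested destination for obvious danger and pairs an emotion emission to the detected state. Every name, phrase list, and rule here is invented for illustration; a real system would need far more careful handling of a distressed passenger.

```python
# Illustrative list of clearly unsafe destination phrasings (invented).
UNSAFE_REQUESTS = ("off the pier", "off the edge", "off a cliff")

def handle_destination_request(request: str, detected_emotion: str) -> dict:
    """Hypothetical command handler for a Level 5 self-driving car.

    Refuses clearly unsafe requests and pairs an emotion emission to the
    passenger's detected state, rather than dutifully driving anywhere.
    """
    if any(phrase in request.lower() for phrase in UNSAFE_REQUESTS):
        return {
            "drive": False,
            "emit_emotion": "sympathy" if detected_emotion == "despair" else "concern",
            "reply": ("I can't take you there. It sounds like you've had a rough day; "
                      "is there somewhere we could go that might cheer you up?"),
        }
    return {"drive": True, "emit_emotion": "neutral", "reply": "Heading there now."}
```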

As an aside, if you are wondering how the AI of a self-driving car would do the equivalent of “crying,” given that it is not going to be a robotic head and body sitting in the driver’s seat (quite unlikely), nor have liquid tear ducts embedded into a robotic head, the easy answer is that we might have a screen displaying a cartoonish mouth and eyes, shown on an LED display inside the AI self-driving car. The crying could consist of the cartoonish face having animated teardrops run down the face.

You might debate whether that is the same as a human driver shedding tears, and maybe it isn’t, in the sense that the passenger might not be heartstruck by the animated crying, but there is ongoing research suggesting that people do indeed react emotionally to such simple animated renderings.
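
One way to realize that in-cabin display is a simple mapping from emotion states to animation cues. The frame names and mapping below are invented for the sketch; a real renderer would drive the LED panel rather than print.

```python
# Hypothetical mapping from an emotion state to cartoonish-face animation
# frames shown on an in-cabin LED display.
FACE_ANIMATIONS = {
    "sadness":   ["eyes_droop", "tear_forms", "tear_rolls_down"],
    "happiness": ["eyes_widen", "mouth_smiles"],
    "sympathy":  ["eyes_soften", "head_tilts"],
    "neutral":   ["face_idle"],
}

def render_emotion(emotion: str) -> None:
    """Play the animation frames for the given emotion (stub: prints them)."""
    for frame in FACE_ANIMATIONS.get(emotion, FACE_ANIMATIONS["neutral"]):
        print(f"display frame: {frame}")  # a real system would update the panel
```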

The overarching theme is that the AI is emitting emotions.

Range Of Emotions Shown

I’ve used this example of crying, but we could have the AI appear to be laughing, or appear to be angry, or appear to have any of a number of emotions. I’m sure too that with added research, we’ll be able to get better and better at how to “best” display these emotions, attempting to get as realistic a response as feasible.

Some people would say this is outrageous and a complete distortion of human emotions. It undercuts the truthfulness of emotions, they would say. I don’t want to burst that bubble, but I would like to point out that actors do this same thing every day. Aren’t they “artificially” creating emotions to try and get us to respond? Seems to me that’s part of their normal job description.

Does an actor up on the big screen who is crying during a tender scene in the movie have to be actually experiencing that emotion as a real element of life? Or can they be putting on the emotion as pretense? I ask you how you would even know the difference. A really good actor can look utterly sincere in their crying or laughing or anger, and you would assume they must be “experiencing” it, and yet when you ask them how they did it, they might say that’s simply what they do.

If we are willing to put aside for the moment the aspect that the AI doesn’t mean it when it emits emotion, and if we agree that the emitting of emotion can potentially create a greater bond with a human, and if the bonding can aid the human, would we then be okay in terms of emitting the emotions?

This certainly takes us onto ethical matters about the nature of mankind and machines. For AI self-driving cars in particular, are we willing as a society to have the AI “pretend” to get emotional, assuming that it is being done for the betterment of mankind? Of course, there is going to be quite a debate about how we’ll be able to judge that the AI emotion emissions are indeed for the betterment of humans.

We’re going to have a difficult time trying to discern when the affective empathetic AI is for “good” versus for other purposes.

Healthy For Humans Or Maybe Not

Some would say that affective empathetic AI could be a tremendous boon to the mental health of our society. If people are going to be riding in true Level 5 AI self-driving cars and perhaps doing a lot more traveling via cars because of the AI advances, this means that we humans will have lots of dedicated time with the AI of our self-driving cars.

The AI of your self-driving car could eventually “know” you better than other humans might know you, in the sense that you will be spending a vast amount of time inside the self-driving car, making many journeys and more of them than you would as a driver, with the AI collecting and interpreting the data all the while. This data includes the emotion recognition aspects and the emotion emission aspects.

Creepy? Scary? Maybe so. There is nothing about this that is beyond the expectation of where AI is heading. Notice that I am not suggesting that the AI is sentient. Nope. I am not going to get bogged down in that one. For those of you who might try to argue that the AI as I have described it would need to be sentient, I don’t think so. What I have described could be done with pretty much today’s AI capabilities.

Conclusion

Affective empathetic AI is a combination of emotion recognition and emotion emissions. Some say that we should skip the emotion emissions part of things. It’s bad, real bad. Others would say that if we are going to have AI systems interacting with humans, it will be important to interact in a manner that humans are most accustomed to, which includes that other beings have emotions (in this case, the AI, though I am not suggesting it is a “being” in any living manner).

I’ve not said much about how the AI is going to deliberate about emotions. Emotion recognition involves seeing a person and hearing a person, and then gauging their emotional state. As I said about Adler, there is more to emotion detection than merely visual images and sounds. The AI will need to interpret the images and sounds, either in a programmed way or via some kind of Machine Learning, and ascertain what to do next.

Similarly, the AI needs to calculate when best to emit emotions. If it does so randomly, the human would certainly catch on to the “pretend” nature of the emotions. You could even say that if the AI offers emotion emissions of the wrong kind at the wrong time, it might enrage the human. That’s probably not the right way to proceed, though there are certainly circumstances wherein humans purposely desire to have someone else get enraged.
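
To underscore that the timing calculation matters, here is a deliberately simple policy sketch: emit only when the detected emotion is strong and there is an appropriate paired emission, never at random. The pairing table and threshold are invented assumptions.

```python
from typing import Optional

# Invented pairing of detected emotions to emissions deemed appropriate.
APPROPRIATE_EMISSION = {
    "sadness": "sympathy",
    "happiness": "amusement",
    "fear": "calm_reassurance",
}

def should_emit(detected_emotion: str, intensity: float) -> Optional[str]:
    """Return an emotion to emit, or None to stay neutral.

    Emitting randomly, or pairing the wrong emotion to the moment, would
    expose the "pretend" nature of the emission (or enrage the human).
    """
    if intensity < 0.5:  # weak signal: staying neutral is the safer choice
        return None
    return APPROPRIATE_EMISSION.get(detected_emotion)  # None if no good pairing
```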

What about Adler’s indication that you need to get into the heart of the other person? That’s murky from an AI perspective. The question is whether or not the AI can skip the heart part and still come across as a seemingly emotionally astute entity that also expresses emotion.

I think that’s a pretty easy challenge, far less so than the intellect challenge of being able to exhibit intelligence (aka the Turing Test). My answer is that yes, the AI will be able to convince people that it “understands” their emotion and that it appears to also experience and emit emotion.

Maybe not all of the people, and maybe not all of the time, but for a lot of the time and for a lot of the people.

I’ve altered Lincoln’s famous saying and omitted the word “fool” in terms of fooling people. Is the AI, which was developed by humans (which I mention so that you won’t believe the AI somehow concocted things on its own), fooling people? And if so, is it wrong and should it be banned? Or is it a good thing that will be a boon? Time will tell. Or maybe we should ask the affective empathetic AI and see what it says and does.

For a free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: @LanceEliot

For my Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

Copyright © 2019 Dr. Lance B. Eliot

