Blog: Human-To-Machine Depersonalization Considerations and AI Systems: The Case of Autonomous Cars

By Lance Eliot, the AI Trends Insider

Is automation and in particular AI leading us toward a service society that depersonalizes us?

In a prior column, I described how AI can provide deep personalization via Machine Learning. Readers responded by asking about concerns, often expressed in the mass media, that advanced automation might depersonalize us as humans, and wanted to know what I had to say about those qualms. Thanks to that feedback, I'm covering the depersonalization topic herein.

For my prior column on deep personalization via Machine Learning, see: https://www.aitrends.com/selfdrivingcars/ai-deep-personalization-the-case-of-ai-self-driving-cars/

Some pundits say yes, arguing that the human touch in providing services is becoming scarcer and scarcer, and that eventually we'll all be served by AI systems that treat us humans as though we are non-human. More and more, we'll see and experience humans losing their jobs to AI and being replaced by automation that is less costly and notably less caring, disregarding us as individuals and abandoning personalized service. Those uncaring and heartless AI systems will simply stamp out whatever service is being sought by a human and there won't be any soul in it, there won't be any spark of humanity; it will be push-button automata only.

In my view, those pundits are seeing the glass as only half empty. They seem either not to notice, or not to want to observe, that the glass is also half full. Let me share with you some examples of what I mean.

Before I do so, please be aware that the word "depersonalization" can have a multitude of meanings. In the clinical or psychology field, it means feeling detached or disconnected from your body and would be considered a type of mental disorder. I'm not using the word in that manner herein. Instead, the theme I'm invoking involves humanization or dehumanization in the services around us, contrasting service that is personalized to human needs with service that is depersonalized from them. That's a societal frame rather than a purely psychological portrayal.

With that said, let’s use an example to get at my depersonalization and personalization theme.

Banking ATM As An Example Of Alleged Depersonalization

Banking is an area rife with prior predictions of how bad things would become once ATMs took over and there would no longer be in-branch banking with human tellers. We would all be slaves to the machines. You were going to stand in front of the ATM and yell out "where have all the humans gone?" as you fought with the banking system to perform a desired transaction.

Recently, I went to my local bank branch to make a deposit. I was in a hurry and opted to squeeze in this errand on my way to a business meeting. The deposit was somewhat sizable so I opted to go and perform the transaction with the human teller, rather than feed my deposit into the “impersonal” ATM.

As I came up to the teller window, the teller provided a big smile and welcomed me to the bank. How's my day going, the teller asked. The teller proceeded to mention that it had been a busy morning at the branch. Looking outside the branch window, the teller remarked that it looked like rain was on its way. I wanted to make the deposit and get going, yet could see that the chitchat, though friendly and warm, would keep dribbling along and wasn't seemingly in pursuit of my desired transaction.

I presented my check to be deposited and was asked to first run my banking card through the pad at the teller window. I did so. The teller looked at my info on their screen and noted that I had not talked with one of their bankers for quite a while, offering to bring over a bank manager to say hello. I declined the gracious offer. The teller then noted that one of my CDs was coming due in a month and suggested that I might want to consider various renewal options. Not just now, I demurred.

The teller finally proceeded with the deposit and then, just as I was stepping away to head out, called me back to mention that they were having a special this week on opening new accounts. Would I be interested in doing so? As you can imagine, in my haste to get going, I quickly said no thanks and tried to make my way to the door. Turns out the teller had already signaled the bank manager, and I was met at the door by the pleasant manager, who thanked me for coming in and handed me a business card in case I had any follow-up needs.

Let’s unpack or dissect my in-branch experience.

On the one hand, you could say that I was favorably drenched in high-touch service. The teller engaged me in dialogue and tried to create a human-to-human connection. Rather than solely focusing on my transaction, I was offered a bevy of other options and possibilities. My banking history at the bank was used to identify opportunities for me to enhance my banking efforts at the bank. All in all, this would seem to be the picture-perfect example of human personalized service.

Having done lots of systems work in the banking industry, I know how hard it can be to get a branch to provide personalized and friendly service. One bank that I helped fix had a lousy reputation when I was first brought in, known for having branches that were terribly run. Whenever you went into a branch it was like going to a gulag. There were long lines, the tellers were ogres, and you felt as though you were a mere cog in a gigantic wheel of their banking procedures, which often turned the simplest banking acts into a torturous affair.

Thus, my recent experience of making my deposit at my local branch should be a shining example of what a properly run bank branch is all about. If I were to have to choose between the somewhat over-friendly experience versus going to a branch that was like descending into Dante’s inferno, I certainly would choose the overly friendly case.

Nonetheless, I'd like to explore the enriched banking experience more deeply. I was in a hurry. The friendly dialogue and attempts to upsell me were getting in the way of a quick in-and-out banking transaction. In theory, the teller should have judged that I was in a hurry (I assure you that I offered numerous overt signals as such) and toned down the ultra-service effort. It is perhaps hard to fault the teller; one might instead point at whatever pressures there are on the teller to do the banking jingle, likely drummed into the teller as part of the bank's training efforts and baked into performance metrics and bonuses.

In any case, I walked out of the branch grumbling and vowed that in the future I would use the ATM. Unfair, you say? Maybe. Am I being a whiner by “complaining” about too much personalized service? Maybe. But it shouldn’t be that I have to make a choice between the rampant personalized service versus the utterly depersonalized gulag service. I should be able to choose which suits my service needs at the time of consuming the service.

About a week later, I had to make another deposit and this time used the drive-thru ATM. After entering my PIN, the screen popped up asking if I was there to make a deposit and, if so, offered a one-click option to immediately shift into deposit-taking mode. I used the one-click and slipped my check into the ATM, which scanned it and asked me to confirm the amount, which I did. The ATM then indicated that I usually don't get a printed receipt and that, unless I wanted one this time, it wasn't going to print one out.

I was satisfied that the deposit seemed to have been made and so I put my car into gear and drove on. The entire transaction time must have been around 30 seconds at most, making it many times faster than when I had made a deposit via the teller. I did not have to endure any chitchat about the weather. I was not bombarded with efforts to upsell me. In-and-out, the effort was completed, readily, without fanfare.

Notice that the ATM had predicted that I was there to make a deposit. That was handy. Based on my last several transactions with the bank, the banking system had analyzed my pattern and logically deduced that I was most likely there to make a deposit. And, I was offered a one-click option to proceed with making my deposit, which showcased that not only was my behavior anticipated, the ATM tailored its actions to enable my transaction to proceed smoothly.
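The kind of pattern-based anticipation the ATM exhibited can be sketched in a few lines. This is a hypothetical illustration of my own devising, not the bank's actual system: it simply checks whether one transaction type dominates recent history and, if so, offers it as the one-click default.

```python
from collections import Counter

def predict_transaction(history, min_confidence=0.6):
    """Guess the customer's most likely next transaction from recent history.

    history: list of transaction-type strings, most recent last.
    Returns the predicted type, or None if no single type dominates.
    """
    recent = history[-10:]  # look at the last several transactions
    if not recent:
        return None
    kind, count = Counter(recent).most_common(1)[0]
    if count / len(recent) >= min_confidence:
        return kind
    return None

# A customer who almost always deposits gets a one-click deposit prompt.
history = ["deposit", "deposit", "withdrawal", "deposit", "deposit"]
print(predict_transaction(history))  # → deposit
```

When no pattern dominates (say, a 50/50 mix of deposits and withdrawals), the sketch returns None and the ATM would fall back to its generic menu rather than guess wrongly.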

Would you say that my ATM experience was one of a personalized nature or a depersonalized nature?

Deciding On Whether There Is Depersonalization Or Personalization

We always tend to assume that whenever something is “depersonalized” that it must be bad. The word has a connotation that suggests something untoward. Nobody wants to be depersonalized. In the case of the ATM, I wasn’t asked about the weather and there wasn’t a smiling human that spoke with me. I interacted solely with the automation. If that’s the case, ergo I must be getting “depersonalized” service, one would assume.

Yet, my ATM experience was actually personalized. The system had anticipated that I wanted to make a deposit. It followed up by making the act of depositing easy. Once I had made the deposit, the ATM did not just spit out a receipt, which often is what happens (and I frequently see deposit receipts lying on the ground near the ATM, presumably left behind by hurried humans). The ATM knew via my prior history that I tended to not get a receipt, and therefore the default was that it would not produce one in this instance.

This ATM experience illustrates how even simple automation can provide a personalized service experience, and AI capabilities could uncover far more sophisticated patterns in my banking behavior. Imagine what more could be done if we added some hefty Machine Learning or Deep Learning into this.

I’ve used the case of the banking effort to help illuminate the notion of what constitutes personalization versus depersonalization. Many seem to assume that if you remove the human service provider, you are axiomatically creating a depersonalized service. I don’t agree.

Take a look at Figure 1.

As shown, the performance of a service act consists of the service provider and the receiver of the service, the customer. Generally, when considering depersonalized service, we think about the service provider as being perfunctory, dry, lacking in emotion, unfeeling, aloof, and otherwise without an expression of caring for the customer. We also then think about the receiver of the service, the customer, and their reaction of presumably becoming upset at the lack of empathy to their plight as they are trying to obtain or consume the service.

I argue that the service provider can provide either a personalized or a depersonalized service, even if it is a human providing the service. The mere act of having a human provide a service does not make it personalized. I'm sure you've encountered humans that treated you as though you were inconsequential, as though you were on an assembly line, offering very little if any personalization, likely bordering on or fully enmeshed in depersonalization.

A month ago, I ventured to the Department of Motor Vehicles (DMV) office and was amazed at how depersonalized they were able to make things. Each of the human workers in the DMV office had that look as though they would prefer to be anyplace but working in the DMV. The people flowing into the DMV were admittedly rancorous and often difficult to contend with. I'm sure these DMV workers had their fill each day of people that were grotesquely impolite and cantankerous.

In any case, there were signs telling you to stand here, move there, wait for this, wait for that. Similar to getting through an airport screening, this was a giant mechanism to move the masses through the process. I’m sure it was as numbing for the DMV workers as it was for those of us there to get a driver’s license transaction undertaken.

Let’s all agree then that you can have a human that provides a personalized or a depersonalized service, which will be contingent on a variety of factors, such as the processes involved, the incentives for the human service provider, and the like.

I'd like to next assert that automation can also provide either a personalized service or a depersonalized service. Those are both viable possibilities.

Depends On How The Automation Is Devised

It all depends upon how the automation has been established. In my view, if you add AI to providing a service, and do it well, you are going to have a solid chance of making that service personalized. This won’t happen by chance alone. In fact, by chance alone, you are probably going to have AI service that seems depersonalized.

We might at first assume that the automation is going to be providing a depersonalized service, likewise we might at first assume that a human will provide a personalized service. That’s our usual bias. Either of those assumptions can be upended.

Furthermore, it can be tricky to ascertain what personalized versus depersonalized service consists of. In my example about the bank branch and the teller, everything about the setup would seem to suggest a high-touch personalized service. I’m sure the bank spent a lot of money to try and arrive at the high-touch level of service. Yet, in my case, in the instance of wanting to hurriedly do a transaction, the high-touch personalized service actually defeated itself.

That's the problem with personalized service that is ironically inflexible. The irony is in the assumption that personalized means you will be incessantly presented with seeming personalization. Instead, the personalization should be based on the human receiving the service and what makes the most sense for them. Had the teller picked up on the fact that I was in a hurry, it would have been relatively easy to switch into a mode of aiding my transaction and getting me out of the bank, doing so without undermining the overarching notion of personalization.

For those of you that are AI developers, I hope that you keep in mind these facets about depersonalization and personalization. Namely, via AI, you have a chance at making a service providing system more responsive and able to provide personalization, yet if you don’t seek that possibility, the odds are that your AI system will appear to be the furtherance of depersonalization.

Humans interacting with your AI system are more likely to be predisposed to the belief that your AI will be depersonalizing.

In that sense, you have a larger hurdle to jump over. In the case of a human providing a service, by-and-large we all tend to assume that it is likely to be imbued with personalization, though for circumstances like the DMV and airport screening we’ve all gotten used to the idea that you are unlikely to get personalized service in those situations (when it happens, we are often surprised and make special note of it).

You also need to take into account that personalization of an inflexible nature can undermine the very personalization being delivered. Using the bank branch example as an analogy: if you do have AI that seems to provide personalization, don't go overboard and force whatever monolithic personalization you came up with onto all cases of providing the service. Truly personalized service should be personalized to the needs of the customer at hand.

AI Self-Driving Cars As An Example

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. There are numerous ways in which the AI can either come across as personalized or depersonalized, and it is important for auto makers and tech firms to realize this and devise their AI systems accordingly.

Allow me to elaborate.

I'd like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It's all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let's focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here’s the usual steps involved in the AI driving task:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
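The steps above can be sketched as a single processing cycle. Everything here is a stub of my own devising (the function names and placeholder logic are illustrative assumptions, not any auto maker's actual code), meant only to show how the five stages chain together:

```python
def interpret(feed):
    # Stage 1 stub: turn a raw sensor feed into detected objects.
    return {"objects": feed.get("objects", [])}

def fuse(readings):
    # Stage 2 stub: merge overlapping detections from all sensors,
    # de-duplicating by object id into one coherent view.
    merged = {}
    for reading in readings:
        for obj in reading["objects"]:
            merged[obj["id"]] = obj
    return list(merged.values())

class WorldModel:
    # Stage 3: the virtual world model tracks what surrounds the car.
    def __init__(self):
        self.objects = []
    def update(self, fused):
        self.objects = fused

def plan_actions(world_model):
    # Stage 4 stub: a trivially cautious planner — brake if anything
    # is being tracked, otherwise cruise.
    return "brake" if world_model.objects else "cruise"

def issue_commands(plan):
    # Stage 5 stub: translate the plan into car control commands.
    if plan == "brake":
        return {"throttle": 0.0, "brake": 1.0}
    return {"throttle": 0.3, "brake": 0.0}

def drive_cycle(raw_sensor_feeds, world_model):
    """One pass through the five-step AI driving task listed above."""
    readings = [interpret(feed) for feed in raw_sensor_feeds]
    fused = fuse(readings)
    world_model.update(fused)
    plan = plan_actions(world_model)
    return issue_commands(plan)
```

In a real self-driving car this loop runs continuously, many times per second, with each stage being vastly more sophisticated than these stubs suggest.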

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of depersonalization and personalization, let’s consider how AI self-driving cars can get involved in and perhaps mired into these facets.

Bike Riders And AI Self-Driving Cars

I was speaking at a recent conference on AI self-driving cars and during the Q&A there was an interesting question or point made by an audience member. The audience member stood up and derided human drivers that often cut off bike riders.

She indicated that to get to the conference, she had ridden her bike, which she also rides when going to work (this event was in the Silicon Valley, where bike riding for getting to work is relatively popular). While riding to the convention, she had narrowly avoided getting hit at an intersection when a car took a right turn and seemed to have little regard for her presence as she rode in the bike lane.

You might assume that the car driver was not aware that she had been in the bike lane and therefore mistakenly cut her off. If that was the case, her point could be that an AI self-driving car would presumably not make that same kind of human error. The AI sensors would hopefully detect a bike rider and then appropriately the AI action planner would attempt to avoid cutting off the bike rider.

It seemed though that she believed the human driver did see her. The act of cutting her off was actually deliberate. The driver was apparently of a mind that the car had higher priority over the bike rider, and thus the car could dictate what was going to happen, namely cut-off the bike rider so that the car could proceed to make the right turn. I’m sure we’ve all had situations of a car driver that wanted to demand the right-of-way and figured that a multi-ton car has more heft to decide the matter than does a fragile human on a bicycle.

What would an AI self-driving car do?

Right now, assuming that the AI sensors detected the bike rider, and assuming that the virtual world model was updated with the path of the bike rider, and assuming that the AI action planner portion of the system was able to anticipate a potential collision, presumably the AI would opt to brake and allow the bike rider to proceed.

We must also consider the traffic situation at the time, since we don’t know what else might have been happening. Suppose a car was on the tail of the AI self-driving car and there was a risk that if the AI self-driving car abruptly halted, allowing the bike rider to proceed, the car behind the AI self-driving car might smack into the rear of the AI self-driving car. In that case, perhaps the risk of being hit from behind might lead the AI to determine that the risk of cutting off the bike rider is less overall and therefore proceed to cut-off the bike rider.
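One way to frame that tradeoff is as a comparison of expected harm. This is a deliberately simplified sketch of my own (the probabilities and severity weights are assumed inputs from the upstream perception and planning stages, and a real AI action planner would weigh far more factors):

```python
def choose_maneuver(p_rear_collision, rear_severity,
                    p_bike_collision, bike_severity):
    """Pick the maneuver with the lower expected harm.

    p_*: estimated collision probabilities (0..1).
    *_severity: harm weights; an unprotected bike rider would be
    weighted far more heavily than a fender-bender between two cars.
    """
    expected_harm_if_brake = p_rear_collision * rear_severity
    expected_harm_if_proceed = p_bike_collision * bike_severity
    return "brake" if expected_harm_if_brake <= expected_harm_if_proceed else "proceed"

# A likely rear-end bump versus even a modest chance of striking the
# rider: the heavy severity weight on the rider tips toward braking.
print(choose_maneuver(0.3, 2.0, 0.2, 10.0))  # → brake
```

Note that with a near-certain rear-end collision and a near-zero chance of touching the rider, the same calculus would flip to proceeding, which is exactly the nuance discussed above.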

I mention this nuance about the AI self-driving car and its choice of what to do because of the oft-times assumption by many that an AI self-driving car is always going to do "the right thing" in terms of making car driving decisions. In essence, people often tell me about driving situations in which they assume an AI system would "not make the same mistake" that a human made, and yet this assumption is often made in a vacuum. Without knowing the context of the driving situation, how can we really say what the "right thing" to do was?

For my article about the human foibles of driving, see: https://www.aitrends.com/selfdrivingcars/ten-human-driving-foibles-self-driving-car-deep-learning-counter-tactics/

For the use of probabilities and uncertainty in AI self-driving cars, see my article: https://www.aitrends.com/ai-insider/probabilistic-reasoning-ai-self-driving-cars/

For the safety aspects of AI self-driving cars, see my article: https://www.aitrends.com/ai-insider/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

For my article about defensive driving for AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

In any case, you might argue that the question brought up by the audience member is related to personalization and depersonalization. If the human driver was considering the human bike rider in a depersonalized way, they might have made the cut-off decision without any sense of humanity being involved.

Here’s what might have been taking place. That’s not a human on that bicycle, it is instead a thing on an object that is moving into my path and getting in my way of making that right turn, the driver might have been thinking. Furthermore, the driver might have been contemplating this: I am a human and my needs are important, and I need to make that right turn to proceed along smoothly and not be slowed down. The human driver objectifies the bike rider. The bike is an impediment. The human on the bike is meshed into the object.

Now, we don’t know that’s what the human driver was contemplating, but it is somewhat likely. It is easy when driving a car to fall into the mental trap that you are essentially in a video game. Around you are these various objects that are there to get in your way. Using your video gaming skills, you navigate in and around those objects.

If this seems farfetched, you might consider the emergence of road rage. People driving a car will at times become emboldened while in the driver’s seat. They are in command of a vehicle that can determine life-or-death of others. This can inflate their sense of self-importance. They can become irked by other drivers and by pedestrians and decide to take this out on those around them.

For my article about road rage, see: https://www.aitrends.com/selfdrivingcars/road-rage-and-ai-self-driving-cars/

As I've said many times in my speeches and in my writings, it is a miracle that we don't have more road rage than we already have. It is estimated that in the United States alone we drive a combined 3.22 trillion miles per year. Consider those 250 million cars in the United States and the drivers of those cars, and how unhinged some of them might be, or might become as a result of being slighted, or perceiving they were slighted, while driving, and it really is a miracle that we don't have more untoward driving acts.

Encountering Humans In A Myriad of Ways

Back to the bike rider that got cut-off, there is a possibility that the human driver depersonalized the bike rider. This once again illustrates that humans are not necessarily going to provide or undertake personalized acts in what they do.

An AI self-driving car might or might not be undertaking a more personalized approach, depending upon how the AI has been designed, developed, and fielded.

Take a look at Figure 2.

As shown, an AI self-driving car is going to encounter humans in a variety of ways. There will be human passengers inside the AI self-driving car. There will be pedestrians outside of the AI self-driving car that it comes across. There will be human drivers in other cars, which the AI self-driving car will encounter while driving on the roadways. There will be human bike riders, along with other humans on motorcycles, scooters, and so on.

If you buy into the notion that the AI is by necessity a depersonalizing mechanism, meaning that the AI driver will act toward humans in a more depersonalized manner than human drivers presumably would, this seems to spell possible disaster for humans. Are all of these humans that might be encountered going to be treated as mere objects and not as humans?

The counter-argument is that the AI can be imbued with a form of personalization that would elevate the AI driver over the at-times depersonalizing human driver. The AI system might have a calculus that assesses the value of the bike based on the human riding the bike. Unlike the human driver of earlier mention, presumably the AI is going to take into account that a human is riding the bike.

In the case of interacting with human passengers, there is a possibility of having the AI make use of sophisticated Natural Language Processing (NLP) and socio-behavioral conversational computing. In some ways, this could be done such that the personalization of interaction is on par with a typical human driver, perhaps even better so.

Have you been in a cab or taxi in which the human driver was lacking in conversational ability and unable to respond when you asked where to find a good place to eat in town? Or, at the opposite extreme, have you been in a ridesharing car where the human driver was trying to be overly responsive by chattering the entire time, along with quizzing you about who you are, where you work, and what you do? That's akin to my bank teller example earlier.

Goldilocks Approach Is Best

AI developers ought to be aiming for the Goldilocks version of interaction with human passengers. Not too little conversation, and not too much. On some occasions, the human passenger will just want to say where they wish to go and not want any further discussion. In other cases, the human passenger might be seeking a vigorous dialogue. One size does not fit all.
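A Goldilocks policy can be as simple as following the passenger's lead. The cue thresholds below are illustrative assumptions of mine, not tuned values from any production NLP system:

```python
def choose_chat_level(utterance, asked_question=False):
    """Pick a conversational register for the in-car agent.

    A terse, destination-only command signals the passenger wants
    minimal chat; an explicit question invites full engagement;
    anything in between gets light conversation.
    """
    if asked_question:
        return "engaged"
    if len(utterance.split()) <= 5:
        return "minimal"   # e.g., "Airport, please" — just drive
    return "light"

print(choose_chat_level("Airport, please"))  # → minimal
```

A fuller system would also weigh tone of voice, ride length, and the passenger's past preferences, but the design principle is the same: the personalization adapts to the customer rather than running a fixed script.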

For socio-behavioral computing, see my article: https://www.aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/

For the use of ESP2, see my article: https://www.aitrends.com/selfdrivingcars/extra-scenery-perception-esp2-self-driving-cars-beyond-norm/

For how the AI might interact during family trips, see: https://www.aitrends.com/selfdrivingcars/family-road-trip-and-ai-self-driving-cars/

For the use of NLP for AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/car-voice-commands-nlp-self-driving-cars/

In terms of interacting with humans that are outside of the AI self-driving car, there is definitely a bit of a problem on that end of things.

Just the other day, I drove up to a four-way stop. There was another car already stopped, sitting on the other side of the intersection, and apparently waiting. I wasn't sure why the other driver wasn't moving forward. They had the right-of-way. Were they worried that I wasn't going to come to a stop? Maybe they feared that I was going to run through the stop signs, and so they were waiting to make sure I came to a stop.

Well, after fully coming to a stop, I watched to see what the other car was going to do. Still no movement. I realize that most drivers in my shoes would zoom ahead, figuring that whatever the issue was with the other driver, it didn't matter. I was concerned that the other driver might suddenly lurch forward and maybe ram into me as I drove through the intersection. They were already doing something weird, and so in my mind they were prone to weirdness of driving action.

I rolled down my window and waved my arm at the other car, suggesting that they were free to move ahead. The other driver rolled down their window, popped their head out, and yelled something unintelligible (I was too far away to hear them), and proceeded to drive forward. After the car cleared the intersection, I also proceeded forward.

In this case, we humans communicated directly with each other, albeit somewhat imperfectly. I’ve described this in my writings and speeches as the “head nod” problem of AI self-driving cars.

How will an AI self-driving car communicate with humans that are outside of the car? It cannot nod its head or wave its arms, unless we decide to put robots into the driver's seat. Some auto makers and tech firms are outfitting the exterior of their AI self-driving cars with special screens or displays, allowing the AI to communicate via those means with humans outside of the self-driving car.

If an AI self-driving car has no means to do head nods or hand waving, it would likely be ascribed as depersonalizing that aspect of the driving act. The inclusion of special exterior screens or displays is an attempt to personalize the AI, making it seem less aloof and less non-human.

How far should we go in this? There are some concept cars that have large eyeball-like globes on the front of the self-driving car, with animated eyes that move back-and-forth, in the manner that a human’s gaze might move. Useful? Creepy? Time will tell.

Human car drivers are supposed to use their blinkers to signify their driving actions. AI self-driving cars also use blinkers. In that manner, they are the same.

Human drivers often use little micro-movements of the car, such as angling the tires or edging the car in a particular direction, to suggest driving actions that are imminent. We don’t yet have AI self-driving cars mimicking this behavior, though I’ve predicted that we ought to and will do so.

Human drivers can try to make their car appear more conspicuous. This might include honking the horn. It can include turning your headlights on and off or using your high beams and then your low beams. These are all means by which the human driver can use to communicate with other humans. Likewise, the AI ought to be doing the same.
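To make the idea concrete, here is a minimal sketch of how an AI driving system might map its current intent to outward signals such as blinkers, headlight flashes, and messages on a hypothetical exterior display. All of the class names, intents, and display messages below are illustrative assumptions for this column, not any automaker’s actual design.

```python
# Hypothetical sketch: mapping the AI's driving intent to outward,
# human-readable signals (exterior display text, blinkers, headlight
# flashes). Names and messages are illustrative assumptions only.

from dataclasses import dataclass
from enum import Enum, auto

class Intent(Enum):
    YIELDING_TO_PEDESTRIAN = auto()
    WAITING_AT_STOP = auto()
    TURNING_LEFT = auto()
    PROCEEDING = auto()

@dataclass
class OutwardSignals:
    display_text: str          # shown on a hypothetical exterior screen
    left_blinker: bool = False
    flash_headlights: bool = False

def signals_for(intent: Intent) -> OutwardSignals:
    """Choose the outward signals for the current driving intent."""
    if intent is Intent.YIELDING_TO_PEDESTRIAN:
        return OutwardSignals("Waiting for you to cross")
    if intent is Intent.WAITING_AT_STOP:
        # The machine equivalent of a head nod: invite the other driver to go.
        return OutwardSignals("After you", flash_headlights=True)
    if intent is Intent.TURNING_LEFT:
        return OutwardSignals("Turning left", left_blinker=True)
    return OutwardSignals("Proceeding")
```

In the four-way-stop story above, a car with this kind of mapping could have flashed its headlights and shown “After you,” sparing both drivers the window-rolling and arm-waving.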

There is right now a dangerous timidity about most AI self-driving cars that makes them vulnerable to being pranked by humans. If you are a pedestrian and know that you can out-psych the AI by appearing to move from the curb into the street, getting the AI to bring the self-driving car to a halt mid-street, the odds are that lots of humans will do exactly that. Critics say we should outlaw such pranks. Though outlawing them might be one means, I vote that we focus on making the AI good enough that it cannot get pranked, just as human drivers generally cannot.
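One way to temper that timidity, sketched below, is to require sustained evidence that a pedestrian is actually committing to the street before the car initiates a yield, while still always stopping for anyone already in the roadway. The class name, thresholds, and frame counts are illustrative assumptions, not tuned or production values.

```python
# Hypothetical anti-pranking sketch: only yield once a pedestrian shows
# sustained movement toward the street over several consecutive
# observation frames, rather than halting at the first feint.
# Anyone already in the roadway always triggers a stop.

class PedestrianYieldFilter:
    def __init__(self, frames_required: int = 5):
        self.frames_required = frames_required  # consecutive frames of commitment
        self._streak = 0

    def observe(self, moving_toward_street: bool, in_roadway: bool) -> bool:
        """Return True if the car should yield on this observation frame."""
        if in_roadway:
            # Safety first: a person in the roadway always gets a stop.
            self._streak = self.frames_required
            return True
        if moving_toward_street:
            self._streak += 1
        else:
            self._streak = 0  # feinting back toward the curb resets the evidence
        return self._streak >= self.frames_required
```

A prankster who lunges toward the curb for a frame or two and then steps back never accumulates enough evidence to halt the car, while a genuinely crossing pedestrian still gets an unconditional stop the moment they enter the roadway.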

For my article about the head nod aspects, see: https://www.aitrends.com/selfdrivingcars/head-nod-problem-ai-self-driving-cars/

For the pranking of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/pranking-of-ai-self-driving-cars/

For egocentric AI systems, see my article: https://www.aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For the pedestrian roadkill issue, see: https://www.aitrends.com/selfdrivingcars/avoiding-pedestrian-roadkill-self-driving-cars/

For the conspicuity aspects, see my article: https://www.aitrends.com/selfdrivingcars/conspicuity-self-driving-cars-overlooked-crucial-capability/

Conclusion

I claim that depersonalization is not inevitable as AI systems become more prevalent.

If AI systems are designed and developed such that they lack forms of personalization, I’d grant that we’ll end up with a lot of depersonalizing automated systems. Yet the upside of using AI is that the chances of embracing personalization are enhanced. Let’s not squander that possibility.

AI self-driving cars are going to be one galvanizing lightning rod for qualms about depersonalization. Regrettably, in the design and development of AI self-driving cars, the personalization aspects are not yet being given their due by many of the automakers and tech firms. The belief by some is that these are edge or corner cases, meaning that we can wait to deal with them.

The first iterations of AI self-driving cars will likely determine the ongoing pace and acceptance of AI self-driving cars. We are setting ourselves up for a great deal of pushback if we delay or ignore the personalization factors. Those AI self-driving cars that seem to be depersonalizing will heighten the belief that they are not ready for prime time. Of course, that could be a correct assessment, namely that without proper personalization capabilities, maybe they shouldn’t be on our roadways.

Human-to-human involves personalization and depersonalization. AI-to-human also involves personalization and depersonalization. AI developers would be wise to seek the personalization side of things and overcome or avoid the depersonalization side of things. That’s what I personally say.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.

 
