
Blog: Debiasing AI, Got To Do So, Even For Driverless Cars


Dr. Lance Eliot, AI Insider

Hidden biases crop up in AI systems and need to be surfaced and debiased

Some people seem to think that computers are great because they are objective, they are unbiased, they are absent of any prejudices. Computers are all about numbers and number crunching. They are neutral when it comes to human biases.

Not so!

Computers are what we make of them. They are created by humans and thus carry into their inner workings the foibles of humans.

Whenever I hear someone lament that a computer system goofed up, and they act like that’s just the way things are, it makes me mad, since we both know that the computer was programmed in such a manner that it allowed the snafu to occur. Someone the other day told me that their paycheck was miscalculated, and they acted like it was just some machinery glitch. Who programmed the paycheck calculations? Who allowed the system to be in the state or condition it is in? Let’s not let the humans get away with pretending that computers will just be computers.

The realization that biases do exist in computer systems is finally starting to get some attention. There have been recent instances of AI systems that exhibited various biases. We’ve got to push back on the notion that this is just the cost of achieving AI. Instead, we need to be aware of how the biases got into the AI, identify ways to spot them, remove them or at least be cognizant of them, and not be lulled into thinking that there’s nothing that can be done.

Examples of AI Systems Biases

One of the most readily visible and understood examples arose when researchers (at Microsoft Research and Boston University) probed the Natural Language Processing (NLP) system “word2vec,” Google’s word-embedding tool, and it came up with this analogy: “Man is to Woman as Programmer is to Homemaker.” For those of you not living in this era who might wonder what’s wrong with that analogy, let me assure you that suggesting men are programmers and women are homemakers is a perhaps subtle but telltale gender-related bias.

How did this come to arise? The NLP system derived the analogy from the large datasets it was trained on. To that extent, you could say that the computer was being neutral, since it was merely doing a count of the associations between certain words. It presumably found a large count of males associated with programming, and a large count of females associated with homemaking. From there, it is an easy step to arrive at the analogy that it derived.
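To make this concrete, here is a minimal sketch of the vector arithmetic that produces such embedding analogies. It assumes a word2vec model already trained on a large news corpus; the gensim library and the model file name are illustrative assumptions on my part, not the researchers’ actual setup.

```python
# A minimal sketch of the vector arithmetic behind word-embedding analogies.
# Assumes a word2vec model previously trained on a large news corpus; the
# library (gensim) and the file name are illustrative, not the original setup.
from gensim.models import Word2Vec

model = Word2Vec.load("news_corpus_word2vec.model")  # hypothetical model file

# "Man is to Woman as Programmer is to ___?"
# Embedding analogies come from vector arithmetic:
#   vec("programmer") - vec("man") + vec("woman") ~= vec(answer)
result = model.wv.most_similar(
    positive=["programmer", "woman"],
    negative=["man"],
    topn=1,
)
print(result)  # On bias-laden training text, this can surface "homemaker".
```

The model isn’t being malicious; the arithmetic simply reflects which words co-occur in the training text.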

Should we just accept that the AI landed on that aspect and not consider it bias? Would you say that the AI itself is not biased and that it is merely reflecting the underlying data and human values? Even if we concede those aspects, the problem is that the AI system is going to report these results to humans, and those humans will take it as somehow “objective” and thus assume it must be true.

Here’s another famous example. The Nikon S630 camera had a nifty new feature that would try to detect whether someone was blinking when a picture was taken. This is a handy feature. I’m sure you’ve taken a picture in which someone blinked, not realized it until a day later, and wished you had known right away so you could retake the picture. The Nikon camera instantly scans each image to alert you that someone has blinked. Great!

The problem was that when the Nikon S630 was used to take pictures of people whose eyes are naturally more narrowly open, it reported that they were blinking. Outrage was expressed by the Asian community that the camera was biased against them. This turned out to be quite an embarrassment for Nikon, and the company assured the world the behavior was completely unintentional.

Unintentional or intentional, the crux of this is that we need to be on the watch for AI that has biases.

We also need to be aware of how the biases crept into the AI to begin with.

Developers of AI systems need to be mindful of watching for the biases and also mindful of what to do if the biases are there. Let’s assume that most of the time the biases are not being purposely planted (which, of course, does arise as a possibility), and that instead they occur by happenstance. Happenstance, though, is not a valid excuse for failing to realize it happens and for doing something about it.

Here’s another notable example of how Machine Learning or Deep Learning can get itself mired into “learning” the wrong thing. It is said that the Department of Defense was trying to analyze pictures of tanks. They wanted to be able to use the computer to distinguish US tanks from Russian tanks. Rather than programming it per se, they used lots of pictures of tanks and labeled each picture as showing either a Russian tank or a US tank. At first, it seemed that the system was able to distinguish between the two. Upon more careful inspection, it turned out that the algorithm was focusing only on how grainy each photo was. In other words, the Russian tank photos tended to be very grainy, having been taken under less than perfect photographic conditions, while the US tanks were perfectly photographed. The algorithm had simply latched onto the notion that the difference between Russian tanks and US tanks was that one was grainy and the other was not.
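As a toy reconstruction of that pitfall (entirely synthetic data, not the actual Department of Defense system), the sketch below shows a classifier scoring nearly perfectly on the task while looking at nothing but each image’s graininess:

```python
# Toy version of the "grainy tank photos" pitfall: the label is accidentally
# correlated with image noise, so a classifier scores well without ever
# learning what a tank looks like. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, pixels = 1000, 64 * 64

# "Russian tank" photos: grainy (high-variance noise); "US tank" photos: clean.
grainy = rng.normal(0.5, 0.30, size=(n // 2, pixels))
clean = rng.normal(0.5, 0.05, size=(n // 2, pixels))
X = np.vstack([grainy, clean])
y = np.array([1] * (n // 2) + [0] * (n // 2))  # 1 = "Russian", 0 = "US"

# Feed the model a single feature: per-image pixel variance (graininess).
graininess = X.var(axis=1, keepdims=True)
X_train, X_test, y_train, y_test = train_test_split(
    graininess, y, test_size=0.3, random_state=0
)

clf = LogisticRegression().fit(X_train, y_train)
print("accuracy from graininess alone:", clf.score(X_test, y_test))
# Near-perfect accuracy -- and the model knows nothing about tanks.
```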

Impacts of AI Biases for AI Driverless Cars

What does this have to do with self-driving driverless autonomous cars?

At the AI Cybernetic Self-Driving Car Institute, we are working on debiasing the AI that is driving self-driving cars. We are making developers aware of the biases that can creep into the AI, and we are creating tools to detect and try to prevent or mitigate such biases.

One now well-known example involves GPS systems and the routing of travel plans. Some have already noted that there are potential biases built into various GPS travel planners. For example, a route from one city over to another might intentionally avoid going through downtown areas that are blighted. The traveler does not know that the algorithm has purposely chosen such a route.
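Here is a hedged sketch of how such a hidden bias could live inside a route planner: the cost function quietly inflates road segments tagged as passing through blighted areas, so the “shortest” route silently avoids them. The graph, the tags, and the penalty factor are all invented for illustration.

```python
# Sketch of a route planner with a hidden bias: edges through areas tagged
# as "blighted" get an undisclosed cost multiplier, so the chosen route is
# not the true shortest path. Graph and penalty are made up for illustration.
import heapq

# node -> list of (neighbor, miles, passes_through_blighted_area)
graph = {
    "A": [("B", 2.0, False), ("C", 1.0, True)],
    "B": [("D", 2.0, False)],
    "C": [("D", 1.0, True)],
    "D": [],
}
HIDDEN_PENALTY = 3.0  # never shown to the traveler

def biased_cost(miles, blighted):
    return miles * (HIDDEN_PENALTY if blighted else 1.0)

def plan_route(start, goal):
    # Standard Dijkstra, but over the biased cost rather than true distance.
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, miles, blighted in graph[node]:
            heapq.heappush(
                frontier, (cost + biased_cost(miles, blighted), nbr, path + [nbr])
            )
    return float("inf"), []

print(plan_route("A", "D"))
# Picks A->B->D (4 miles) even though A->C->D is only 2 miles of real road.
```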

Some say that by purposely avoiding the bad areas of town, the AI is essentially hiding from the traveler that there are bad parts of town. Maybe if the traveler witnessed those areas, they would be moved to help improve them. On the other hand, some say that the traveler probably does not want to see the bad parts of town, and so the GPS system is doing them a favor, and presumably they would be thankful that it routed them along a safer route.

The key here is that there is a hidden bias in the routing system. It is unknown to the human traveler. At least if the human traveler knew the bias existed, they could then make their own choice about what they wanted to do. Furthermore, if nobody even knows the bias exists, not even the developers of the routing system, that’s even more disconcerting, since then the bias happens and no one is the wiser about it.

You can imagine how dangerous these hidden biases are when you think about AI that is doing financial decision making. There are now apps that will automatically decide whether someone is worthy of a loan. The AI in such an app might have biases about who is loan worthy. The maker of the app might not realize it, and those trying to get loans might not realize it. There are legal efforts underway to force such app makers to be more aware of biases in their software and to make sure that any bias is either publicized or excised.
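One kind of audit those legal efforts point toward is a disparate-impact check. Below is a minimal sketch that compares approval rates across groups and flags any group whose rate falls below 80% of the highest group’s rate, a rule of thumb borrowed from US disparate-impact analysis; the loan records are fabricated.

```python
# Minimal disparate-impact check for an automated loan decider: flag any
# group whose approval rate is below 80% of the best group's rate (the
# "four-fifths" rule of thumb). All records here are fabricated.
from collections import defaultdict

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    approvals[d["group"]] += d["approved"]

rates = {g: approvals[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    verdict = "FLAG" if rate < 0.8 * best else "ok"
    print(f"group {group}: approval rate {rate:.0%} -> {verdict}")
# Group B at 25% versus group A at 75% fails the four-fifths check.
```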

For self-driving cars, we can revisit the famous Trolley problem to consider how biases might get mired into the AI.

The Trolley problem asks: when a self-driving car needs to make a real-time decision between, say, killing the occupants of the car versus killing a pedestrian, which way should the self-driving car AI go? This could happen if the self-driving car is going along a street and suddenly a child jumps out into the middle of the road. If the self-driving car doesn’t have sufficient time to avoid hitting the child, what should it do? It could command the car to swerve off the road and perhaps ram into a tree, possibly killing the occupants of the car but saving the child. Or, if the only recourse to save the occupants is to hit the child, then perhaps the AI proceeds to hit and kill the child.

How does the AI come to make these kinds of life and death decisions? Some would say that we should have the AI learn from thousands upon thousands of instances that are in a training set, and the neural network would find patterns to go on. If the pattern was that the child dies, so be it. But, wouldn’t you as the occupant or owner of the self-driving car want to know that’s the inherent bias of the car? You probably would.

What To Do About Hidden Biases

There are some advocating that all self-driving cars will need to report what their AI biases consist of. Similar to how a traditional car is sold with disclosures about how fast it goes, whether it is a gas guzzler, and so on, there are some clamoring for legislation that would require a self-driving car to be sold with all sorts of stated aspects about its biases. The buyer could then decide if they wish to buy such a car.

This, though, doesn’t work so well for those who are mere occupants in a self-driving car. If you happen to get into a self-driving car that is being offered by a ride sharing service, how are you going to know what biases the AI has?

If it were to showcase all its biases, you might spend more time reading or hearing about them than the length of the ride itself. You could also argue that when you get into a human-driven cab, you don’t ask the ridesharing or taxi driver to recite all of their human biases. Maybe the driver doesn’t like green-eyed people and is determined to run them over. You have no way of knowing about that bias.

Furthermore, the bias of the AI of self-driving car is not necessarily static.

When you buy a self-driving car and it is fresh off the lot, it might have biases X. Once it is driving around town, it is presumably doing additional learning, and so it is likely gaining new biases. Those biases are now Y. You bought the car believing it had biases X, and now a mere week later the biases are actually Y. This reflects the fact that we expect self-driving cars to learn and adapt over time. While it is driving along, perhaps it comes up with a pattern that if there are more than three children on the side of the road they will likely attack the car, and so the AI then always opts to swerve the car away from places where three children are grouped together, or maybe speeds up to avoid them.
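One way to make that X-versus-Y drift visible would be to snapshot the car’s behavior profile at purchase and diff it against a later snapshot. This is only a sketch; the profile format, rates, and tolerance are made up for illustration.

```python
# Sketch of detecting "bias drift": compare the behavior profile recorded
# at purchase (biases X) against one taken after on-road learning (biases Y).
# The profile keys, rates, and tolerance are invented for illustration.
snapshot_at_purchase = {"swerve_near_children": 0.02, "speed_up_near_groups": 0.01}
snapshot_after_week = {"swerve_near_children": 0.40, "speed_up_near_groups": 0.35}

DRIFT_LIMIT = 0.10  # illustrative tolerance for behavior change

for behavior, baseline in snapshot_at_purchase.items():
    current = snapshot_after_week[behavior]
    if abs(current - baseline) > DRIFT_LIMIT:
        print(f"drift in {behavior}: {baseline:.0%} -> {current:.0%}")
```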

If you are an occupant in the car, you would not likely have any means of knowing that this bias has formed. You might think it odd that the self-driving car sometimes speeds up in places where there doesn’t seem to be any obvious reason to do so, but you would not necessarily be able to connect the dots as to what the self-driving car is doing. Unless the self-driving car tells you what it is doing, you might not realize the bias is there. Of course, even the self-driving car itself might not realize what it is doing and might have only some morass of a neural network that is guiding it along.

Built-in Biases and Emergent Biases

You need to keep in mind then that there might be built-in bias. Plus, there is likely to be emergent bias.

These biases are tricky to catch because they can occur unintentionally, they can be non-obvious, and they can potentially come and go as the AI system is changing and learning over time.

We’re developing a macro-AI capability that essentially acts as a self-awareness for the AI of the self-driving car.

You might think of this as the AI watching the AI, trying to spot any behaviors that seem to exhibit biases. Humans that are reflective do the same thing. I might be watching myself to make sure that I don’t treat males and females differently. If I get into a situation where I am suddenly treating a female or a male differently, and I am self-aware, then I might realize I seem to be violating a core principle. I can then either stop the behavior, or at least be aware of the behavior happening and make an explicit decision whether to continue or not.

The same goes with debiasing the AI of self-driving cars. We are striving to help developers keep from letting biases get into their AI at the time of development, and also providing a check-and-balance system capability when the AI is in the wild. The real-time check-and-balance can then deal with adaptive behaviors of the AI and try to either overturn the bias or at least alert the self-driving car about the bias. It could also warn the auto maker, or the owner or occupant of the self-driving car.
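As a rough sketch of what such a real-time check-and-balance might look like, the monitor below logs each maneuver decision along with its context and raises an alert when action rates diverge sharply across contexts. The context labels and the threshold are illustrative assumptions, not our actual implementation.

```python
# Sketch of an "AI watching the AI" monitor: log each maneuver decision
# with its context, then alert when evasive-action rates diverge across
# contexts. Context labels and threshold are illustrative assumptions.
from collections import defaultdict

class BiasMonitor:
    def __init__(self, threshold=0.2):
        self.threshold = threshold                  # max tolerated rate gap
        self.counts = defaultdict(lambda: [0, 0])   # context -> [actions, total]

    def record(self, context, took_evasive_action):
        stats = self.counts[context]
        stats[0] += int(took_evasive_action)
        stats[1] += 1

    def audit(self):
        rates = {c: a / t for c, (a, t) in self.counts.items() if t > 0}
        if len(rates) < 2:
            return []
        gap = max(rates.values()) - min(rates.values())
        if gap > self.threshold:
            return [f"ALERT: evasive-action rates diverge by {gap:.0%}: {rates}"]
        return []

monitor = BiasMonitor()
# Hypothetical logged decisions from the driving AI:
for _ in range(40):
    monitor.record("children_grouped_nearby", took_evasive_action=True)
for _ in range(40):
    monitor.record("adults_grouped_nearby", took_evasive_action=False)
print(monitor.audit())  # Surfaces the emergent "swerve from children" pattern
```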

Biases that get embedded into a collective being used by multiple self-driving cars will be both harder to detect and yet potentially easier to solve. If multiple self-driving cars are sharing their experiences into a centralized system, you could put a monitoring tool onto the centralized system to try to find biases that are creeping into the collective from the self-driving cars.

Any such bias can then potentially be remedied, with the fix quickly shared back out to the self-driving cars via an over-the-air update. The downside is that an over-the-air update could also inadvertently push out a new bias, if you didn’t realize the new bias was there.
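A fleet-level version of that monitoring might pool the per-car reports centrally, where a pattern too rare to notice in any one car’s log becomes plainly visible in the aggregate. Everything in this sketch, names and numbers alike, is invented for illustration.

```python
# Sketch of fleet-level bias monitoring: each car uploads local behavior
# counts, and the centralized system pools them. An anomaly that looks like
# noise in one car's log can stand out in the fleet-wide total.
from collections import Counter

def aggregate(fleet_reports):
    pooled = Counter()
    for report in fleet_reports:
        pooled.update(report)
    return pooled

# 300 cars, each with one or two odd events buried among ~500 normal ones.
fleet_reports = [
    Counter(speed_up_near_children=1, normal_pass=499),
    Counter(speed_up_near_children=2, normal_pass=498),
    Counter(speed_up_near_children=1, normal_pass=499),
] * 100

pooled = aggregate(fleet_reports)
events = pooled["speed_up_near_children"]
print(f"{events} suspicious maneuvers across {sum(pooled.values())} events")
# 400 occurrences of the same rare pattern is a fleet-wide signal, even
# though each individual car logged only one or two.
```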

Conclusion

Debiasing of AI is a growing topic and will continue to get an increasing amount of attention.

The more obvious areas, such as image recognition and loan decisions, will be the first places where debiasing gets the most effort. For self-driving cars, the auto makers are right now struggling just to get cars that can drive, let alone worrying about debiasing them. Once we have some self-driving cars on our roadways, and once those self-driving cars and their AI make some life or death decisions, I am betting that all of a sudden the debiasing of AI for self-driving cars will become a top-of-mind issue for the general public, regulators, auto makers, and everyone else.

That’s my “biased” opinion!

For a free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: @LanceEliot

For my Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

Copyright © 2019 Dr. Lance B. Eliot
