
Blog: Intro to AI Ethics



Ethical considerations when building and interacting with Artificially Intelligent systems

Why Ethics?

To even begin to think about AI ethics, we must first have a primer on ethics in general. As an engineer or another non-philosopher, it can be very easy to forget about ethics and simply build systems for the sake of building cool things. We must, however, be aware of the potential outcomes of our build decisions when it comes to highly complex, sophisticated, and potentially impactful systems, especially systems whose outcomes are going to be highly controversial.

Ethics is the branch of philosophy concerned with grounding decisions, beliefs, policies, and so on in some framework for deciding right and wrong. Ethics seeks to resolve questions of human morality: by deriving a moral system, we can ascribe value to an action or belief. There are three main areas of study in ethics, each of which can be further broken into subcategories:

1. Applied Ethics — the study of what is right, just, and valuable as applied to real-world situations

2. Normative Ethics — the study of how people should, or ought to, act

3. Meta-ethics — the pursuit of understanding what good and bad actually are, and what these concepts really mean


Through the lens of ethics, we can apply some of these principles to work out which forms and uses of AI maximize good in the world and how AI ought to behave. We can examine what it means for an Artificially Intelligent system to have a good impact on the world and decide how we should approach these new possibilities to maximize good for society. We will also need to establish what good actually means and what our desired outcomes are.

We talk a lot about building benevolent technology… Our technology reflects our values. — Fei-Fei Li

Job Loss

I think this problem is worth starting with, as it probably has the most noticeable effect on the largest number of people. It’s no surprise that many have already lost their jobs to automation, and no surprise that many more will lose theirs going forward. The job loss really began in manufacturing, where control systems and robotics have in many cases improved safety, reliability, and output. This will only continue as more sophisticated systems are introduced.

If we can agree from an ethical standpoint that human safety is a good outcome, then maybe we can begin justifying this transition. Taking humans out of the line of fire in manufacturing environments has drastically reduced the number of injuries and deadly accidents. And that’s just the start. Automated systems can achieve higher output with higher reliability, and profit-maximizing corporations will always seek ways to lower costs and increase output. It seems that in many cases we can agree that automating manufacturing is a net positive, despite the job loss. However, I think we can also agree that training programs, assistance re-entering the labor market, and similar support should be available for workers who lose their jobs as a result of automation.

What about service industries? In manufacturing, we added the safety factor as a key argument for the job loss being warranted. What about service industries, where safety is not really an issue? Let’s introduce some more arguments for why automation in service industries is potentially a net good for society. If you’re in the camp that believes the loss of jobs is not worth it in this case, what if those service workers could be retrained into jobs of similar pay? Are we not then maximizing efficiency? The worker whose job was automated away is now available to fill a role elsewhere. In this instance, we’ve both increased efficiency through automation in that workforce and freed up a unit of labor for work elsewhere.

If you’re not convinced, I totally understand. This is an incredibly complex issue, and we need more discussion on how to solve it. Universal Basic Income (UBI), taxes on automation, and other potential solutions have been proposed. We must come together and combine our understanding of economic outcomes, societal outcomes, and ethics to work out how best to approach this problem.

Wealth Distribution

In our discussion of job loss, we somewhat implied that while an individual may lose a job in one industry, there will always be an opportunity in another. This may not be the case. If AI displaces labor on a wide enough scale, the number of jobs created may be significantly outpaced by the number of jobs destroyed. This is not necessarily a bad thing.

With less need for labor comes more freedom for humans to spend time on activities they enjoy and activities that maximize good for society. In the current economy, there is a strong incentive for laborers to maximize their human capital, independent of whether that decision makes them happy. Many workers pursue careers they are not passionate about. Many loathe their jobs. And many may not mind their jobs but would really appreciate working 3–4 days per week instead of 5. All of this becomes available in a society where work is sufficiently automated.

But we must find ways to distribute wealth appropriately. In a society where automation dominates profits, wealth will flow upward to a smaller number of individuals. Those running the firms driving the economy will of course be the direct beneficiaries of this automation; they are, after all, the ones providing the goods or services. Do we just let the rest of society starve? No, of course not. Most would say some form of wealth distribution that provides displaced workers with a living wage is preferable. With the combined might of economists, ethicists, and other relevant parties, we can likely find distribution models that benefit both the unemployed individual and continued growth in industry.

Looking into the future, I see a reality where people are provided the proper incentives to pursue the passions they truly love. A more enlightened society where the wannabe philosopher actually pursues philosophy instead of picking a potentially more financially rewarding career seems like a good outcome. We can still reward inventors, creators, and laborers of all kinds with wealth when they create value for others, but we must avoid leaving behind the displaced in the process.

Data-Biasing

As engineers and enthusiasts in AI, we understand how heavily data-dependent these systems are. The quality of your model is usually a direct result of the quality and quantity of your data. From a basic perspective, I define good outcomes as those which increase productivity or happiness for the greatest number of people. You can imagine a myriad of situations in which classification problems go wrong because of bias in historical data. From an ethical perspective, I think we can all agree that systems which discriminate against individuals on the basis of race, gender, age, ethnicity, etc. are almost always not maximizing good.

Some bad outcomes:

  • Security systems trained to discriminate based on an individual’s race or gender rather than on their actions or movements
  • Facial recognition systems that lack a diverse training set, and so only reliably detect the groups they were trained on
  • Court systems (AI judges/juries) trained on past rulings that were biased against certain races
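
One practical first step toward catching outcomes like those listed above is simply measuring how a model’s decisions differ across groups. Below is a minimal, hypothetical sketch in Python: it compares positive-prediction rates between two demographic groups and flags a large gap. The data, group labels, and the 0.2 threshold are invented for illustration, not an established standard.

    from collections import defaultdict

    def positive_rates_by_group(groups, predictions):
        # Fraction of positive (favorable) predictions for each group.
        totals, positives = defaultdict(int), defaultdict(int)
        for group, pred in zip(groups, predictions):
            totals[group] += 1
            positives[group] += pred
        return {g: positives[g] / totals[g] for g in totals}

    # Hypothetical model outputs: 1 = favorable decision, 0 = unfavorable.
    groups      = ["A", "A", "A", "B", "B", "B", "B", "A"]
    predictions = [1,   1,   0,   0,   0,   1,   0,   1]

    rates = positive_rates_by_group(groups, predictions)
    gap = max(rates.values()) - min(rates.values())
    print(rates)                                 # {'A': 0.75, 'B': 0.25}
    print(f"demographic parity gap: {gap:.2f}")  # 0.50
    if gap > 0.2:  # arbitrary illustrative threshold
        print("warning: large disparity between groups; audit the data")

A gap like this does not prove wrongdoing on its own, but it tells you where to start looking in the training data.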

How to Avoid Bias?

Ultimately, the majority of these issues can be solved by human-centered approaches to acquiring, cleaning, labeling, and annotating data. But this can be especially difficult: our AI systems are in many ways mirrors of the people who train them. If someone whose beliefs are bad according to our moral system is at the helm of creating an AI, they are likely to acquire and train on data that coincides with those beliefs. In many cases, the market will handle this: discrimination will naturally make these products less competitive. However, this will not always be the case. We need to develop an approach for identifying AI systems that are not performing within our ethical framework and are producing net-bad outcomes for society.
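
As a sketch of what one of these human-centered data fixes can look like in practice, the hypothetical Python snippet below rebalances a skewed training set by weighting each example inversely to its group’s frequency, so under-represented groups carry proportionate influence during training. The group labels and the 90/10 split are assumptions made purely for illustration.

    from collections import Counter

    def balancing_weights(groups):
        # Weight each example inversely to its group's frequency so that
        # every group contributes the same total weight during training.
        counts = Counter(groups)
        total, n_groups = len(groups), len(counts)
        return [total / (n_groups * counts[g]) for g in groups]

    groups = ["A"] * 90 + ["B"] * 10      # a heavily skewed training set
    weights = balancing_weights(groups)
    print(round(weights[0], 3), weights[-1])   # ~0.556 for A, 5.0 for B
    # Each group now sums to half the total weight. Weights like these
    # can be passed to training routines that accept per-example weights
    # (e.g., the sample_weight argument that many scikit-learn
    # estimators' fit() methods accept).

Reweighting is only one option; collecting more representative data is usually the better fix when it is feasible.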

Military Usage of AI

We already see many uses of machine learning, robotics, and automation in current military technology. Drones that use image recognition to identify targets, soldier augmentations that provide enhanced abilities on the battlefield, and robots used for scouting or bomb defusal are all good examples. It’s interesting to note that (mostly) the same robot used for saving a human from the rubble of a collapsed building can also be used in a military setting to help eliminate the enemy.

Treaties and agreements to prevent the proliferation of military uses of AI can likely only go so far. We see the results of treaties for nuclear weapons: in some cases they are highly effective; in others they fall short.


We must examine as a society what the best outcomes are with regard to this issue. I think we can all agree that any form of war or loss of human life is a tragedy that should be avoided at all costs. Ensuring that military use of AI is limited to only the most necessary situations is of course incredibly complex and multi-dimensional. Unfortunately, building the framework, or even getting involved in this conversation at the levels necessary to have an impact, is incredibly difficult. We must ensure that at all levels of policy-making there are experts with deep knowledge in the relevant fields, able to apply their understanding of both AI and ethics to create outcomes that maximize good for humanity.

Final Thoughts

We must carefully consider all of our build decisions and consult the relevant experts when necessary. While it is always great to quickly rush out an MVP and iterate, we must be careful not to let decisions lacking ethical backing snowball out of control. I am certainly not an expert on ethics or philosophy, but I’m doing my best to learn more about ethics in the context of AI. There are many more ethical considerations than were mentioned in this article. On that note, here’s an incredible compilation of papers, videos, and general resources from some of AI ethics’ leaders: https://medium.com/@eirinimalliaraki/toward-ethical-transparent-and-fair-ai-ml-a-critical-reading-list-d950e70a70ea
