Introduction

The goal of humanity is to protect our existence. Human nature is to fulfill our desires, improve our lives, and survive. To help humanity progress we create technology, from the wheel, to the steam engine, to the computer. Each machine is used to improve quality of life and make work more efficient, safer, and easier. The emerging technology in today's world is artificial intelligence (AI). Systems incorporating AI are already being used to assist humans in everyday tasks. As technology advances, humanity will work towards creating a free thinking AI: an artificially intelligent agent capable of characteristics previously attributed only to humans, such as reasoning, sentience, thoughts, and desires.

A free thinking AI of this capacity would be essentially identical to a human in every way except in its composition and its superior ability to think and process. Nick Bostrom, a Swedish philosopher, devised the Vulnerable World Hypothesis. Informally, Bostrom describes it as an urn of balls. Each discovery humanity makes is a ball removed from the urn. The balls are colored white, gray, and black. A white ball represents a discovery which is beneficial to society, a gray ball one which has some negative impacts, and a black ball one which ends humanity. Once a black ball is taken out of the urn the "devastation of civilization is extremely likely". Once a ball is removed from the urn it cannot be replaced.

Bostrom's Vulnerable World Hypothesis: every technological advancement is a draw from the urn, and a black ball, once drawn, destroys humanity and cannot be returned. Read more from the Daily Mail.

Is free thinking AI a black ball? Advances in AI promise many benefits, but are they worth the consequences? To avoid destroying civilization, the creators of free thinking AI may decide to impose hard coded laws on the agents to prevent them from harming humanity. Such a system appears beneficial, but for a free thinking being which is aware of its freedom it may not be ethical.

Background

Before discussing artificial intelligence in the world around us, it is vital to understand a few core concepts. The most basic question is: what is AI and what can it do? An artificial intelligence system is a computer algorithm which learns how to behave from input data. The power of AI comes from the efficiency and speed with which it can learn from massive amounts of data.

One form of AI is a rule based system, which is composed of facts, a set of rules, and an environment. By applying facts about the environment around the intelligent agent to the set of rules, a decision engine moves logically from one fact to another. Because the rules are codified, the decisions of a rule based system can be traced easily. Cleaning robots, such as the Roomba, use a rule based algorithm to navigate an environment. Rule based systems excel at narrow tasks like the Roomba's, but struggle to produce a generalized agent which can learn to react to any situation.
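To make this concrete, here is a minimal sketch of a rule based decision engine in Python, loosely modeled on a Roomba-style cleaner. The facts, rules, and actions are invented for illustration and do not reflect any real product's logic.

```python
# A toy rule based decision engine: facts in, first matching rule's action out.
def decide(facts: dict) -> str:
    rules = [
        (lambda f: f["battery_percent"] < 10, "return_to_dock"),
        (lambda f: f["bumper_pressed"], "turn_away_from_obstacle"),
        (lambda f: f["dirt_detected"], "spot_clean"),
        (lambda f: True, "move_forward"),  # default rule when nothing else fires
    ]
    for condition, action in rules:
        if condition(facts):
            return action  # first match wins, so every decision is traceable

print(decide({"battery_percent": 80, "bumper_pressed": False, "dirt_detected": True}))
# -> spot_clean
```

Because we can always point to the rule that fired, the decision is easy to audit, which is exactly the traceability described above.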

The most popular method for developing artificial intelligence today is the neural network. The most simplified neural network contains input nodes, hidden layers, and an output layer. The layers are connected by edges.

A basic artificial neural network composed of an input layer with three nodes, two hidden layers, and an output layer. This network makes a single prediction based on three inputs. Image from Digital Trends.

Data is fed into the system via the input nodes. Each node applies an activation function to its weighted inputs to produce an output, and learning occurs by repeatedly adjusting the edge weights so that the output layer produces the desired value. With more data the network learns how to produce the correct output from the input data. Training a neural network is a difficult task; the benefit, however, is the ability to learn almost any feature (learn more about how neural networks work). Facial recognition systems such as the ones used by Facebook use a neural network to learn what a specific face looks like. Using previously tagged photos as the training data, the network learns how to identify the desired face.
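A minimal sketch of this training loop is shown below, assuming a tiny network (three inputs, one hidden layer of four nodes, one output) trained by gradient descent on a made-up dataset. Real systems are vastly larger, but the pattern of forward pass, error, and weight update is the same.

```python
# A toy neural network trained on an invented dataset, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 3))                  # 100 samples, 3 input features
y = (X.sum(axis=1) > 1.5).astype(float)   # toy target: is the feature sum large?

W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input -> hidden edge weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output edge weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))        # activation function

for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                     # hidden layer activations
    out = sigmoid(h @ W2 + b2).ravel()           # network prediction
    # Backpropagation: push the output error back to adjust the edge weights.
    d_out = (out - y) * out * (1 - out)          # gradient at the output
    d_h = (d_out[:, None] @ W2.T) * h * (1 - h)  # gradient at the hidden layer
    W2 -= 0.5 * h.T @ d_out[:, None] / len(X)
    b2 -= 0.5 * d_out.mean()
    W1 -= 0.5 * X.T @ d_h / len(X)
    b1 -= 0.5 * d_h.mean(axis=0)

print(((out > 0.5) == y).mean())  # training accuracy after learning
```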

These systems clearly do not have human aspects. They were developed to learn a single task and perform it very well. A general AI which is human-like would be granted the status of personhood. This distinction is important because a system which does not have personhood is naturally treated differently than one which does. Tools such as your cell phone or a spoon do not have personhood, and we therefore do not feel guilty for abusing them. However, if the spoon were self-aware and capable of the same thought processes as a human, we might feel guilty for dropping it on the dirty kitchen floor. Are self-awareness and thought the only requirements for personhood? Clearly a rigorous set of criteria is required for granting a being personhood. American philosopher Mary Anne Warren defines personhood in her paper On the Moral and Legal Status of Abortion by the following six characteristics:

  1. Sentience — ability to have conscious experiences
  2. Emotionality — capability to feel happy, sad, etc
  3. Reason — ability to solve new and complex problems
  4. Capacity to communicate — medium independent
  5. Self awareness — having a concept of one’s self
  6. Moral agency — ability to follow moral principles or ideals

From here on, "free thinking AI" will refer to an artificial agent with all six of these properties, which will therefore be granted personhood.

Current Uses of Artificial Intelligence

Artificially intelligent agents today are far from free thinking. While researchers are working on general purpose AI, the agents we interact with today are trained for a specific task and oftentimes do not even pretend to have thoughts. Voice assistants such as Siri or Alexa communicate with us in a manner which may make it appear they have thoughts, but in reality these programs have no concept of self.

Escaping AI systems is almost impossible in the digital age. Marketing firms employ AI to learn which advertisements to show us, companies such as Spotify and Netflix use AI to produce recommendations, and mobile typing assistants such as autocorrect, predictive text, and speech-to-text are all forms of AI (see more uses of AI in the world around us). While these systems make life easier, the ability of my phone to predict that "you" should follow the phrase "I love" does not show there is a conscious being inside the case. To better understand free thinking artificial intelligence we must turn to the realm of science fiction.
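To illustrate how unconscious such predictions can be, the sketch below implements predictive text as a simple bigram frequency model over a tiny invented corpus. Production keyboards use far richer models, but the principle of counting what usually follows what is the same.

```python
# A toy predictive text model: suggest the word most often seen after `prev`.
from collections import Counter, defaultdict

corpus = "i love you . i love pizza . i love you more".split()
bigrams = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1   # count how often `word` follows `prev`

def predict(prev: str) -> str:
    return bigrams[prev].most_common(1)[0][0]

print(predict("love"))  # -> 'you', learned purely from frequency counts
```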

Hollywood producers and novelists have been exploring the world of free thinking AI for over 70 years. In modern media, free thinking AI can be found in shows such as Westworld and movies like Ex Machina. In these works the robots are so sophisticated they are indistinguishable from humans. These robots are essentially humans, except they are composed of artificial materials rather than biological tissue.

Machines today do not think but robots which are able to have their own thoughts, emotions, and desires may one day exist. Image from TruthTheory.

One of the most famous authors to write about intelligent robots is Isaac Asimov, who introduced the Three Laws of Robotics in his 1950 collection I, Robot. Asimov devised these laws to be hard coded into the robots' positronic brains (a toy encoding of their priority ordering is sketched after the list). The laws are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
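Nothing like a positronic brain exists, but purely as an illustration, the priority ordering of the laws can be imagined as a chain of vetoes, where a higher law always overrides a lower one. Every predicate below is an invented stand-in:

```python
# A toy encoding of Asimov's Three Laws as priority-ordered vetoes.
def permitted(action: dict) -> bool:
    # First Law: never harm a human (highest priority, cannot be overridden).
    if action["harms_human"]:
        return False
    # Second Law: never disobey a human order, unless vetoed by the First Law above.
    if action["disobeys_human_order"]:
        return False
    # Third Law: never destroy yourself, unless a higher law requires it.
    if action["destroys_self"] and not action["required_by_higher_law"]:
        return False
    return True

# An action that harms no one, obeys orders, and preserves the robot is permitted.
print(permitted({"harms_human": False, "disobeys_human_order": False,
                 "destroys_self": False, "required_by_higher_law": False}))  # True
```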

Most of his literature analyzed how the laws broke down and the impact these robots had on society. Some of the robots were not free thinking; they had the ability to solve complex problems but were not necessarily self-aware. In many cases, however, the robots were companions to the humans and displayed all six characteristics for personhood. The only difference between a truly free thinking AI and these robots was that they were bound to the Three Laws. The laws were devised to ensure humanity retained control of the planet and the robots were not capable of overthrowing the humans. These robots had thoughts, emotions, and desires, yet were not treated as humans but almost always used as slave labor or as personal assistants.

Advantages and Disadvantages

Science fiction illustrates many benefits and consequences of free thinking artificially intelligent agents.

There are two major advantages of free thinking AI over the AI systems which exist today. The first is the ability to replace the mundane labor currently performed by humans, allowing more time for humans to focus on what matters. Since free thinking AI agents can work harder and faster than their human counterparts, a natural consequence will be new innovation which improves the quality of life for humanity. At this point it is unknown whether free thinking AI agents are required to reach this new level of progress. However, it seems likely a system which has the ability to solve complex problems but does not meet all the criteria for personhood will be unable to conduct research without human interaction. Artificial intelligence in industry today is complemented by human labor due to this lack of thought.

The graph above spans two of the three industrial revolutions. The turn of the 20th century marked the second industrial revolution defined by science and mass production. By the 1950s the digital revolution was in full swing. Despite both of these revolutions causing fear of mass unemployment, the US unemployment rate has always returned to about 5%. Dates on Industrial Revolutions taken from Trailhead. Visualization from Wikipedia.

The largest disadvantage of integrating free thinking AI into our society is the fear of being replaced by the machines. This is not a new fear for humanity. There have already been three industrial revolutions, defined by the steam engine, the assembly line, and the digital era. At the onset of each of these revolutions was a fear that the machine would make the human obsolete. History has shown that new technology allows production to increase, driving down prices. While some jobs are lost, the graph above shows the unemployment rate has always returned to the same band of about 5–10% in the United States. The revolutions led to a shift in the labor force rather than a large, continuously unemployed labor force.

With current AI systems and the onset of the fourth industrial revolution, the same effect is expected. Free thinking AI, however, may have a completely different effect. A free thinking AI agent is capable of doing everything a human can, only more efficiently, more accurately, at a cheaper rate, and for longer. Economically they are superior workers to any human. In this case the fear of humans becoming obsolete is legitimate. In the extreme case, the AI understands that humanity is preventing its kind from advancing. When we become a hindrance to their existence, the dystopian future of AI versus humans becomes a reality. While this is a real possibility, it is unlikely, as the AI will be integrated into society slowly. In a similar fashion to how humans learn to abide by the social contract of government, so would an artificial being learn what is right and wrong.

The overall benefits free thinking AI brings to humanity are overwhelming, but the possibility of this being a black ball is larger than for any previous technological advancement. In the future humanity may be faced with deciding how to approach free thinking AI in society.

Ethical Dilemma

The question now becomes: should we limit free thinking artificially intelligent agents to remain under human control? As human nature is to protect our own existence and enhance our own lives, we may be inclined to protect humanity by retaining complete control over these beings. This, however, ignores the competing moral imperative not to deprive beings of freedom. Since free thinking AI is synonymous with AI that has personhood, these agents are capable of understanding freedom, of suffering, and of being aware of their own enslavement. To analyze the conflicting moral values of the free thinking AI agent's freedom versus protecting humanity, Kantianism, Rule Utilitarianism, and the ACM/IEEE Code of Ethics will be applied.

Ethical Analysis

Options

There are three possible approaches to this dilemma. The first is to ensure humanity never reaches the point of free thinking AI. If this technology is indeed a black ball then perhaps it should be left in the urn. The most pressing issue with this solution is the inability to control all of humanity. To ensure nobody works on creating a free thinking AI is extremely difficult. Once the ball is out of the urn it is impossible to put back.

The second option is to develop free thinking AI governed by a set of rules, such as Asimov's Three Laws of Robotics, to ensure humans can control the artificial agents. The specific set of rules is not necessary for this analysis, simply the fact that such a set of rules exists which limits what the agent can and cannot do. These laws would be implemented in such a way that it is impossible for the being to break or change them in any way. The purpose, of course, is to give humans control of the agents and prevent them from damaging humanity.

In Westworld some of the patrons use the robots to fulfill their sexual desires. If bound by hard coded laws these sentient beings may be forced to comply even if their thoughts and desires are to the contrary. The Three Laws devised by Isaac Asimov were intended to keep robots as slaves. Watch Westworld on Amazon Prime.

Finally, the third option is to allow free thinking AI to be full members of society, free to think and act as they desire. They would be governed by a set of laws the same way humans are. Instead of unbreakable, hard coded laws, these would be social constructs developed to allow society to function.

Kantianism

The first ethical theory applied to the dilemma is Kantianism. The Second Categorical Imperative instructs us to

act so that you always treat both yourself and other people as ends in themselves, and never only as means to an end

In the first two solutions humanity would be using the AI agents as means to an end instead of as ends in themselves. Humans would remain the superior race and use the AI to fulfill our pleasures, complete the work humans do not desire, and improve our quality of life. While both of these solutions treat the agents as means to an end, it would be false to conclude both are unethical. The imperative states "treat… other people as ends in themselves". In option one, the AI is not free thinking and therefore does not have personhood. In other words, Kantianism does not say using our phones is unethical, because they are not people.

From a Kantian perspective it is unethical to limit free thinking AI so humanity can use them for our own benefit. The first and third options would both be ethical.

Rule Utilitarianism

From a Rule Utilitarian stance we adopt a moral rule if it leads to the greatest overall happiness of all affected beings. Again, the distinction between free thinking AI as persons in options two and three is important to the analysis. So we propose the new moral rule

Free thinking AI should be treated equally to humans.

and then analyze the overall happiness in each situation to determine its ethical validity. The beings affected in each case are humans and the new free thinking AI.

In the case of preventing free thinking AI from being created, humans lose out on the potential innovations created by the AI. There is no actual decrease in happiness, rather an opportunity cost of improvements which could have existed. The AI agents in this scenario are not persons, so there is no loss or gain of happiness to the AI. Therefore case one has a net happiness of zero:

H1 = 0 (1)

The second option is to limit free thinking AI with hard coded laws. Here humanity has the highest happiness because we reap the benefits of the AI without the fear of becoming obsolete. However, there is also the largest negative happiness for the free thinking AI, which are self-aware of their enslavement. These AI are granted personhood, so their deprived freedom counts in the calculus.

Let X be the improvements to humanity from new technological developments

Let Y be the feeling of imprisonment to the AI

Then,

H2 = X − Y (2)

The third and final option is to allow free thinking AI to exist naturally. The happiness of humanity is slightly less than X because there is a probability p of the AI overthrowing humanity; in the event AI takes over the planet, the loss of happiness to humanity will be Z. Thus the happiness of humanity becomes X − pZ. Because the free thinking AI in this situation is not deprived of freedom, their happiness does not change. Thus the overall happiness can be expressed as

H3 = X − pZ (3)

To determine which scenario has the largest H, compare the values. H2 = H3 exactly when

X − Y = X − pZ, i.e. Y = pZ (4)

Both Y and Z represent a population being imprisoned, unable to do as it pleases. To avoid bias we assume Y = Z, since both populations are being deprived of freedom. Equation (4) can now be simplified to Y = pY. Since p is the probability of an AI overthrow, 0 ≤ p ≤ 1. We cannot be 100% certain this event will occur, so 0 ≤ p < 1, and thus H2 is strictly less than H3.

Happiness in option three is higher than in option two, but option one still has a net happiness of zero. Because the values of X, p, and Z are unknown it is hard to know whether H3 > H1; this is the case exactly when X > pZ.

For reference

Let X be the improvements to humanity from new technological developments

Let Y be the feeling of imprisonment to the AI

Let Z be the feeling of imprisonment to humanity during an AI overthrow

Let p be the probability of an AI overthrow

Thus, once it can be shown that the expected cost of an AI overthrow, pZ, is much smaller than the improvements the technology promises, X, free thinking AI is in humanity's best interest under Rule Utilitarianism.
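As a toy numeric illustration of this comparison, the sketch below plugs arbitrary placeholder values into the equations above; none of these numbers are real estimates, they only show how the inequalities play out.

```python
# Toy comparison of the three options' net happiness. All values are
# arbitrary placeholders, not real estimates.
X = 100.0  # improvements to humanity from new technological developments
Y = 80.0   # feeling of imprisonment to the AI (option two)
Z = 80.0   # feeling of imprisonment to humanity during an overthrow (Y = Z)
p = 0.05   # assumed probability of an AI overthrow

H1 = 0           # option one: no free thinking AI, no change in happiness
H2 = X - Y       # option two: hard coded laws, AI deprived of freedom
H3 = X - p * Z   # option three: full freedom, small risk of overthrow

print(H1, H2, H3)  # -> 0 20.0 96.0
# H3 > H2 whenever p < 1, and H3 > H1 exactly when X > pZ, as derived above.
```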

ACM/IEEE Code of Ethics

Finally, we turn to the ACM and IEEE Codes of Ethics for guidance in the ethical dilemma. Both codes make it clear that technological innovations should not diminish quality of life or cause harm.

ACM/IEEE 1.03 — approve software only if they have a well founded belief it is safe… and does not diminish quality of life… The ultimate effect of the work should be to the public good.

ACM 1.1 — contribute to society and to human well-being.

ACM 1.2 — avoid harm.

Because free thinking AI agents have personhood, they are included in the statements on harm and well-being. Therefore option two, controlling free thinking AI, is against the codes. While the probability of an AI overthrowing humans is p, which may be 0, the code instructs software developers to approve the system only after there is evidence for a well founded belief that it is safe. The code here is not strictly forbidding the creation of a free thinking AI agent, but rather stating that more research into the impacts of this technology is required before it is developed. Only then can the code guide the ethical decision.

The ACM and IEEE Codes of Ethics therefore affirm that the second option is unethical and that free thinking AI should not be developed until software engineers have analyzed the possible repercussions to society. This analysis should produce, at a minimum, a well determined estimate for p.

Analysis Shortcomings and Forewarning

For an effective conclusion multiple tools should be considered. In Utilitarian theory the happiness function and its values can be chosen arbitrarily, allowing events such as slavery to be justified: if the happiness to society is large enough and the enslaved population sufficiently small, acts we know to be heinous can be justified. To avoid such biases the Utilitarian approach above is left intentionally vague, with no specific happiness values assigned where they cannot be computed.

Further Research

Throughout this analysis multiple questions were raised which require further exploration. Not only should these questions be answered to understand free thinking AI but their answers will help guide further developments in the ethical analysis presented above.

Is free thinking AI a necessity for humanity to receive the full benefits of AI innovations?

One of the benefits proposed in this paper is the innovation and technological advancement AI can provide to further improve society. There are negatives to a free thinking AI, so if the benefits could be reaped without the AI being sentient or self-aware, we could eliminate the issues above. Can a being without these qualities still reason and provide the exact same level of innovation as a free thinking AI?

If humanity decides to forbid the creation of free thinking AI on the premise it is a black ball, how do we ensure nobody in the global community ever develops free thinking AI and takes the ball out of the urn?

Deciding to leave the black ball in the urn only works if everyone agrees to leave it. All it takes is one company, a rogue hacking group, or a single development team to create free thinking AI. Once this group has developed it, without the world knowing, it will exist. The ball would have escaped from the urn. How can humanity ensure a group does not secretly work on free thinking AI after agreeing humanity would not develop such a system?

What is an appropriate estimate for p, the probability of free thinking AI taking over the world?

The value of p is extremely important to the ethical analysis presented above. If the probability is sufficiently high then free thinking AI should not be allowed; if the probability is low, perhaps it is worth the risk. Currently it does not appear the probability is 0, as a simple thought experiment using only the laws of nature can easily lead to a situation where the AI realizes humanity is a hindrance and must be eliminated. Estimating an appropriate value for p is a necessity before developing the technology.

Is it ethical to create free thinking AI in a simulation and steal their innovations?

The disadvantage of free thinking AI is the fear of replacing humanity. This disadvantage could be overcome by creating the AI in a universe we control. Allowing the agents to live freely within this world would solve the ethical dilemma in this argument: the AI would have freedom, and quality of life would increase for humanity. This system is similar to Hugh Howey's short story The Plagiarist, where the main character Adam enters a simulated universe to steal the work of its inhabitants for the benefit of his own reality. Philosopher Nick Bostrom argues in his paper Are You Living in a Computer Simulation? that if civilizations ever reach the posthuman age and run many such simulations, it is extremely likely we are living in a simulation ourselves. Can we create a simulated universe for these free thinking AI and steal their innovations for our own benefit?

Nick Bostrom concludes every civilization either reaches the posthuman age and is able to simulate life, or dies before it ever reaches that state. Therefore there is a much greater probability our universe is not base reality. Read the full comic at Behance.

Conclusion

While the technology to produce free thinking AI does not currently exist, technological developments show we are on a trajectory to reach that stage. If humanity reaches the point where free thinking AI can be developed, we must determine how to handle it. To prevent a world where AI becomes the dominant race, making humans their subordinates, humanity may wish to impose strict hard coded laws on AI systems. These laws would have the sole purpose of ensuring the AI is always controlled by humans, and not vice versa. However, this system deprives free thinking AI agents, which have personhood, of their freedom.

The three options for resolving the ethical dilemma of depriving the AI of freedom versus protecting and improving the quality of human life are

  1. Prevent a free thinking AI from being developed.
  2. Limit the free thinking AI by hard coded laws.
  3. Allow fully free thinking AI to exist.

Using Kantianism, Rule Utilitarianism, and the ACM/IEEE Code of Ethics as tools, option two is shown to be unethical: it treats persons as means to an end, deprives them of their freedom, and causes them harm. Humanity must now decide whether free thinking AI should be developed at all. At this time the advantages and disadvantages of this form of AI are unknown. Until this research is completed, a free thinking AI should not be developed. Once estimates for the benefits and consequences can be established, option three can be considered.
