Blog: Machines Smarter Than Us: The Final Frontier?
Human beings have evolved into one of the most intelligent species, if not the most intelligent, known to inhabit the Solar System. We have gone from first discovering fire to planning and executing a manned mission to the Moon and back. Now we face one of our greatest challenges: teaching the machines and programs we have built to solve problems and achieve goals as well as we do. That is, developing Artificial Intelligence.
I firmly believe that Artificial Intelligence is our final frontier. Once we have created machines that can think on level terms with us, we will be able to open doors to opportunities and discoveries we never even knew existed. However, potentially harmful effects can arise from creating machines with the ability to think and make decisions for themselves. How intelligent machines will truly affect our lives remains to be seen. It is important to ask whether we should wholeheartedly embrace the development and advancement of Artificial Intelligence or be concerned about the path we have chosen to follow.
The term ‘Artificial Intelligence’ was coined by John McCarthy, a professor emeritus of Computer Science at Stanford University, in the mid-1950s (Andrew Myers). However, research on the topic started as early as the years after World War II, when a number of people independently began working on intelligent machines. The English mathematician Alan Turing can be considered one of the first to do so, giving a lecture on intelligent machinery as early as 1947. Turing was also among the first to conclude that Artificial Intelligence was best researched by programming computers and software rather than by developing machines and other hardware. By the late 1950s, there were several researchers in the field of Artificial Intelligence, and almost all of them had based their research on programming computers (“What is AI? / Basic Questions”).
Today, Artificial Intelligence is no longer some fantastical, avant-garde concept. It is a technological reality, and one we must be ready for as it permeates our lives. Artificial Intelligence is expected to revolutionize the global economy and international trade. Businesses that have adopted the technology have begun using it to perform or augment tasks traditionally performed by humans. Examples include replacing repetitive tasks such as data entry with robotic process automation, using Natural Language Processing (NLP) to analyze and extract meaning from human speech, and using Machine Learning to process, analyze, and act on terabytes of data (MIT Technology Review Insights). Artificial Intelligence (henceforth referred to as AI) is set to usher in radical and unprecedented changes in the way people live and work.
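To make the NLP example above concrete, here is a deliberately minimal sketch of keyword-based sentiment scoring in Python. The word lists and the scoring rule are illustrative assumptions on my part; production NLP systems rely on statistical models trained on large corpora, not hand-written keyword lists.

```python
# Toy sentiment scorer: a stand-in for the far more sophisticated
# NLP models businesses actually deploy. Word lists are illustrative only.
POSITIVE = {"great", "excellent", "fast", "reliable", "love"}
NEGATIVE = {"slow", "broken", "poor", "hate", "late"}

def sentiment(text: str) -> str:
    """Classify text as 'positive', 'negative', or 'neutral' by keyword counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("the delivery was fast and reliable"))  # → positive
```

Even this crude rule hints at why businesses automate the task: scoring millions of customer messages by hand would be impossible, while a model does it in seconds.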
According to a report by the McKinsey Global Institute, 70% of all companies will have adopted some form of AI technology by 2030. AI has the potential to deliver additional global economic activity worth around $13 trillion by 2030, about 16% higher than today's cumulative Gross Domestic Product (Bughin, Jacques, et al.). This productivity growth driven by the adoption of AI technologies is shaped by a number of factors, including labour automation, innovation and disruption, and new competition. However, although AI is expected to greatly boost global economic activity, it could widen the gaps between developed and developing countries, between small and large companies, and among workers themselves. That is, the gain in overall economic productivity could be severely uneven. Leaders in AI adoption, mostly located in developed countries, could increase their lead over developing countries by capturing an additional 20 to 25 percent in net economic benefits compared with today, while developing countries might capture only 5 to 15 percent (Bughin, Jacques, et al.). Moreover, average wage rates in developed countries tend to be high, so there is more incentive to substitute labour with machines than in developing countries, where average wages tend to be lower (Bughin, Jacques, et al.). On a similar note, developing countries may lack the incentive to adopt AI technologies because they tend to have several other ways to improve their productivity. Finally, a widening gap could unfold at the level of the individual workers employed by large companies. As touched upon earlier, most large companies are implementing AI technologies to make redundant or repetitive tasks more efficient.
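As a quick sanity check on those two figures, $13 trillion of additional activity against world GDP of roughly $80 trillion (an approximate late-2010s value, used here as an assumption rather than a number from the report) does indeed come out to about 16 percent:

```python
# Check that the report's two headline numbers are mutually consistent:
# $13 trillion of additional activity relative to world GDP of roughly
# $80 trillion (assumed approximate figure) is about 16 percent.
additional_activity = 13e12   # from the McKinsey report
world_gdp = 80e12             # assumed approximate world GDP

share = additional_activity / world_gdp
print(f"{share:.0%}")  # → 16%
```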
Thus, the demand for jobs could shift away from these repetitive tasks towards jobs that are socially and cognitively driven and require more digital and technological skills. Jobs that require little to no digital skill, or that are characterized by repetitive activities, could experience as much as a 10 percent drop in their share of total employment by 2030. At the same time, jobs characterized by non-repetitive activities or requiring strong digital skills could gain over 10 percent in their share of total employment (Bughin, Jacques, et al.). This transition could severely impact workers' wages, as simple economics explains: as demand for low-skilled jobs falls, the average wage for those jobs falls with it, while rising demand for high-skilled jobs pushes their average wage up. Thus, the large-scale adoption of AI technologies could widen the wage gap between workers in the coming years.
In terms of global economic productivity and growth, then, AI technologies have the potential to greatly improve the efficiency of businesses. As discussed above, their adoption could contribute around $13 trillion to global economic activity. However, this comes at the cost of severe disruptions to employment and wage levels, the economies of developing countries, and small businesses that are just entering the market. I therefore believe that careful planning and execution of AI adoption at a global level is needed to minimize the negative effects on the global economy while maximizing the boosts to economic productivity and growth.
Moving on, the adoption of AI technologies can affect international trade between countries in various ways. For instance, AI technologies such as Machine Learning are being used to better manage risks in company supply chains and to make improved predictions of future trends, such as changes in consumer demand (Meltzer, Joshua P). Furthermore, improvements in inventory and warehouse management, as well as in shipping and logistics, can greatly improve the ability of companies to deliver their products to their customers. AI can also strengthen trade negotiations: the economic trajectories of each negotiating partner under different assumptions, such as outcomes contingent on the negotiations, how those outcomes change in a multiplayer scenario, and the predicted responses of countries not taking part, can all be better analyzed using AI (Meltzer, Joshua P). However, although AI can make significant contributions to international trade and negotiations, certain causes for concern must be addressed relatively soon to ensure that these contributions do not come at the expense of customers and their privacy. As discussed above, a central use of Machine Learning is to make improved predictions of future economic trends such as consumer demand. These predictions are made by analyzing and interpreting current and historical consumer data and purchasing patterns. Having access to a consumer's purchasing patterns can be seen as a breach of his or her privacy. Thus, careful laws and regulations must be enacted to ensure that companies collecting and using this data do not use it with harmful intent. Strong privacy protections will be required if people are to be willing to trust their lives online, including providing significant amounts of personal data for the purpose of AI learning.
The key challenge here is designing privacy rules that do not create unnecessary restrictions on access to and use of data (Meltzer, Joshua P). That way, companies can collect sufficient data to make accurate predictions while the customers providing this data do not suffer a breach of their privacy. In addition, the incorporation of AI technologies in various industries will almost certainly require the development of a new range of safety and privacy standards. For example, once AI is developed to the point that autonomous vehicles can be used both personally and for international shipping and cross-country deliveries, a new range of technical, safety, and vehicle-manufacturing standards must be developed (Meltzer, Joshua P). Finally, the development of AI technologies will raise a new set of intellectual property issues with international trade implications. As noted, AI requires the use of large sets of input data, and it is possible that this data will often need to be copied and edited for further use. If not inspected carefully, this could result in the unauthorized copying of thousands of protected works (Meltzer, Joshua P).
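The demand prediction discussed above can be sketched in miniature: fit a trend line to past sales and extrapolate one step ahead. The data and the method here are illustrative assumptions; real forecasting systems use far richer models and vastly more data.

```python
# Toy demand forecast: fit a least-squares straight line to
# hypothetical monthly sales and extrapolate to the next month.
# A stand-in for the machine learning forecasts described above.
def forecast_next(sales: list[float]) -> float:
    """Least-squares linear trend, evaluated at the next time step."""
    n = len(sales)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(sales) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, sales))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * n  # trend value at the next step

print(forecast_next([100, 110, 120, 130]))  # → 140.0
```

Note that even this trivial model only works because it can see a customer's full purchase history, which is exactly the privacy tension described above.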
Therefore, as with its impact on the global economy, AI has the potential to revolutionize international trade. Improved predictions of consumer patterns and demand, better inventory and warehouse management, and smoother shipping and logistics can result in a vastly improved supply chain for trading goods and services. However, most technologies in this field require large amounts of consumer data, which raises several privacy issues. If these are not dealt with appropriately, the privacy of tens of thousands of customers could be breached, and private consumer data could be acquired and used with the intent to harm.
I feel that the benefits gained from implementing Artificial Intelligence on a wide scale in the corporate world greatly outweigh the potential downsides. This is because the issues and concerns that accompany the adoption of AI technologies can be mitigated if addressed at the right time, and I am confident that our governing bodies will take the steps needed to ensure a smooth implementation of AI across all industries. However, one aspect of Artificial Intelligence yet to be discussed, and one that is almost impossible to answer definitively, is the set of ethical and moral issues that arise when we develop machines that simulate, or rather possess, intelligence on par with our own.
Imagine a scenario where a knowledgeable observer interacts with a human and a machine through teletype. The human tries to persuade the observer that he or she is indeed human, while the machine tries to fool the observer into thinking that it is human. If the machine can successfully pretend to be a human and fool the knowledgeable observer, the machine is considered intelligent. This Turing Test for machine intelligence was devised by none other than Alan Turing (“What is AI? / Basic Questions”).
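The structure of the test can be sketched as a simple protocol. Everything below is an illustrative assumption on my part: the canned responders stand in for a real human typist and a real candidate AI, and the judge is just a function returning a label.

```python
import random

# Structural sketch of Turing's imitation game. The contestants are
# trivial stand-ins; a real test would pit a human typist against a
# candidate AI system, with a knowledgeable observer as the judge.
def human_answer(question: str) -> str:
    return "I'd say it depends on the weather, honestly."

def machine_answer(question: str) -> str:
    return "I would guess it depends on the circumstances."

def run_round(question: str, judge) -> bool:
    """Hide the contestants behind random labels 'A' and 'B'.
    The machine fools the judge if the judge's guess at the machine's
    label is wrong; return True in that case."""
    labels = {"A": machine_answer, "B": human_answer}
    if random.random() < 0.5:
        labels = {"A": human_answer, "B": machine_answer}
    answers = {label: fn(question) for label, fn in labels.items()}
    guess = judge(question, answers)  # judge returns "A" or "B"
    machine_label = next(l for l, fn in labels.items() if fn is machine_answer)
    return guess != machine_label
```

The point of the random labelling is that a judge who cannot tell the contestants apart is reduced to guessing, so the machine escapes detection about half the time, which is exactly the passing criterion Turing proposed.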
Currently, it is widely agreed that present-day AI systems do not possess any moral status. Humans may delete, terminate, copy, or use computer programs as they please. The moral constraints we are subject to are all grounded in our responsibilities to other beings, such as our fellow humans, not in any responsibilities to the systems themselves (Bostrom, Nick, et al., 6).
However, while it is fairly widely agreed that present-day AI systems have no moral status, it is still unclear exactly which attributes ground moral status. Sentience and sapience are two criteria commonly proposed as being importantly linked with moral status, either separately or in combination. They are defined as follows (Bostrom, Nick, et al., 6):
Sentience: the capacity for phenomenal experience or qualia, such as the capacity to feel pain and suffer.
Sapience: a set of capacities associated with higher intelligence, such as self-awareness and being a reason-responsive agent.
These definitions lead to the idea that an AI system has some moral status if it can feel pain. We know that it is morally wrong to inflict pain on others, and if a system feels pain, then it is our moral duty not to harm that sentient system. Now consider a machine that manages to fool a knowledgeable observer into thinking it is human, that is, a machine that has passed the Turing Test of machine intelligence. Such an AI system would have achieved sapience and would hold full moral status.
This moral assessment leads us to the Principle of Ontogeny Non-Discrimination: “If two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status” (Bostrom, Nick, et al., 8). Notice that this principle already applies among humans. Historically, people have been discriminated against on the basis of caste, bloodline, or religion (that is, how they came into existence). Today, we hold that everyone has the same moral status. So, if we were to apply this same principle of non-discrimination with regard to ontogeny, then many of the questions that may arise with sapient or sentient machines could be answered by applying the moral principles we already use in society. That is, if we were to treat an AI machine the same way we would treat another human in a given situation, the problem of developing ethics and rules for intelligent machines would become much simpler.
Even if society accepts this approach to the ethics of intelligent machines, however, a new set of questions may arise. This is because intelligent machines can have very different properties from ordinary human or animal minds. It is important that we consider how these properties would affect the moral status of artificial minds and what it would mean to respect that status (Bostrom, Nick, et al., 9).
Humans often talk about the potential capabilities of self-aware machines and the miracles they could achieve using them. However, we tend to forget that with intelligent machines comes another factor: motive. This is best explained by the Fallacy of the Giant Cheesecake: it is possible for a superintelligence to build cheesecakes the size of cities, but would this superintelligence want to build these giant cheesecakes (Yudkowsky, Eliezer, 9)?
Eliezer Yudkowsky gives the following chains of reasoning, all exhibiting this Fallacy of the Giant Cheesecake (Yudkowsky, Eliezer, 9):
· An unfriendly Artificial Intelligence with sufficient power could wipe out any human resistance and lead to the extinction of mankind (and the AI would decide to do so). Thus, we should stop the development of AI.
· A friendly Artificial Intelligence with sufficient power could develop miraculous medical technologies that are capable of saving billions of human lives (and the AI would decide to do so). Thus, we should continue the development of AI.
Thus, through the Fallacy of the Giant Cheesecake, the notion of friendly and unfriendly AIs arises. That is, even if we were able to develop intelligent machines and use the Principle of Ontogeny Non-Discrimination to smooth out the ethical and moral considerations around them, we may still reach a point where we can no longer control the very motives of the machines themselves. It cannot be ruled out that an intelligent machine may decide our ethics and morals are flawed, abandon those ideals, and generate its own. At that point, the machine may well possess the capability to rewrite its own source code to realize the new ideals it has conceived.
Although this is speculation and these concepts remain uncharted waters for us, we must ensure that the development of AI continues at a controlled and sustainable pace so that no unintended consequences arise. I believe that, given the direction we are headed in, it will not be long before our ethics and morals are thrown into uncertainty when dealing with machines that are self-aware. Society's ideals may clash with what a machine decides is right, and an AI once regarded as friendly may choose to rewrite its programming to align with its own beliefs rather than the flawed beliefs of society.
In conclusion, Artificial Intelligence, once laughed at and thought impossible, has begun to revolutionize how the world functions. Businesses are using these technologies to greatly improve their global supply chains and overall economic productivity, while governments and other large organizations are using Artificial Intelligence to enhance trade negotiations. However, these benefits come at the cost of severe disruptions to employment and wage levels, the economies of developing countries, and small businesses just entering the market, along with serious questions about customer data privacy. As the corporate world continues to adopt Artificial Intelligence, we must also begin thinking about the ethical and moral considerations that will arise when we finally develop machines that are sentient and sapient. Machines built to take us to Pluto and beyond could very well turn on us; if we are not careful about how we treat these machines and respect their ideologies, we could chart a path to our own destruction. Taking all these factors into account, I still firmly believe that Artificial Intelligence is our final frontier. It is my hope that humans will treat intelligent machines the same way they treat other beings, for if we successfully integrate these machines into society, the probability of their turning into unfriendly AIs drops drastically. So I end this discussion by saying that we should continue to innovate and develop Artificial Intelligence while carefully monitoring its growth to ensure that development occurs in a predictable, controlled, and sustainable manner.
Bostrom, Nick, et al. “The Ethics of Artificial Intelligence.” Machine Intelligence Research Institute, 2011, https://intelligence.org/files/EthicsofAI.pdf.
Bughin, Jacques, et al. “Notes from the AI Frontier: Modeling the Impact of AI on the World Economy.” McKinsey & Company, Sept. 2018, www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy.
Meltzer, Joshua P. “The Impact of Artificial Intelligence on International Trade.” Brookings, 13 Dec. 2018, www.brookings.edu/research/the-impact-of-artificial-intelligence-on-international-trade/.
MIT Technology Review Insights. “The State of Artificial Intelligence.” MIT Technology Review, 25 Feb. 2019, www.technologyreview.com/s/612663/the-state-of-artificial-intelligence/.
Myers, Andrew. “Stanford’s John McCarthy, Seminal Figure of Artificial Intelligence, Dies at 84.” Stanford University, 25 Oct. 2011, news.stanford.edu/news/2011/october/john-mccarthy-obit-102511.html.
“What is AI? / Basic Questions.” Stanford University, http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html.
Yudkowsky, Eliezer. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” Machine Intelligence Research Institute, 2008, http://intelligence.org/files/AIPosNegFactor.pdf.