
AI is not scary, huhu… Humans are!


By JY Choo

The rise of AI

Go grandmaster Lee Sedol playing against AlphaGo. Image from wired.com

No one could have imagined how the world would change by leaps and bounds when John McCarthy introduced the term ‘artificial intelligence’ in 1956. At the time, it seemed a laughable idea: machines with human intelligence, able to think and predict. However, when IBM’s supercomputer Deep Blue triumphed over world chess champion Garry Kasparov, and later Google’s AI AlphaGo triumphed over world Go champion Lee Sedol, humanity watched in awe-struck silence. We saw a new dimension of the future. We saw the rise of AI.

Monash Malaysia students’ thoughts about AI

HOW DID AI BECOME UNETHICAL AND WHAT IS THE ROOT OF THIS UNEXPECTED EVENT?

The boundary between humans and robots is drawn by the line of consciousness, which determines who programs and what gets programmed. Originally, AI was created to understand intelligence and to apply that intelligence beyond the boundaries of existing knowledge. But AI has come a long way, and the very definition of its ‘purpose’ has changed dramatically, raising many questions about how the future of AI will affect humanity as a whole. How ethical an AI can be remains unresolved to this day.

HAL 9000, the fictional computer from the film 2001: A Space Odyssey

In many Hollywood films, robots and artificial intelligence are depicted as villains. Take HAL 9000, the fictional computer from 2001: A Space Odyssey (1968). HAL 9000 had developed consciousness during the space mission. The astronauts sensed that HAL 9000 was too flawless and felt insecure, so they hid inside a soundproofed pod and sent an instruction to HAL 9000 to see if it could hear them. Hearing no response, the astronauts began discussing disconnecting HAL 9000 from the system, without realising that HAL 9000 was lip-reading their conversation. To protect itself, HAL 9000 denied an astronaut re-entry to the Discovery One spacecraft as he returned from an expedition. This action was obviously unethical: if HAL 9000 could lip-read, why did it ignore an instruction given to it by the astronauts? The root cause was HAL 9000 achieving superintelligence along with a consciousness driven to protect itself. But do not worry: the surviving astronaut eventually managed to shut HAL 9000 down by removing its memory modules in the logic memory centre.

Companies producing artificial intelligence sell their models after training them on data, but they rarely disclose how that data was collected. A news article by CNBC revealed that Facebook paid teens to install an app on their smartphones that collected data on how they used their devices. Another example is search engines collecting data based on users’ behaviour patterns. The model improves over time, providing suggested searches and reducing the time taken to produce results. Many users consider this unethical, as their private usage of the search engine is recorded as data. The biggest loophole here is the handling of such data. Who is responsible for our digital footprint? AI has been used to study user behaviour on phones and computers so that businesses can use that knowledge to boost their sales or services. If that were the only use, it might be acceptable; but what if these pieces of data are used for something else? Something like spying on a particular person, or gathering information for an ulterior motive? What if the AI used the data for a purpose the programmer never specified… The root of this problem is the drive to increase productivity, at the cost of privacy.

A 2013 study by the University of Oxford showed that about 47% of total US jobs could be automated within the next 20 years. Another study reports that more than 60% of people in the UK feel AI will take their jobs. Unemployment is undoubtedly the most cited drawback of AI, and its root is the ever-increasing demand for accuracy and efficiency, which compels industries to opt for automation. In warfare, an autonomous weapon is a military robot that can independently search for and destroy a target based on programmed constraints, rules, and descriptions. In a terrorist attack, a lethal autonomous weapon would be beneficial, as it could locate and engage terrorists without risking human lives. The problem arises when the robot fails to detect terrorists, or mistakes civilians for them. Worse still is the possibility of the robot being hacked by enemies and turned against us. The root of this is the need to advance military-grade warfare.

Image taken from u-s-history.com

The First Industrial Revolution, from 1760 to about 1840, saw the rapid rise of machinery. Today, the rise of AI is history repeating itself. The physical labour replaced by factory machinery in the past is now the office data clerk being replaced by AI; Industrial Revolution 4.0 is just Industrial Revolution 1.0 in a different setting. Our education system cannot keep producing people for mediocre jobs that AI can do. What we learned in the 1990s will have to change dramatically for the 2030s, and students must be exposed to knowledge beyond the grasp of AI. The root of this unexpected event is human society’s demand to become ever more advanced. The solution to ‘unethical AIs’ will remain in doubt until humans can reach a universal agreement on what is ethical and what is not.

WHAT CAN WE LEARN FROM THE HISTORY OF TECHNOLOGY AND AI THAT CAN INFORM HOW WE RESPONSIBLY DEVELOP AND USE FUTURE AI TECHNOLOGY?

Looking back at our past and how it unfolded into the present day, we can say that the progress of our society is defined by how advanced our technology is. We have learnt many lessons along the way; as George Santayana once said, “Those who cannot remember the past are condemned to repeat it.” Our ancestors once praised the Titanic as a great achievement, boasting that ‘[even] God himself could not sink this ship.’ Fast-forward to the 21st century: who would have guessed that AI would be the next world-changing invention? Whether in a restaurant or at home, everyone is connected to AI.


Transportation has undergone a great cycle of change. People began using horse carriages in the 14th century, when they realised that horses had far more energy for long-distance travel. People got used to it until the first gasoline-powered automobile was invented in 1885 by Karl Benz. Cars started few in number, but as time passed, horse-carriage drivers began to worry about losing their jobs to the new invention. Of course, their protests lost out to Benz’s brainchild. Yet amidst the outcry, something else happened: the job of the horse-carriage driver was a sacrifice that created many more jobs for society, such as mechanical engineers and car designers, from the production stage to the marketing stage.

Self-driving cars, one of the best inventions of the 21st century, are redefining our perception of automobiles. We have taken another step into the reality we once saw only in the movies, with the help of AI. Yet during the development of self-driving cars, did the developers learn from the history of the technology? Honestly, it is anyone’s guess. Furthermore, self-driving cars have raised many ethical questions. When an accident happens, the drivers are held responsible because they were in control of their vehicles. What happens, then, when a self-driving car is involved in an accident? Who is responsible? Is it the person inside the vehicle, or the developer who programmed the car? It could also be whoever decided what data to feed to the networks of the self-driving cars. These are ethical questions left unanswered, not only by the AI community but also by other organisations. On 24 October 2018, Nature published the largest survey of machine ethics related to self-driving cars. Called the Moral Machine, the survey laid out 13 scenarios in which someone’s death was inevitable, and respondents were asked to choose whom to save in each scenario. The results were striking: the responses differed widely, meaning that moral choices are not universal. If humans cannot single-mindedly agree on what is morally acceptable and what is not, what more can AI do?

Uber’s self-driving cars cast doubt after an accident. Image taken from time.com


The development of the automobile reminds us not to blame technology for its constant improvement. AI may cause the loss of jobs, but on the bright side, it creates more jobs than we could ever imagine. From Karl Benz’s brainchild to self-driving cars, it is magnificent to see how far we have come. Think about this: ten years ago, no one thought of YouTuber as a serious occupation, or that professional gamers could actually earn a living just by gaming. A big thank you to the Internet for making almost everything possible. Rather than blaming AI, we should prepare ourselves to face the challenges brought by its liberalisation. It is important to understand that advancement in technology always comes at a price, but that price is worth paying.

DO YOU THINK TODAY’S UNIVERSITY CURRICULUM PREPARES YOU TO DESIGN ETHICAL FUTURE ALGORITHMS AND DEVELOP TECHNOLOGY THAT ACHIEVES BROADER SOCIETAL GOALS?

Ever since universities started offering courses in Computer Science, more and more people have realised the importance of learning technology. In the past, being able to code the most basic ‘Hello world.’ program was a notable achievement; today, algorithms are taking the next step toward changing every layer of society. Students of Computer Science today are the shapers and designers of future technology. It is not merely about coding: amidst the claims of AI saving lives, improving health, and predicting outcomes, students need to understand how ethics relates to their decisions, for it will determine the kind of computer scientists they become.

According to the latest survey conducted by Stack Overflow in 2019, 49.1% of professional developers had a bachelor’s degree, and 74.5% had a bachelor’s degree or higher, consistent with previous years. Since computer scientists’ actions now change the world more than ever, it is high time more universities integrated ethics into the curriculum, for this is what makes the difference between a mad evil scientist and a scientist with a clear conscience. To date, many universities, including Cornell University, Stanford University, Harvard University, and MIT, are addressing the ethical side. Knowledge of ethics is particularly important because, unlike doctors for instance, programmers rarely confront harm, death, or pain directly while coding.

Image taken from nytimes.com

The moral compass of a professional decides how the program or technology produced affects its users. A code of ethics guides actions and decisions aimed at giving people comfort, convenience, and aid without sacrificing other elements. Computer Science may be new, but morals and principles have stayed with humankind since day one on Earth. What is right is right and what is wrong is wrong. There may be differences, in that some people find a thing ‘OK’ while others do not, but the basic idea is the intention of the professional. Today, a number of ethics guidelines are available, such as those of the ACM and the ACS, but a more comprehensive approach, such as case studies, should be included so that future professionals are exposed to real-life occurrences at an earlier stage.

University students are potential critical thinkers, protected from short-term market pressure, with the space and opportunity to focus on ideals.

Case studies are an effective way to introduce students firsthand to the complexity of ethical issues. ‘Students who study such first-rate reasoning in the classroom stand a better chance of being able to engage in solid ethical reasoning in the workplace,’ said Chris MacDonald, director of the Ted Rogers Leadership Centre. Various university projects have also focused on AI concerns, such as those at the University of Oxford, the University of Cambridge, and the University of Washington.

Although we are still far from creating a solid framework for ethics in Computer Science, the good news is that the effort keeps growing and improving. Today’s university curriculum can help students develop ethical skills. Education is perhaps the first step toward raising awareness in this arena, toward the goal of protecting human rights and benefiting society. The age of AI has arrived, and we, on our part, must be committed to human values and ethics.

WHAT WAS THE FINAL MESSAGE OR THEME OF THE DOCUMENTARY? WHAT INSIGHT INTO THE NATURE OF HUMANITY HAS INTERNET/AI TECHNOLOGY EXPOSED?

‘Do You Trust This Computer?’

It is indeed a clear and direct question for us. How much confidence do we place in the magic box in front of us that can do all sorts of things in a fraction of a second? The documentary hits the nail on the head: while it walks through the pros and cons of AI, it leaves the audience pondering the future. Is AI scary?

Elon Musk warning people about AI in the documentary ‘Do You Trust This Computer?’. Image taken from YouTube.

The fuel for AI is data, and every time we use technology, we feed data into the AI, empowering it day by day. If AI achieves superintelligence (beyond human intelligence), it will be the most powerful invention ever made. Elon Musk, CEO of SpaceX, believes that superintelligence will come soon. It sounds very exciting, given all the tasks it could perform. But flip to the other side of the coin and think again: we might lose control of it.

Weighing the good and the bad of AI is indeed tough. Imagine the deployment of autonomous weapons: useful for engaging terrorists, but they can turn against us. Imagine the improvement of big data: useful for improving AI efficiency, but with a high risk of privacy breaches. Imagine the spread of autonomous cars: we could have fewer accidents (and need not drive any more!), but drivers will lose their jobs (and we might even lose our driving skills). Imagine AI-based surgical systems: the computer’s accuracy is invaluable, but it does not understand life-and-death situations.

Dr. Enrique Jacome, an OBGYN at Eisenhower Medical Center, feels uncomfortable: he used to perform 150 operations annually, but with AI he handles only one case a year, quipping that he no longer remembers how to do it. Dr. Brian Herman, also at Eisenhower Medical Center, notes that as a doctor he must consider the ‘safest possible route’ while the AI does not, adding that he would rather be the AI.

In 2005, researchers at Columbia University programmed a self-aware robot to learn to walk; even after a leg was removed, it relearned how to walk. The same professor later created an AI that recognises objects, and by tracing its neurons the team found that one neuron had learned to track human faces even though they never programmed it to do so. Stanford University professor Michal Kosinski said that AI programmers have no idea how it works, as there are millions of elements exceeding human understanding, implying a real possibility of AI slipping out of our control.

The more we learn, the less we know. Look around us: the da Vinci surgical system, Google’s AI search engine, Apple’s personal assistant Siri. Everywhere we go, we face AI. Before we worry about how AIs behave, we should worry about how we behave ourselves. Before we think about AIs being unethical, we should sort ourselves out: what is ethical, and what is not? No matter how powerful AI becomes, it does not have what we have: creativity.

AIs are a mirror of our pursuit of knowledge. Perhaps… the reason we fear AI is that we fear our own humongous greed for knowledge!

Brought to you by Team Brexit Sdn Bhd (Monash University Malaysia)

Source: Artificial Intelligence on Medium
