
Blog: How can you, as future CS graduates, be prepared to develop AI ethically when it will profoundly…


I strongly suggest that a set of guidelines be drawn up on how companies and governments should develop ethical applications of artificial intelligence. For example, the European Union recently created a framework that addresses the problems AI will pose to our society (Vincent, 2019).

The guidelines state seven requirements that future AI systems should meet. However, in this article I am only going to highlight four of them. The first is human agency and oversight. Although systems in the future may be smart enough to help humans solve many problems, humans should still be able to oversee every decision the system makes. For example, if an AI system can diagnose patients, it shouldn't make the final decision on its own; it should present the diagnosis to a human doctor so that the doctor can make the call. The system's diagnosis should act as a guide, so that if the system and the doctor disagree, the doctor has another chance to examine the patient (Vincent, 2019). There are plenty more examples, for instance in the Iron Man movies: Tony Stark built an AI system named Jarvis that helped him complete almost all of his tasks.

Just A Rather Very Intelligent System (J.A.R.V.I.S.) was originally Tony Stark’s natural-language user interface computer system, named after Edwin Jarvis, the butler who worked for Howard Stark. Over time, he was upgraded into an artificially intelligent system, tasked with running business for Stark Industries as well as security for Tony Stark’s Mansion and Stark Tower.

The second requirement is technical robustness and safety. The system should be secure, accurate, and reliable. Consider self-driving cars, like the self-driving trucks promised by Tesla's Elon Musk. They are said to lower the risk of accidents, but what if the system were breached by malicious actors? We certainly don't want cars crashing into each other or harming their passengers. Therefore, the system shouldn't be easily compromised by external attacks; strong security should be built in before the system goes to market (Vincent, 2019).

Elon Reeve Musk FRS is a technology entrepreneur, investor, and engineer. He is the founder, CEO, and lead designer of SpaceX; co-founder, CEO, and product architect of Tesla, Inc.; co-founder and CEO of Neuralink; founder of The Boring Company; co-founder and co-chairman of OpenAI; and co-founder of PayPal.

Another point I'm going to talk about is transparency. The guidelines state that the data and algorithms used to create an AI system should be accessible, and that the decisions made by the software should be "understood and traced by human beings." In other words, operators should be able to explain the decisions their AI systems make (Vincent, 2019). We humans, or at least the makers, should be able to know and understand every step the system takes, to prevent it from doing something beyond human control. We build AI to help us and improve our lives, not to build a monster whose actions we can never predict.

Arnold Schwarzenegger as the Terminator, a cyborg assassin.

Last but not least, environmental and societal well-being. AI systems should be created to bring positive social change and should also be ecologically responsible (Vincent, 2019). For example, we should never create an AI war machine that learns to kill thousands of people like the Terminator, and we should guard against people building robots that learn to rob banks or commit crimes on behalf of the bad guys. In the movie Alien: Covenant, two different versions of an android were created. The first version, David, was said to be too intelligent and a little too sensitive for his own good. He even thought of himself as a god and created life of his own, which eventually made him the villain of the movie. The other version, Walter, is a loyal, emotionless descendant of David, devoid of the human characteristics and emotional content that were programmed into David (Jason, 2017). It has therefore been suggested that engineers need to pay close attention to how they design the feelings-related algorithms of AI beings. We have to let AI evolve, but we have to be really sensible about how we control it. That's the real problem: if it gets out of the box, into countries or societies that should not have it, then we will have problems.

Michael Fassbender portrayed dual android roles in Alien: Covenant.

Sources:

Vincent, J. (2019). AI systems should be accountable, explainable, and unbiased, says EU. [online] The Verge. Available at: https://www.theverge.com/2019/4/8/18300149/eu-artificial-intelligence-ai-ethical-guidelines-recommendations [Accessed 8 May 2019].

Jason, J. (2017). Michael Fassbender On His Dual Android Roles In Alien: Covenant. [online] Movies. Available at: https://comicbook.com/movies/2017/04/21/michael-fassbender-android-roles-alien-covenant/ [Accessed 14 May 2019].

