Blog: What can we learn from the history of technology and AI that can inform how we can be…
The idea of artificial intelligence began when philosophers imagined human thought as a mechanical process of manipulation. That imagination gave rise to a new ambition in the 1940s, when the invention of the programmable digital computer inspired the first serious proposals to build an electronic brain.
In 1956, John McCarthy, a computer and cognitive scientist, organized the Dartmouth Conference, which gave birth to the discipline of artificial intelligence and made him, along with the other attendees, one of its pioneers. The conference proposal covered natural language processing, neural networks, the theory of computation, abstraction, and creativity within the field of artificial intelligence.
From that proposal onward, our society began to envision a future where computers do all the work for us. A future where robots exist to help us with daily needs. A future where everything is automated. However, is such a future worth thinking about? How can we be sure a dystopian future will not come to pass?
The ethical study of how artificial intelligence impacts society is an important aspect of developing new AI technologies. The case studies below show how artificial intelligence affects the world and why some of these systems failed.
Self-driving cars, such as Tesla's Autopilot feature, provide an important case study in how developing artificial intelligence affects decision making on an ethical level. For instance, when facing a potential accident, "should a self driving car hit a pregnant woman or swerve into a wall and kill its four passengers?" (Choi, 2018, para. 2). Is sparing one life more ethical than sacrificing four lives to save that one?
A human mind reacts, while an artificial intelligence executes a decision, and those decisions are ultimately determined by whoever created it. We can create an AI that removes human error and the flaws behind human mistakes, but it is difficult to choose between developing a system that prioritizes minimizing harm and one that engages in deeper ethical reasoning, and it is even more difficult to implement both in one system. Either way, the creators of AI bear a heavy responsibility for how it decides and acts.
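To make the point concrete, here is a minimal, purely illustrative sketch of how a harm-minimizing decision rule might look in code. Every name and number here is invented for this example, not taken from any real self-driving system; what matters is that the scoring rule itself is a choice made by the developers, which is exactly where the ethical responsibility lies.

```python
# Hypothetical sketch: the "ethics" of an automated system is just a
# ranking chosen by its developers. All names and weights below are
# invented for illustration, not from any real autopilot codebase.

def choose_action(options):
    """Pick the option with the lowest estimated harm score.

    `options` maps an action name to an estimated number of people
    harmed; both the estimates and the rule of "pick the minimum"
    are human decisions baked in by the system's creators.
    """
    return min(options, key=options.get)

# The dilemma from the article, encoded the way a pure
# harm-minimizer might encode it:
dilemma = {
    "continue straight": 1,  # hits one pedestrian
    "swerve into wall": 4,   # kills four passengers
}

print(choose_action(dilemma))  # prints "continue straight"
```

Changing a single weight in the dictionary changes the car's "moral" behavior, which is the whole point: the machine is not reasoning ethically, it is executing a ranking its creators chose.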
AlphaGo is a computer program developed by DeepMind and is, at least theoretically, the best Go player in the world, having beaten the top human player, Lee Sedol, 4–1. From the outcome of Lee's match, we can see that AI has the potential to outsmart the best of human minds. At one heated moment in the match, Lee looked over at his opponent, who was in fact one of the developers physically placing AlphaGo's stones. Lee could not 'read' the moves from his facial expressions and body language, because that man was not the mind Lee was actually facing. This evoked a sense of fear and despair among viewers, who realized that an AI is in this sense stronger than a human being: it does not 'feel' the pressure and atmosphere of the situation. There is a danger that if AI development continues unchecked, we might cross into a dangerous and unprecedented path where AI dominates human society. Therefore, there must be limits, or cross-industry practices, through which AI developers come together to ensure AI is created and used ethically and responsibly.
Jibo, created by Cynthia Breazeal, was the world's first social robot. Its failure reminds us that the vision for social AI was ahead of its time: the technology had yet to meet the social and hardware capabilities that the market desired. Jibo failed because of its limited capabilities and an exorbitant price tag driven by the expensive hardware it needed.
Virtual assistants such as Siri, Amazon's Alexa, and Cortana also fall into the category of social robots. They are designed to handle basic day-to-day tasks, but most importantly they connect with people through voice communication. As an article from Voicea puts it, voice is the technology of choice for engaging with these tools because we are now at the stage of technological development where machines can better understand how humans speak, rather than vice versa.
Rather shockingly, these voice technologies have the potential to imitate other humans through recognition and machine learning. AI researchers at the University of Washington have unveiled a tool that takes audio files, converts them into mouth movements, and grafts those movements onto existing video to create lifelike, almost impossible-to-detect footage.
Such technology could be misused, for example to generate fabricated footage of a real person saying things they never said. Therefore, we must be vigilant and take responsibility for using such technology only for ethical purposes.
In conclusion, we humans, as the creators of AI, have a responsibility to make decisions that are ethical and that minimize harm as much as possible. The development of artificial intelligence should therefore be taken seriously and should take all human values and ethics into account.
Choi, Charles Q. (2018, October 25). The Moral Dilemmas of Self-Driving Cars. Retrieved from https://www.insidescience.org/news/moral-dilemmas-self-driving-cars
Yao, Mariya. (2017, February 13). Why Building Social Robots Is Much Harder Than You Think. Retrieved from https://www.topbots.com/building-social-robots-jibo-anki-cozmo-much-harder-think/
Voicea. What Exactly Is a Virtual Assistant? Retrieved from https://www.voicea.com/exactly-virtual-assistant/
Kaplan, Andreas, & Haenlein, Michael. (2018). Siri, Siri, in My Hand: Who's the Fairest in the Land? On the Interpretations, Illustrations and Implications of Artificial Intelligence. Business Horizons, 62(1).