
Blog: Robots, Self-Steering Cars, and AI, Oh My!


On July 1st, 2015, a man was repairing a stationary robot inside a cage at a Volkswagen factory. Suddenly, the mechanical arm grabbed the man and pressed his chest against a metal plate. He later died of his injuries. The incident was covered by a TIME article at the time.

This brings up certain questions, at least according to the Daily Beast: can robots be guilty of killing humans, however unintentional the death was? And if so, how do we hold them accountable? The Daily Beast article discusses the Volkswagen incident in relation to a movie that had come out recently. According to the TIME article mentioned above, at the time of the incident "prosecutors are still deciding whether to bring charges and whom they would pursue." Are robots really out there killing people of their own free will? Are AIs a threat to humans, even economically speaking? A recent documentary, The Truth about Killer Robots, raises questions like these and also mentions the 2015 Volkswagen incident.

The “Truth” about Killer Robots

The documentary opens by introducing its host, Kodomoroid, who is a robot, and then directs the audience's attention to "a small town in Germany" where the Volkswagen factory is located.

I. Manufacturing is the first part of the documentary. It is about how robots affect the manufacturing business: it goes inside the Volkswagen factory and shows how fewer humans work there.

There is also an old interview with Isaac Asimov, writer of I, Robot. The first law was that robots can't hurt humans, and the second was that robots must follow the orders of humans as long as doing so doesn't violate the first law; the third law wasn't mentioned. I googled it, and it's that a robot may protect itself as long as that doesn't go against the first and second laws. "Scientists say that when robots are built that they may be built according to these laws and also that almost all science fiction writers have adopted them as well in their stories."

The rest of this part is about how fewer humans are working and how they're pushed to the back of the assembly line. This is explored in a postal facility and in an assembly line where keyboards, phones, and other technologies are produced; both are located in an Asian country that isn't explicitly named.

The second part is called II. Service Sector. It starts with a man who was killed when his self-driving vehicle collided with a semi-truck. At that point, Tesla had not yet given a response about what caused the accident.

The rest of this part is about robots interacting more and more with humans, such as a hotel that is completely run by robots.

The final part, III. Final Displacement, starts with a sniper who killed five police officers and injured seven others near Dallas, Texas. The man, an Afghanistan veteran, hid in a building; the SWAT team tried to enter, but he was shooting down the hall.

They then decided to use the bomb squad's robot, normally used to keep bombs from detonating, to carry an explosive and kill the sniper. The documentary notes that this contradicts Isaac Asimov's Second Law of robotics, which is that a robot must follow the orders of a human unless they conflict with the First Law, which says a robot can't harm a human.

I'm pretty sure the term "robot" is being used very loosely throughout the documentary. Can the robot think for itself? I think this segment was used to instill fear in the audience by showing that a person was killed by a "robot" in the U.S.

Then the documentary shifts to armed robots that can scan human faces against a database of terrorists and kill the matched person. It isn't mentioned whether this is hypothetical or already being used for war purposes.

The documentary then shows a man's relationship with a robot. He says that he "spends most of the day improving [his] girlfriend." What does that mean? Does he spend the day on her appearance? Does he make her say what he wants her to say, or can she think for herself?

The documentary only shows him swapping different faces and talking to a screen that the robot seems to speak from, so it gives no clear answers. The documentary is very vague at times. For example, what does it define as a robot?

The documentary mostly covers one side of the debate over artificial intelligence, giving several examples in each part of why humans should fear AIs. Now we should look at the other side of the debate. I decided to read an online book about AIs and public policy.

Artificial Intelligence and Public Policy

The online book starts by highlighting issues that AIs raise for policymakers, and a few pages in, under a heading labelled Applications, it starts giving examples of what artificial intelligence isn't. "Systems that performed well at only very specific tasks were dismissed and not considered AI, but as late as 2007 the Economist lamented that many investors still associated the term artificial intelligence with failure and underperformance." The Volkswagen incident is used as proof that "robots" are out to get us, but the stationary machine was just doing what it was supposed to do.

When humans call you an AI

This makes sense, since a TIME article from around the time of the incident claimed as much: "A spokesperson for the car company told the Associated Press that the robot can be programmed for specific tasks and that the company believes the malfunction was due to human error."

Returning to the online book, the authors also bring up deep learning, which was mentioned earlier in a list of terms as "a set of specific methods and algorithms. Many neural networks are deep learning systems; there are multiple steps taken between input and output during which 'neurons' interact." A machine that interacts with itself to make sense of inputs and outputs? The authors mention that the search engines we use are machines that use deep learning. They also give other examples of how AIs are helping people today, such as assisting those with disabilities and making forecasts.
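To make the book's phrase "multiple steps taken between input and output" more concrete, here is a minimal sketch of a tiny deep network's forward pass in Python with NumPy. The layer sizes and random weights are purely illustrative (a real system would learn its weights from data), but it shows how each layer of "neurons" transforms the previous layer's result before an output emerges.

```python
import numpy as np

def relu(x):
    """A simple neuron activation: keep positives, zero out negatives."""
    return np.maximum(0, x)

# Illustrative weights for a "deep" network:
# input (3 features) -> 4 hidden neurons -> 4 hidden neurons -> 1 output.
rng = np.random.default_rng(0)
w1 = rng.normal(size=(3, 4))
w2 = rng.normal(size=(4, 4))
w3 = rng.normal(size=(4, 1))

def forward(x):
    """Multiple steps between input and output: each layer's 'neurons'
    act on what the previous layer produced."""
    h1 = relu(x @ w1)   # step 1: input -> first hidden layer
    h2 = relu(h1 @ w2)  # step 2: hidden -> hidden
    return h2 @ w3      # step 3: hidden -> final output

x = np.array([0.5, -1.0, 2.0])  # an example input
print(forward(x))               # the network's single output value
```

The point is only the shape of the computation: the input is never mapped straight to the output, it passes through intermediate layers, which is what makes the system "deep."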

The next section is called "REGULATORY THREAT TO AI INNOVATION", in caps to mark a new section, and this is where the multiple authors mention the counterarguments, addressing the fears humans have of AIs taking over the world and bringing an end to humanity, which is exactly what the documentary feeds its audience: fear of the unknown.

When talking about AIs taking over the job force, they respond, in big letters on the side of the page: "We cannot predict with high certainty what labor markets will look like in a decade or two. At present, many jobholders are unprepared for the possible automation of their livelihoods." I thought this was just a simple way of getting around the question of AIs taking over jobs, but at the same time, I thought about the time when humans never would have thought that computers would be the future.

The final two sections are "THE CONSTRUCTIVE PATH FORWARD: COLLABORATION, NOT CONTROL" and "CONCLUSION". The authors emphasize that only by getting rid of laws meant to limit AI will we be able to move toward the future without the stops we currently have because of people's fears.

I think my point of view has been made clear throughout this paper: I don't think AIs are a threat. I believe they'll grow alongside us. I believe that if we are not scared, we'll advance further, just as humans did once before in a technological boom.

