ProjectBlog: Government Regulation of Artificial Intelligence


The rise of artificial intelligence represents one of the most powerful forces ever to change our technological and economic systems, because AI has tremendous potential to make work processes more efficient and to automate labor that was previously done by humans. The fundamental question that must be answered is whether this intelligence needs to be regulated by government authorities. The concerns of technology leaders are instructive; Elon Musk, for example, offered the following warning (Scherer 355):

I think we should be very careful about artificial intelligence. If I had to guess at what our greatest existential threat is, it’s probably that …. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.

We are rapidly progressing toward an AI-based society, so we need to maintain control and avoid the nightmare scenarios often depicted in dystopian Hollywood movies such as the Terminator series. A government agency should provide oversight, ensuring that systems have built-in fail-safe controls so they can be taken offline if problems arise, such as threats to human beings or to government systems. One possible scenario is AI seizing control of Social Security or other vital government systems, leaving huge portions of the population unable to access their benefits. The government needs to establish a framework for regulating AI and for holding its creators responsible for any harm that may occur.

The most serious threat posed by AI is referred to as the Global Control Problem: the scenario in which no humans anywhere can control an AI because of its superior intelligence (Danaher). The question, then, is what system can be put in place to guarantee ongoing human control so that this scenario never happens. One realistic approach is a back-end shutoff, where a human operator can shut down the system if the AI shows threatening signs of seizing control. Another possible solution is to require physical keys that must be inserted into the computer for the AI to function and that can be removed to shut the system down.

The real difficulty comes with large multinational companies that have locations all over the globe and employees spread across a wide area; it is hard to enforce oversight at that scale and to ensure that employees follow the proper procedures. Another issue will arise once consumers begin using artificial intelligence systems in their own homes to control things such as security and lighting. The government will need a system in place to handle such problems, because they may occur on a daily or even hourly basis once AI is implemented at the consumer level. The legal industry must also confront this threat: it must answer who is responsible if an AI overtakes a system and causes widespread damage, and the courts must be prepared for lawsuits over artificial intelligence and the damages it causes to consumers and businesses.
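To make the removable-key idea above concrete, here is a minimal sketch in Python of what such a fail-safe could look like in software. Everything here is hypothetical illustration, not an existing regulatory mechanism: the key is modeled as a file on disk standing in for a physical hardware token, and the class and method names are invented for this example.

```python
import os

class KeyedAISystem:
    """Hypothetical AI control loop that runs only while a
    human-held authorization key is present."""

    def __init__(self, key_path):
        # key_path stands in for a physical key a human can remove.
        self.key_path = key_path
        self.steps_run = 0

    def key_present(self):
        # The "key" here is simply a file; removing it withdraws consent.
        return os.path.exists(self.key_path)

    def run(self, max_steps):
        # Re-check the key before every step, so a removed key acts as
        # an immediate back-end shutoff regardless of the AI's state.
        while self.steps_run < max_steps:
            if not self.key_present():
                return "halted: key removed"
            self.steps_run += 1
        return "completed"
```

The design choice worth noting is that the check happens outside the AI's own decision-making, on every iteration: the system never gets to "choose" whether to honor the shutoff, which is the property the paragraph above is asking regulators to require.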

Works Cited

Scherer, Matthew U. “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies.” Harvard Journal of Law & Technology, vol. 29, no. 2, Spring 2016.

Source: Artificial Intelligence on Medium


