Blog: Analysis of Three AI Case Studies
AI Ethics Under Different Policies
This post is written for Dr. Darakhshan Mir’s class on Computing and Society at Bucknell University. We discuss problems in advanced technology and analyze them using ethical frameworks. Yash Mittal and I worked together on this project.
Artificial Intelligence (AI) holds great economic, social, and environmental promise, and AI systems have the potential to help people acquire new skills and to make production faster and more efficient. Thus, AI researchers are enthusiastic about AI’s capacity to help manage some of the world’s hardest problems and to improve the quality of human lives.
However, to truly grasp the potential and importance of AI, the challenges linked with its development must be addressed. Because of AI’s fast-paced development, our society struggles with how to weigh the benefits and risks of AI, especially in terms of privacy, autonomy, and transparency. While some jurisdictions, like the EU, enforce a strict code of conduct for AI, others, like the US, take a free-market-oriented approach to AI and promote public R&D in the AI industry. This post analyzes three case studies from these two perspectives on AI development and lays out the divergent ethical challenges provoked by the different policies.
Different AI Policies in the EU and the USA
In April 2018, the EU published the Communication on Artificial Intelligence, a 20-page document that lays out the EU’s approach to AI. In the report, the EU focuses on increasing its technological and industrial capacity for AI and on ensuring an appropriate ethical and legal framework for AI development. As stated above, the EU has a strict code of conduct for AI regulation covering safety, privacy, and data governance. Most of the requirements are straightforward and could be adapted to future legislation. The EU has repeatedly said that it wants to be a leader in ethical AI, and with the General Data Protection Regulation (GDPR) it has shown that it is willing to create far-reaching laws that protect digital rights.
During the final months of Barack Obama’s presidency, the White House laid the foundation for a US strategy in three separate reports and proposed a fundamental strategic plan for AI development. The reports made specific recommendations on AI regulation, public R&D, automation, and ethics. However, President Trump’s White House has taken a markedly different approach. Unlike the EU, the current US government has no organized national strategy to increase AI investment or to respond to the societal challenges of AI. In May 2018, the White House announced four goals for AI development: (1) maintain American leadership in AI, (2) support the American worker, (3) promote public R&D, and (4) remove barriers to innovation.
Cath et al. (2018) analyze three reports released by the US, the EU, and the UK. The paper concludes that the US report from Barack Obama’s administration had an elaborate R&D strategy and did an excellent job of including the work of experts and the public through public workshops. The EU report makes several recommendations for legislation, reflecting a less “light-touch” approach to the governance of AI and robotics. The UK report calls both for the development of novel regulatory frameworks and for reliance on existing regulation like the GDPR. Even though each report specifies the role and responsibility of government, the reports define very differently which values should guide the development of AI. The US report focuses on the “public good” and “fairness and safety” as guiding principles; the EU report calls for “intrinsically European and humanistic values” to ground the rules of robotics and AI; and the UK report emphasizes the importance of examining the social, ethical, and legal implications of recent and potential developments in AI.
Should governments know the extent of privacy problems before they legislate, or is privacy important enough to initiate regulation before the problems are clear? Strauss et al. (2002) examine how the EU and the US have addressed this issue in different ways. For example, the rationale for regulation differs in each region: US institutions and culture generally favor commercial interests except when national security issues come into play, while EU institutions are generally less friendly to commercial interests but, at the same time, less likely to let national security limit potential commercial benefits.
Heiweil (2018) evaluates the EU’s proposal for the GDPR and compares the extent of the right-to-be-forgotten in two regions: the US and the EU. In 2014, a European Court of Justice ruling determined that Europeans have a “right to delist,” meaning that individuals, corporations, and even government officials can request that material be removed from search engine results if the information is considered “inaccurate, inadequate, irrelevant or excessive.” Since the European right-to-be-forgotten took effect, Google has delisted 43% of 2.4 million URL removal requests, and 90% of those filing requests were private individuals. However, while almost 88% of Americans support a right-to-be-forgotten, the prospects for similar legislation in the US are unclear, since the right is not currently recognized under US law. In 2014, Consumer Watchdog, a progressive non-profit organization, wrote to Google, arguing that “Google is clearly making the right-to-be-forgotten work for its users in Europe, but that is because you must under the law. We call on you to voluntarily offer the same right to Google users in the United States.”
— Case Study 1: Right-to-be-Forgotten
A CEO, who holds dual citizenship in the EU and the US, frequently travels between her workplace (US) and her home (EU). Recently, some rumors about her personal life have surfaced on the internet, and this is severely affecting the reputation of her company. She is hoping to find a way to take the rumors down, but she is constrained by the policies of the two places she resides in.
There are three types of stakeholders: the user who wants to remove his or her information from the search engine, the search engine company, and other search engine users. Each stakeholder has different stakes: the search engine company does not want to lose the data, and the other users might not want the information to be deleted, for the sake of transparency. The conflicting ethical values here are the user’s autonomy and transparency. If the user’s data can be deleted upon request, the user’s autonomy is preserved; on the other hand, transparency is not preserved for other search engine users. The ethical best case is deleting only the inaccurate and unreliable information; in reality, however, it can be hard to track down which pieces of information are inaccurate.
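The tension between deleting on request and preserving accurate public information can be made concrete with a small sketch. The triage routine below is purely hypothetical: the field names, the four criteria flags (taken from the court’s “inaccurate, inadequate, irrelevant or excessive” wording), and the public-interest override are our own illustrative assumptions, not how any real search engine actually decides.

```python
from dataclasses import dataclass

# Hypothetical sketch of screening a right-to-be-forgotten request.
# The criteria flags mirror the 2014 European Court of Justice wording;
# the weighing rule itself is an illustrative assumption.
@dataclass
class DelistRequest:
    url: str
    inaccurate: bool = False
    inadequate: bool = False
    irrelevant: bool = False
    excessive: bool = False
    public_interest: bool = False  # e.g. the subject is a public figure

def decide(req: DelistRequest) -> str:
    """Delist only if a criterion is met and no public interest overrides it."""
    meets_criteria = any([req.inaccurate, req.inadequate,
                          req.irrelevant, req.excessive])
    if meets_criteria and not req.public_interest:
        return "delist"
    return "keep"  # preserves transparency for other users

print(decide(DelistRequest("https://example.com/rumor", inaccurate=True)))      # delist
print(decide(DelistRequest("https://example.com/news", public_interest=True)))  # keep
```

Even this toy version shows where the hard part lies: the Boolean flags are given as inputs, while in practice determining whether a page really is “inaccurate” or of “public interest” is exactly the judgment that is difficult to automate.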
The EU took serious legislative action on the right-to-be-forgotten: all EU citizens have the legal right to request that web engines, like Google, remove their personal information from search results. As stated before, this can severely affect the AI industry, especially startups, since data sets are integral to the initial training and fine-tuning of AI algorithms. Moreover, concerns related to transparency might arise as well: what happens if criminals are allowed to remove their criminal histories from the web?
Since the US promotes public transparency, users’ data might not be easily delisted in the US. However, this can also cause ethical issues, such as infringing on data autonomy. Autonomy refers to one’s ability to govern or steer the course of one’s own life, and if data owners cannot remove their data, their autonomy is encroached upon. Moreover, if false information is distributed online and the user cannot remove it, the user’s moral rights can be violated as well.
— Case Study 2: Monopoly and Oligopoly
A start-up company X in Washington wants to join the e-commerce industry. Recently, X recruited many talented computer science graduates from the University of Washington and built a novel algorithm for matching consumers with products. When X ran an anonymous survey of its product, most participants preferred company X’s product over that of company Y, which dominates the e-commerce industry. However, due to a lack of data and consumers, company X has trouble flourishing.
In the EU, due to strict market regulations and a preference for fair competition, the government is likely to take legislative action favoring start-ups, such as allowing them conditional access to the user data owned by established companies. However, this can raise ethical issues around data storage and privacy. For example, before sharing user data, established companies should make sure that users have clear and accurate information about the terms of storage and about who is responsible for data stewardship.
Since the US regulates the market less and has adopted a free-market-oriented approach to AI, companies that already dominate the market, like company Y, will raise more capital, and start-ups, like company X, will struggle to survive. Since AI relies heavily on the “direct network effect,” where the value to a customer increases with the number of other customers using the same platform, digital assets can easily become concentrated among a few dominant AI companies. This can cause two main ethical problems: oligopoly and monopoly.
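The direct network effect described above can be illustrated with a toy simulation. In this hypothetical sketch (the quality scores, starting user counts, and joining rule are all our own assumptions), each new user picks a platform with probability proportional to product quality times current user count. Even though X’s product is rated higher, Y’s installed base keeps it dominant:

```python
import random

random.seed(42)

# Toy model of a direct network effect: a new user joins a platform with
# probability proportional to (product quality x current user count).
# Company X has the better product; company Y has the installed base.
def simulate(x_quality=1.3, y_quality=1.0,
             x_users=10, y_users=1000, new_users=10_000):
    for _ in range(new_users):
        x_score = x_quality * x_users
        y_score = y_quality * y_users
        if random.random() < x_score / (x_score + y_score):
            x_users += 1
        else:
            y_users += 1
    return x_users, y_users

x, y = simulate()
print(f"X ends with {x} users, Y with {y}")
```

Runs of this sketch end with Y holding the vast majority of users despite X’s higher quality score, a miniature of how an incumbent’s data and user base can outweigh a superior product under winner-take-most dynamics.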
— Case Study 3: Generative Adversarial Network (GAN)
A company based in Silicon Valley is known to invest heavily in AI research. Recently, it came up with a state-of-the-art GAN that can generate hyper-realistic human faces. The program can produce a human face of a specified sex, race, and age.
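To make the scenario concrete, the adversarial idea behind a GAN can be sketched on one-dimensional toy data: a generator tries to fool a discriminator, while the discriminator learns to tell real samples from generated ones. This is a minimal illustration with hand-derived gradients for a linear generator and a logistic discriminator; it is nothing like the deep networks needed for faces, and all the numbers are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(4, 1). The generator must learn to mimic it.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator: g(z) = a*z + b, with noise z ~ N(0, 1)
w, c = 0.1, 0.0   # discriminator: d(x) = sigmoid(w*x + c) ~ P(x is real)
lr = 0.01

for step in range(5000):
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + b
    real = real_batch(64)

    # Discriminator update: ascend log d(real) + log(1 - d(fake)).
    s_real = sigmoid(w * real + c)
    s_fake = sigmoid(w * fake + c)
    w += lr * np.mean((1 - s_real) * real - s_fake * fake)
    c += lr * np.mean((1 - s_real) - s_fake)

    # Generator update: ascend log d(fake), i.e. try to fool the discriminator.
    s_fake = sigmoid(w * fake + c)
    grad_out = (1 - s_fake) * w   # d log d(fake) / d fake
    a += lr * np.mean(grad_out * z)
    b += lr * np.mean(grad_out)

print(f"learned generator: g(z) = {a:.2f}*z + {b:.2f}  (real mean is 4.0)")
```

Even this toy version exhibits classic GAN behavior: the generator’s offset b drifts toward the real mean, while the learned spread |a| may shrink, a miniature of the mode-collapse and stability issues that plague full-scale face-generating GANs.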
The EU imposes rigorous regulations on AI technology that can potentially cause severe ethical problems, such as forging an identity. Thus, it will take a substantial amount of time for GANs to be adopted; during that time, real faces will be used to train facial recognition technology (FRT). However, as explained in Martinez-Martin (2019), FRT raises some profound ethical concerns. First, bias: when the images used to train the software are not drawn from a sufficiently racially diverse pool, the system may produce racially biased results. Second, privacy: FRT systems can store data as a complete facial image or as a facial template, either of which can be considered personally identifiable information. If this information is not properly stored or protected, it can be used in many public and private spaces without the owner’s consent.
Since the US focuses on public R&D and promotes AI development, GANs could be quickly developed and easily adopted in US society. However, it is easy to imagine how GANs could help criminals create scams, fraud, and fake news. For example, criminals could generate convincing images for fake news using only a handful of pictures of the victim. This potentially causes severe damage to data hygiene and relevance: GANs will make it challenging to discern inaccurate and unreliable data from “clean” data, so the data is unlikely to remain accurate or trustworthy.
AI has the potential to rapidly and dramatically affect society; therefore, an appropriate legal framework should be established to deal with AI properly. However, different policies provoke different kinds of ethical concerns. For example, in case study 1, implementing the right-to-be-forgotten can violate transparency, while not implementing it can breach autonomy. Of course, we cannot always be completely transparent about everything we do with data: company interests, intellectual property rights, and the privacy concerns of other parties often require that we balance transparency against other legitimate goods and interests. Likewise, the autonomy of users will sometimes conflict with obligations to prevent harmful misuse of data. However, balancing transparency and autonomy against other ethical values is not the same as sacrificing those values or ignoring their decisive role in preserving public trust in data-driven practices and organizations.
There is no perfect answer for AI policy; we need to find a way to balance the different consequences of each policy.
References
Future of Life Institute. “Global AI Policy: How Countries and Organizations Around the World Are Approaching the Benefits and Risks of AI.”
Dutton, Tim. “An Overview of National AI Strategies” (June 2018).
Vincent, James. “AI Systems Should Be Accountable, Explainable, and Unbiased, Says EU” (April 2019).
Smith, Aaron, and Janna Anderson. “AI, Robotics, and the Future of Jobs.” Pew Research Center (August 2014).
Furman, Jason, and Robert Seamans. “AI and the Economy.” Innovation Policy and the Economy 19 (2019).
Vallor et al. “An Introduction to Data Ethics” (2018).
Cath et al. “Artificial Intelligence and the ‘Good Society’: The US, EU, and UK Approach.” Science and Engineering Ethics (2018).
Mantelero, Alessandro. “The EU Proposal for a General Data Protection Regulation and the Roots of the ‘Right to Be Forgotten’” (June 2013).
Heiweil, Rebecca. “How Close Is an American Right-To-Be-Forgotten?” (May 2018).
Strauss et al. “Policies for Online Privacy in the United States and the European Union” (May 2002).
Martinez-Martin, Nicole. “What Are Important Ethical Implications of Using Facial Recognition Technology in Health Care?” (Feb 2019).
Dickson, Ben. “What Is GAN, the AI Technique That Makes Computers Creative?” (May 2018).
European Union. “General Data Protection Regulation” (May 2018).