Blog: Artificial Intelligence & Human Rights — March 2019


Below is a list I put together of interesting articles on the intersections between Artificial Intelligence tools (Computer Vision, Machine Learning, Natural Language Processing, etc.) and International Human Rights.

The articles were mostly published in March 2019. Don’t hesitate to share and/or get in touch with me on Twitter @ImaneBello.

  • Medical AI

Nick Carne, Researchers warn medical AI is vulnerable to attack, March 22, 2019

In an article for Cosmos Magazine, Nick Carne discusses the recently published study “Adversarial attacks on medical machine learning” (Science, March 22, 2019). The research team, led by Samuel G. Finlayson of Harvard Medical School, points to the possibility of adversarial attacks being used, for example, to commit fraud (producing false medical claims, generating false diagnoses, etc.) by subtly altering the input data fed to medical machine learning systems. While no major attacks have yet been identified in the healthcare sector, the study raises awareness of the risks and calls for an interdisciplinary policy-making approach to address them.
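
To make the threat concrete: the attacks the study describes typically add a tiny, carefully chosen perturbation to an input so that the model’s prediction flips. Below is a minimal sketch of the fast gradient sign method (FGSM), one standard technique for crafting such inputs; the PyTorch framing, function names, and epsilon value are illustrative assumptions, not code from the study.

    import torch

    def fgsm_perturb(model, x, y, loss_fn, epsilon=0.01):
        # Copy the input and track gradients with respect to it.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        # Step each input feature in the direction that increases the loss;
        # with a small epsilon the change is typically imperceptible to humans.
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

Applied to, say, a diagnostic image classifier, a perturbation like this can flip a prediction (e.g. from benign to malignant) while leaving the image visually unchanged, which is what makes the fraud scenarios raised in the study plausible.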

  • Rule of law and governance frameworks

Ashley Deeks, Detaining by algorithm, March 25, 2019

As part of the ICRC’s AI blog series, Ashley Deeks discusses the potential use of predictive algorithms by the (U.S.) military, e.g. to predict the likelihood and location of attacks or to assess how dangerous particular actors are for detention purposes. Building on a previous article, the author adds a number of considerations that should be kept in mind when ‘importing’ predictive algorithms from the (U.S.) criminal justice system or, more generally, when using predictive algorithms in a military setting.

As shown by the Executive Order on AI and the U.S. Defense Department’s ‘Artificial Intelligence Strategy’, both issued in February 2019, militaries are likely to explore algorithms for various future applications. According to the author, this underlines the necessity of addressing the issues already emerging from predictive algorithms today.

Lorna McGregor, The need for clear governance frameworks on predictive algorithms in military settings, March 28, 2019

In an accompanying article in the ICRC’s AI blog series, Lorna McGregor further elaborates on the potential use of predictive algorithms in military settings, reflecting on the human rights issues raised by such AI-based tools and calling for the development of clear governance frameworks for these applications.

John Villasenor and Virginia Foggo, Algorithms and sentencing: What does due process require?, March 21, 2019

John Villasenor and Virginia Foggo examine the use of algorithm-based risk assessment tools in criminal proceedings. In particular, the authors point to the implications these tools have for offenders’ constitutional right to (procedural) due process, notably their rights to a) challenge the accuracy and relevance of the information used and b) gain insight into how the algorithm arrives at its results/scores.

Given the increasingly frequent use of these risk assessment tools, several constitutional, policy, and technology-related questions will need to be addressed.

Eric Niiler, Can AI be a fair judge in court? Estonia thinks so, March 25, 2019

Eric Niiler discusses the Estonian Government’s push to include AI systems in its various ministries. One example is the ongoing effort to design a “robot judge” to decide small claims disputes (under €7,000), so as to free human judges to deal with a backlog of cases. The pilot will likely start in late 2019, and parties will be able to appeal the AI’s decisions to a human judge. Estonia, already advanced in its use of national ID cards and a vast online services system, might thereby become the first country to give decision-making authority to an AI system.

  • Facial recognition

Os Keyes, Nikki Stevens and Jacqueline Wernimont, The Government Is Using the Most Vulnerable People to Test Facial Recognition Software, March 17, 2019

The article discusses the challenges of and methods used for testing facial recognition software. According to the authors’ research, the U.S. government, researchers, and corporations have used images of vulnerable groups, including abused children, immigrants, and deceased people, to test their systems. The article also discusses the role of the National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, which maintains the standard test for evaluating facial recognition technology.

  • Surveillance

George Joseph, Inside the Surveillance Program IBM Built for Rodrigo Duterte, March 20, 2019

George Joseph analyses how IBM sold surveillance technology to Duterte’s administration in Davao City in 2012, despite alleged police complicity in death-squad killings of suspected criminals. It is unclear whether IBM conducted human rights due diligence before selling the technology. According to IBM documents, its video surveillance system was used to fight crime. As president, Duterte now seeks to expand surveillance, potentially with the support of the Chinese firm Huawei.

  • Content Moderation & Facebook

Beheadings, Suicide Attempts, Porn: Why Facebook Moderators In India Are Traumatised, March 1, 2019

This Reuters article on Facebook’s content moderation discusses the company’s outsourcing policy (at least five outsourcing vendors in at least eight countries) and its impact on employees. Interviews conducted for the article show how distressing the work can be and provide insight into how online content moderation is organised.

