If you are a politician and you want to learn about tech and society: watch the Good Wife!

The CBS legal drama The Good Wife, built around the fictional lawyer Alicia Florrick, often presents tech cases with interesting legal and ethical questions. It introduces the general public to the legal and ethical dilemmas that come with decision making based on artificial intelligence (AI).

The other day I watched the episode “Two Girls One Code”. In this episode the facts of the matter were as follows.

Chumhum is an imaginary search engine, let’s say the equivalent of Google. Like Google, Chumhum ranks the search results it presents. All of a sudden, Chumhum dropped Motions, a voice recognition software company, from first place to a much lower ranking, in favor of another company. Motions argued that it suffered damages: if a search engine ignores you, you effectively do not exist.

Chumhum denied that it had deliberately changed the algorithm to the detriment of Motions. Mr Gross (the owner of Chumhum), however, could not exactly explain to the judge (an older but very tech-savvy judge) how the search algorithm works.

Motions demanded that the search algorithm be made transparent.

AI principles and guidelines

AI is a collective term for technologies that have a (rudimentary) form of cognition and whose makers have tried to imitate part of the human brain. Technology that makes use of machine learning is a form of AI that is able to develop itself through the analysis of data and thus to make decisions that are not, or not fully, programmed by humans.
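To make this concrete, the sketch below (in Python, using the scikit-learn library; the feature values and labels are invented for illustration) shows a model deriving its own decision rule from examples instead of following rules written out by a programmer.

```python
# Minimal sketch: the decision rule is learned from example data,
# not written out by a human. All data here is made up.
from sklearn.linear_model import LogisticRegression

# Toy training data: [pages_indexed, load_time_ms] for fictional websites
X = [[120, 300], [80, 900], [200, 250], [30, 1200], [150, 400], [10, 1500]]
y = [1, 0, 1, 0, 1, 0]  # 1 = relevant, 0 = not relevant (hypothetical labels)

model = LogisticRegression().fit(X, y)

# The weights below were derived from the data; no human programmed this rule
print(model.coef_, model.intercept_)
print(model.predict([[100, 500]]))  # decision on a new, unseen case
```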

AI and machine learning are powerful concepts. They are able to perform complex tasks, process data very quickly and efficiently, and help organizations in their decision making. They are becoming more and more embedded in the public domain, where they have a tremendous impact on everyday life.

In recent years the AI debate has shifted to building trustworthy AI. Formulating principles and considerations for trustworthy AI is part of this debate.

With data being ubiquitous, humans becoming smarter at programming and computational power increasing, people are ever more likely to be subjected to automated decision making. These developments give rise to all kinds of questions. How can we safeguard human freedom, fair competition, autonomy and social values? But also: how do we guarantee freedom of expression?

How do we know why an algorithm does what it does?

In recent years we have seen a plethora of principles and guidelines on trustworthy AI, intended to give more direction to, and insight into, the development of AI.

To mention a few:

· The IEEE (the Institute of Electrical and Electronics Engineers), the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity, has launched Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems.

· The UN Human Rights Council endorsed the “Guiding Principles on Business and Human Rights: Implementing the United Nations ‘Protect, Respect and Remedy’ Framework” in its resolution 17/4 of 16 June 2011. These guidelines provide a global standard for preventing and addressing the risk of adverse impacts on human rights linked to business activity.

· Amsterdam, Barcelona and New York City formally launched the Cities Coalition for Digital Rights, a joint initiative to promote and track progress in protecting residents’ and visitors’ digital rights.

· On 8 April 2019 the European High-Level Expert Group on Artificial Intelligence published the Ethics Guidelines for Trustworthy AI. The guidelines put forward a set of seven key requirements that AI systems should meet in order to be deemed trustworthy. These guidelines frame humans’ fundamental rights as core to any AI system.

The most recent EU guidelines are grounded in the idea that AI should both respect rights and be robust enough to avoid unintentional harm. The EU guidelines acknowledge, however, that tensions may arise when balancing human rights and other principles. They leave open the question of how to decide whether AI is good for society.

Moreover, AI applications vary greatly in their scale, the types of data they use and the potential impact they have on society. This is why, when talking about AI and principles, we need a more meticulous and case-dependent approach to the ethical assessment of AI (see also an interview with Aimee van Wynsberghe in Forbes).

As we saw above, AI is about automated decision making that impacts the lives and freedoms of individuals. There is also another dimension at stake: the freedom of expression of algorithmically generated internet content.

The why-did-you-do-that button

The quantity of guidelines on trustworthy AI suggests that AI develops in a legal vacuum. This, of course, is not true. In Europe the GDPR gives some direction when it comes to automated (that is, algorithm-based) decision making.

The GDPR restricts automated decision-making because automated decision-making processes put individuals’ rights and freedoms at risk. Thus, the GDPR provides for a general prohibition of automated decision-making, including profiling, which produces legal effects concerning the data subject or similarly significantly affects him or her. This prohibition applies only if a decision regarding the data subject is taken by automated means without any human assessing the content of that decision.

If processing activities based on automated decision-making are permissible, they shall be subject to suitable safeguards to protect the data subject’s rights, freedoms and legitimate interests, including specific information to the data subject. The controller should use appropriate mathematical or statistical procedures for profiling, implement technical and organizational measures, and secure personal data in a manner that takes account of the potential risks for data subjects and prevents, inter alia, discriminatory effects on individuals.

As there usually is a considerable imbalance of information between the controller and the data subject, additional information for the data subject must be deemed generally necessary in order to create an information level playing field.

The GDPR applies only to the processing of personal data wholly or partly by automated means, and to the processing other than by automated means of personal data which form part of a filing system or are intended to form part of a filing system. This means that when no personal data is involved, the GDPR does not apply. In such cases, other legal remedies against wrongful or harmful automated decisions should be invoked (e.g. competition law, the law on breach of contract, etc.).

The trust factor

In most guidelines on trustworthy AI there is a general consensus that, at least to some extent, the algorithm and its underlying data and logic should be disclosed, or at least explained, especially when the algorithm is likely to have an impact on the rights of citizens or the economic position of companies.

Stated simply, a transparent AI system is one in which it is possible to discover how and why the system made a particular decision or, in the case of a robot, acted the way it did.

In line with the principle of transparency, there is the principle of explicability. This means that the capabilities and purpose of AI systems are openly communicated and that decisions are, to the extent possible, explainable to those directly and indirectly affected (the IEEE suggests “a why-did-you-do-that button which, when pressed, causes the robot to explain the action it just took”). Transparency in AI requires explanation by placing decisions in a broader context and by classifying them along moral values.
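As a rough illustration of what such a button could look like in software, here is a hedged sketch in Python (scikit-learn assumed; the model, feature names and data are hypothetical): after the model decides, the rules it actually learned are printed in readable form.

```python
# Sketch of a "why-did-you-do-that button": expose the rules the model
# actually learned, in plain text. Data and feature names are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[120, 300], [80, 900], [200, 250], [30, 1200]]  # toy feature values
y = [1, 0, 1, 0]                                     # 1 = rank high, 0 = rank low
feature_names = ["pages_indexed", "load_time_ms"]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

def why_did_you_do_that() -> str:
    """Return the learned decision rules as human-readable text."""
    return export_text(model, feature_names=feature_names)

print(why_did_you_do_that())
```

For models more complex than this toy tree, such an explanation would have to summarize rather than dump the full model, which is exactly where the trade-off between explicability and accuracy mentioned below comes in.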

Transparency and explicability are often used in the context of creating algorithmic trust and opening “the black box”. Fenwick and Vermeulen (2019) argue that, from a regulatory point of view, governments should engage in the innovation ecosystem and set rules that enable innovation. Governments are among the key stakeholders able to establish the rules of the “trust game.” Such rules must relate to transparency, disclosure, and having an open dialogue with the market.

Cathy O’Neil, author of the book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, writes: “Data is not going away. Nor are computers — much less mathematics. Predictive models are, increasingly, the tools we will be relying on to run our institutions, deploy our resources, and manage our lives (…)”. Algorithm-based decision models “must also deliver transparency, disclosing the input data they’re using as well as the results of their targeting.”

But there is also opposition to algorithmic transparency and explicability. New and Castro (2018) point out that there are several flaws in algorithmic transparency and explicability requirements. They argue that principles on transparency and explicability:

· hold algorithmic decisions to a standard that does not exist for human decisions

· incentivize organizations to not use algorithms, thus sacrificing productivity

· fail to address the root cause of potential harms

· assume the public and regulators can interpret source code for complex algorithms that even developers themselves cannot always understand

· undermine closed-source software, reducing incentives for innovation

· make it easy for bad actors to “game the system”

· create incentives for the use of less-effective AI, as there can be trade-offs between explicability and accuracy for complex AI

Search engines

Search engines provide an important infrastructure for politics, humans and businesses. Search engines have the power to build an online presence and reputation on the web. They are extremely important in contemporary innovation and creating new communities. They literally govern online life.

They not only have the power to track and monitor one’s online behavior; they also generate reputations based on algorithms and data. This implies that a search engine can slow down the speed or reduce the ranking of a website in ways that are very hard for users to detect.

And this was exactly the case in the Good Wife’s imaginary but so real Chumhum vs Motions case: the online presence of Motions was adversely affected by the search algorithm of Chumhum without the possibility for Motions to detect what exactly caused the lower ranking.

Chumhum didn’t provide a why-did-you-do-that button with its search algorithm. It basically had two arguments:

· the algorithm is company confidential information

· the results of the search engine are protected by freedom of expression

Transparency

The first argument relates to transparency versus company confidentiality. As New and Castro (2018) argued, algorithmic transparency could undermine closed-source software, reducing incentives for innovation.

In open market competition, however, we presume that an economically optimal situation is induced by the forces of supply and demand. The level of information on the supply and demand side is symmetric. There is full transparency. Yet, when one market party has more information than the other, a market failure occurs due to the deficit of market transparency. In the imaginary case of Chumhum vs Motions this market failure happened because it was unclear why the search ranking of Motions changed.

According to Pasquale (2010), as a consequence, law must now concern itself not only with the accumulation or flow of information, but with what results from that information: the rankings, recommendations, or ratings derived from it. Pasquale argues that, to avoid such self-reinforcing cycles of advantage, search engines’ ranking practices should be transparent to some entity capable of detecting both the illicit commodification of prominence and the privacy-eroding practices engaged in by these intermediaries (“qualified transparency”).

Transparency is not a one-size-fits-all solution. The degree to which transparency and explicability are needed is therefore highly dependent on the stakeholders involved, the context of the data processing and the type of AI used. Several stakeholders have an interest in transparent AI (or in the protection of company confidential information).

For users, transparency is important because it builds trust in the system by providing a simple way for the user to understand what the system is doing and why. Moreover, for disruptive technologies such as driverless cars, a certain level of transparency towards wider society is needed in order to build public confidence in the technology. However, it wouldn’t make sense to fully open up a machine learning network architecture. For the end user this won’t add any value and may even undermine trust. What the end user needs is an explanation of how the algorithm came to its decision. This explanation can take the form of an instruction leaflet like the ones that come with medicines.

For the validation and certification of an AI system, transparency is important because it exposes the system’s processes to scrutiny. For validation purposes transparency comes with technical information on the data, the feature selection, the train-test-validation split and cross-validation techniques, the density of e.g. a neural network, the reward scheme, the dimensionality of the machine learning application, etc. This information relates to the technical choices made by the developer of the algorithm and is needed to assess mathematical accuracy, robustness, performance on new datasets, etc.
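By way of illustration, the sketch below (Python with scikit-learn and NumPy; the dataset is synthetic) shows two of the items such a technical dossier would typically document: how the data was split into training and test sets, and a cross-validation estimate of performance on unseen data.

```python
# Sketch of validation information: how the data was split and how the
# model performs on data it was not trained on. The dataset is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # 200 synthetic samples, 5 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic target variable

# Hold out a test set the model never sees during training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# 5-fold cross-validation: a more robust estimate of performance on new data
scores = cross_val_score(DecisionTreeClassifier(max_depth=3), X, y, cv=5)
print("cross-validation scores:", scores)
```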

For lawyers, investigators and judges in legal proceedings, transparency is needed to find proof and establish the facts of the case. Full transparency, however, could harm the company’s competitive advantage or capacity to innovate, especially when a legal proceeding is nothing more than a fishing expedition. Qualified transparency could be a solution to overcome this objection.

Freedom of expression

The second argument, the freedom of expression argument, is an interesting one. It loosely resembles, for example, the Browne v. Avvo case (for other comparable discussions and cases, see Volokh, 2012). John Henry Browne and Alan J. Wenokur claimed that Avvo’s website, on which information about attorneys and a comparative rating system appears, violated the law. Avvo argued, among other things, that the rating system and the re-publication of public records are protected by freedom of expression.

The judge ruled in favour of Avvo, holding that to the extent that Browne c.s. sought to prevent the dissemination of opinions regarding attorneys and judges, the freedom of speech argument precludes their cause of action.

Many complaints about Avvo revolve around the attorney’s inability to control what is on the profile. Avvo creates a profile for each attorney whether the attorney wants one or not. It will not delete an attorney’s profile even if the attorney files a lawsuit demanding that it be removed.

The freedom of expression argument is an exciting one. Search engines deserve roughly the same amount of freedom of expression protection as “normal” newspapers, although it is remarkable that an algorithmically produced “commercial” ranking is considered to have roughly the same level of protection as a human, non-commercial expression. As Jared Schroeder (2018) said: “Google is not a newspaper and algorithms are not human editors.”

On the other side of the divide, the automation process increases the value of the speech to readers beyond what purely manual decision making can provide. It can process information more efficiently than human editors. Search engines exercise editorial judgment about what constitutes useful information and impart that information to the users. Therefore, they are shielded by freedom of expression (Volokh, 2012).

The problematic side of this argument, however, is that there is something deeply troubling about unaccountable power: about a system that can spit out a life-changing result without giving any explanation for it (Pasquale, 2008). We must recognize that the opinion of an algorithm is not on the same level as a human opinion. We shouldn’t let algorithms have too much power over free speech without appropriate checks and balances. Search engines should be accountable for harmful content.

One idea is for third-party bodies to set standards governing the distribution of harmful content and measure companies against those standards. Regulation could set baselines. A chilling effect, however, must be avoided. Regulation must not discourage people or companies from expressing their opinions. Nor must regulation impede innovation.

Politician: start watching the Good Wife

The question therefore should be who has the right to take AI-based decisions, and by virtue of what kind of democratic process this right is given to them.

When answering this question, we must take into account several interests: societal values, moral and ethical considerations, and economic, legal and technical considerations. We must weigh the respective priorities of the values held by different stakeholders and explain the reasoning. In democratic societies this is typically something that is done in the political arena.

A related challenge is what kind of private governance we will have or allow: how will governments employ new school speech regulation techniques, how will they attempt to co-opt search engine owners, and how will search engine owners respond (Balkin, 2018)?

This requires, at any rate, that politicians become more tech-savvy than they are now, in order to be serious conversation partners and to remain relevant in a digital world (Fenwick and Vermeulen, 2019). So start watching the Good Wife.

References

Frank Pasquale, Asterisk Revisited: Debating a Right of Reply on Search Results, Journal of Business and Technology Law, 2008

Frank Pasquale, Beyond Innovation and Competition: The Need for Qualified Transparency, Northwestern University Law Review, 2010

Eugene Volokh, Google, First Amendment Protection For Search Engine Results, April 20, 2012

Joshua New and Daniel Castro, How Governments can Foster Algorithmic Accountability, Center For Data Innovation, 2018

Jared Schroeder, Press protections might safeguard Google’s algorithms, even from Trump, Columbia Journalism Review, September 6, 2018

Jack M. Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, 2018

Mark Fenwick and Erik P.M. Vermeulen, It Is Time for Regulators to Open the ‘Black Box’ of Technology, But We Should First Start with Reforming Education, April 2019
