
Do You Oppose Bad Technology, or Democracy?


Calls to Limit the Use of Bad Technologies Only by Law Enforcement and Governments, Largely Via “Ethics” and Self-Regulation, Exacerbate Rather than Ameliorate the Anti-Democratic Harms of Digital Technology

Recently, more of us have started to realize just how destructive digital technologies can be. That’s good. As someone who has been nearly screaming about the topic for over two decades now, I can only say that it’s about time.

Yet one of the most prominent strains of this criticism is one that we should be almost as concerned about. Among other things, it is a big part of what got us here in the first place.

This line of argument says that the solution to a technology being deeply destructive is to prohibit governments, and only governments, from using it.

Not just “governments,” of course, but, inherently, democratic governments, since authoritarian governments aren’t going to take the work of activists and critics seriously to begin with.

Only in the digital age, as far as I know, has such a perspective even been mooted as reasonable.

Rather than being a fringe perspective, it’s a core part of the most prominent ideology associated with digital technology. The ideology, called by scholars cyberlibertarianism or the Californian Ideology, combines fundamentally right-wing political assumptions, including opposition to government, with, in the words of philosopher of technology Langdon Winner, “ecstatic enthusiasm for electronically mediated forms of living.”

Without cyberlibertarianism, it is difficult if not impossible to understand the arguments some technology critics are making.

The latest and most pervasive example of this argument is found regarding facial recognition technology (FR), especially FR fueled by machine learning technology; related concerns have come up regarding autonomous vehicles, among other technologies. Critics have rightly pointed out that FR is discriminatory, perhaps unavoidably so, because there is no non-discriminatory world, let alone a non-discriminatory data set, on which these technologies can be “trained” so as not to replicate that discrimination.

Some critics have gone further and argued that FR, along with a lot of other technologies, is so invasive toward aspects of the social world that, even if it could be made non-discriminatory, it would still be unacceptable. To be clear, this is my view. The technology itself is too destructive to be deployed, at least without very serious regulation of and limitations on its deployment.

Yet by far the most widespread form of criticism about FR and associated technologies is that they should be banned for use by governments. Full stop.

These critics don’t say out loud what appears to be the only reasonable way to understand their demands, especially in the climate in which we live now: governments should not be able to use these technologies, but corporations and individuals should be able to.

This latter clause, although not always emphasized, is often implicit and sometimes explicit in these efforts to compel democracies to block only themselves from using dangerous technologies.

That’s the context in which we have to read the opinion of Brian Brackeen, the “black chief executive of a software company [Kairos] developing facial recognition services” who says that facial recognition is “an amazing technology capable of personalizing experiences, improving interactions and creating positive feelings” but that “In the hands of government surveillance programs and law enforcement agencies, there’s simply no way that face recognition software will be not used to harm citizens.”

It’s the same context in which we should read the call from the right-wing libertarian house organ Reason for serious oversight of the technology, a call in which the “threats” posed by facial recognition are limited to its use by “police” and “governments.”

At best, we are told that companies should be constrained by “ethics boards” and “pledges” — that is, industry self-regulation — the kinds of things that almost never work without actual laws and legal regulation, because companies will always pursue profit over “doing good”; in many ways, the very nature of companies demands that they do.

MIT Media Lab researcher Joy Buolamwini, one of the most visible commentators on this topic, and one of the most vocal proponents of industry self-regulation and (temporary) bans on use by police and governments, writes in Time magazine:

there is still time to shift towards building ethical and inclusive AI systems that respect our human dignity and rights. By working to reduce the exclusion overhead and enabling marginalized communities to engage in the development and governance of AI, we can work toward creating systems that embrace full spectrum inclusion.

Even more, the “Safe Face Pledge” Buolamwini has developed is explicitly a call for companies to regulate themselves, and to commit to “not facilitate secret and discriminatory government surveillance” and “mitigate law enforcement abuse.” Those sound good until you reflect on what the words “government” and “law enforcement” are doing in those clauses. At a very literal level, there is no other way to read them except to say that these practices are acceptable unless government is doing them.

(In this regard, it is hard to be surprised to learn that MIT Media Lab, for which Buolamwini works, served as an incubator for at least one FR company, a particularly disturbing one named Affectiva that measures human emotion with FR, and which emerges directly out of the Media Lab’s work on “affective computing.”)

This is not just the wrong solution: it is even more dangerous than the present situation. Not only does it leave companies free to decide what constitutes “applications that risk human life,” but it suggests that they can do this without law or regulation (words that occur nowhere in most of Buolamwini’s articles or in the Safe Face Pledge). It is on the order of asking Google to publicly pledge “don’t be evil.” How can we at this point not realize that these companies use self-regulation to their own advantage, and use anti-government rhetoric to prevent democracies from constraining them?

This is exactly the danger of cyberlibertarianism: rather than directing critique at the thing which is actually harming us, that critique is redirected toward the thing whose job it is to protect us from that harm.

It’s not as if this problem is confined to FR. It’s endemic to the digital world. Two of the most prominent “digital civil rights” organizations, the Electronic Frontier Foundation (EFF) and the Center for Democracy and Technology (CDT), are well-known, at least among the few of us who resist their self-depiction as primarily interested in human rights, for being committed above all to deregulationism, especially when anyone in the US government pushes back against Section 230 of the Communications Decency Act and suggests that the incredibly deregulated space in which internet companies operate should be contracted. EFF is in many ways the poster child for the danger of cyberlibertarianism: it tells the public — and many in the public believe it — that its main interest is in promoting “privacy.” Yet you don’t have to read very far at all in EFF’s material to see that it construes “privacy” as something that is generally violated only by “governments,” and that while at some level EFF would prefer that companies respect individual privacy, it also vigorously opposes nearly every effort by governments to demand that they do.

This is why EFF’s opposition to FR seems characteristically focused on its use by law enforcement; why EFF’s senior personnel curiously circumscribe their concerns to “government spying,”

[Image: Twitter profile of Jen Lynch from EFF]

and why EFF can even turn against a major industry figure like Mark Zuckerberg — bizarrely, and in a disturbing echo of populist rhetoric (an echo that is not unusual in EFF’s activism) — when he dares to suggest that regulation is the only remedy to the nightmare hellscape that the digital world has become. Remember, EFF still believes that what we have now is an “internet” so precious that its pro-industry campaigns are typically larded with claims that “the internet will break” or “the internet as we know it will end,” as if the internet as we know it — that is, the one in which certain corporate actors are able to act with near-absolute legal impunity — really is so precious that we should sacrifice many other obvious and critical human rights to keep it just the way it is. Even when thoughtful journalists, activists, and scholars point out this obvious aspect of EFF’s apparent “activism,” it maintains the same pro-corporate, anti-government position.

While these calls typically focus on law enforcement, they seem not even to acknowledge the structural function of law enforcement in government itself. Government is made of laws; if you deprive government of the ability to enforce law via one mechanism or another, you unavoidably oppose government itself.

No matter how dangerous a given technology is or might become, there is nothing more dangerous to the human fabric right now than to tear apart democratic governance. And this has been a major effect, if not always necessarily a primary goal, of cyberlibertarian ideology. Democracy is under major threat today in a way few of us alive thirty or forty years ago could ever have imagined. Digital technology, and even more so the political ideologies that enable the growth of that technology, has turned out to be central to antidemocratic forces.

Yet even now, some of those who claim to recognize that threat are literally arguing that democracies should respond to it specifically by constraining themselves, while not constraining dangerous technology with clearly antidemocratic tendencies.

Zoé Samudzi offers a pretty pointed analysis of the facial recognition debate, arguing that “it is not social progress to make black people equally visible to software that will inevitably be further weaponized against us.” And in the discussion of Google’s abortive effort to create an “AI ethics board” (actually an Advanced Technology External Advisory Council) — that is, to rely on corporate self-regulation to push into areas of technology that even Google recognizes have potentially dire consequences for everyone — MIT Technology Review asked 14 corporate and academic experts what Google should have done instead. Only 3 of the respondents, academics Os Keyes and Anna Lauren Hoffman and writer Adam Greenfield — none of them affiliated with a corporation or a corporate-funded research institute — call for regulation of, or an outright ban on, the technology. Worryingly, many of the respondents, at least in the excerpts provided in the article, appear to think that industry self-regulation done right would be adequate to address the problems with facial recognition and other technologies.

Asking governments to enact legislation that bans only governments from using technologies is at best odd. Without regulatory bodies, governmental agencies are the only entities over which governments can exert oversight. It is improbable that many legislative bodies would enact laws that say “we can’t use this tech, but anyone else can.” It’s not even clear what that would mean in practice. Even if the local police or the FBI is prevented from actively deploying facial recognition by its own employees, would it also be prohibited from purchasing or subcontracting those services to a private company that uses them completely legally? And if that were prohibited, would the prohibition extend to the police purchasing the results of facial recognition use, especially if the company — to pick one out of a hat, say Palantir — used terms like “proprietary methods” to black-box the services it provides? And if Palantir offers a “suspect identification” service to law enforcement, consider the work involved in piercing its legal veil of trade secrecy and active legality to show that the police are knowingly purchasing a service that is illegal only for them to use… now, ask about the police using private investigatory services who subcontract to Palantir… while it is conceivable that in some perfect world all of these loopholes could be plugged, the mechanism to do that would look an awful lot like a regulatory body.

We don’t even have to speculate about things like this happening. Just a few weeks ago, EPIC sued Google for warrantless searching — for doing things a company can do because it’s not constrained by law, and then selling or giving the results to law enforcement. Even if your concerns are exclusively related to what government does with bad technologies — concerns which I urge you to reconsider — nominally barring only governments from directly using those technologies won’t produce the effects you want.

We need democratic control of technology. We do not need democracies to step back even farther from using their powers to constrain technology. Those powers are democracy, in one of the only forms that mean anything.

