Does Facebook really mess with our minds?

Dear Dumbartonists (if I may),

As I was reading chapters 9–11 of Zuboff’s Surveillance Capitalism for our reading group meeting, I was reminded of a conversation I had with one of my undergraduate students. This student majored in Computer Science and STS, so he is both technically accomplished and a nuanced social analyst of technology. We were discussing Facebook, or rather the question of how Facebook should be regulated. It turned out that we had very different diagnoses of where the problem lay. I thought Facebook was too big; it had access to two billion pairs of eyes and when something is that large, it will simply be weaponized for all sorts of ends. Hence it should be regulated. But my student thought differently. He thought Facebook’s business model was immoral through and through, irrespective of how big it was. He thought that any company that was trying to maximize your engagement with it through continuous feedback loops with machine learning algorithms was beyond the pale. To put it differently, he thought that Facebook had the power to mess with our minds; thus, it was unacceptable.

I heard a lot of this student’s voice as I read chapters 9–11 in Zuboff. In Chapter 9, titled “Rendition from the Depths,” Zuboff argues that surveillance capitalism is particularly evil because it turns our deepest desires into profit. As she describes the activities of one Michal Kosinsky and his attempts to calculate personality types from Facebook data, she says:

“That the “personality” insights themselves are banal should not distract us from the fact that the volume and depth of the new surplus supplies enabled by these extraction operations are unprecedented; nothing like this has ever been conceivable.”

From Kosinsky, Zuboff moves to Cambridge Analytica — his illegitimate progeny — and then to Rosalind Picard and “affective computing” (the field of AI that uses machine learning to deduce “emotions” from voice and other data). And her refrain continues: the methods to extract “emotions” may be banal but they’re still unprecedented. At times like this Zuboff’s prose turns downright purple (“the swelling frenzy of institutionalization set into motion by the prediction imperative”)!

As the prediction imperative drives deeper into the self, the value of its surplus becomes irresistible, and cornering operations escalate. What happens to the right to speak in the first person from and as my self when the swelling frenzy of institutionalization set into motion by the prediction imperative is trained on cornering my sighs, blinks, and utterances on the way to my very thoughts as a means to others’ ends? It is no longer a matter of surveillance capital wringing surplus from what I search, buy, and browse. Surveillance capital wants more than my body’s coordinates in time and space. Now it violates the inner sanctum as machines and their algorithms decide the meaning of my breath and my eyes, my jaw muscles, the hitch in my voice, and the exclamation points that I offered in innocence and hope.

This combination of deep knowledge and behavior modification, says Zuboff, is bad in the worst sense: it restricts human freedom and stops us from exercising our “right to a future tense.” For Zuboff, surveillance capitalism is bad because it distorts our deepest personhood.

This whole mode of argument throws me into a bind. On the one hand, I agree completely that we need to regulate Facebook and Google and the mountains of data they are helping to create. On the other hand, the justification offered, that this is an unprecedented attempt to shape human subjectivity, strikes me as too broad, incorrect, and, quite frankly, something that plays into the hands of surveillance capitalists.

In this installment, let me focus on that last part. The whole pitch of Facebook and Google to everyone, including their advertisers, is that they offer the best targeted advertising ever. You want to reach an audience? We’ll find them for you — and hey, we’ll also show you audiences whom you haven’t even imagined. Why? Because we know our audiences deeply, perhaps better than they know themselves.

Zuboff takes this argument, which is made by the spokespeople of surveillance capitalism — Michal Kosinsky, Sandy Pentland, and others — at face value. And maybe bringing up the fear of mind control is the best way to get the wider public to buy into the plan to regulate surveillance capitalism. But as it stands, the argument strengthens Facebook and Google rather than weakens them. Cory Doctorow puts it better than I can:

And as to tech’s ability to distort our thinking, the idea that this is due to some kind of machine-learning secret sauce is something that Big Tech itself promotes (“Buy our ad-tech and we’ll sell your stuff to people who wouldn’t buy it otherwise”), but I don’t know why we’d conclude that these companies lie about everything except their sales literature. On the other hand, the fact that we have (effectively) one search engine that unilaterally decides what goes on the front page for every search query, and one App Store whose editorial policies decide whether certain political messages can or can’t be shown to Apple users. These have massive, obvious effects on public opinion, and no mind-control is necessary to understand how that works.

There’s a reason we fall into this trap. In his masterpiece Computation and Human Experience (and, for my money, still the best analysis of AI EVER), Phil Agre argues that the work of AI researchers can be described as a series of moves done together, a process that he calls “formalization”: taking a metaphor, often couched in an intentionalist vocabulary (e.g., “thinking,” “planning,” “problem-solving”), attaching some mathematics and machinery to it, and then being able to narrate the working of that machinery in that same intentional vocabulary. This process of formalization has a slightly schizophrenic character: the mechanism is precise in its mathematical form and imprecise in its lay form; but being able to move fluidly between the precise and the imprecise is the key to its power.
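To make Agre’s point concrete, here is a toy sketch of my own (not Agre’s, and not anyone’s production code): a humble breadth-first search over a graph of rooms becomes a “planner” the moment we narrate it in intentional vocabulary.

    from collections import deque

    # Mechanically, this is nothing but breadth-first search over a small graph.
    # Narrated intentionally, "the agent plans a route to its goal."
    def plan(graph, start, goal):
        """Return a list of states from start to goal, or None if unreachable."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            state = path[-1]
            if state == goal:
                return path
            for neighbor in graph.get(state, []):
                if neighbor not in visited:
                    visited.add(neighbor)
                    frontier.append(path + [neighbor])
        return None

    # Precise description: shortest path in an unweighted graph.
    # Imprecise description: "the agent figures out how to get to the kitchen."
    house = {"bedroom": ["hallway"], "hallway": ["bedroom", "kitchen"], "kitchen": ["hallway"]}
    print(plan(house, "bedroom", "kitchen"))  # ['bedroom', 'hallway', 'kitchen']

The mathematics is exact, the narration is loose, and the slippage between the two is where both the power and the mischief lie.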

This is exactly what happens when Facebook and Google interpret what they find through their voluminous data analyses. Consider Facebook’s infamous emotional contagion study. All Facebook did there was change some of the words that appeared in users’ feeds and then see whether users went on to use certain words more or less. But it used a certain intentionalist vocabulary (“emotional contagion,” i.e., people take on the emotions of their feeds) to narrate the workings of its technical machinery. Naturally people were outraged: how dare Facebook try to manipulate people’s emotions?
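To see how thin the machinery under that vocabulary is, here is a deliberately crude sketch, entirely my own invention rather than Facebook’s code: the actual study relied on the LIWC word lists and Facebook’s internal data, but the mechanical core is roughly this kind of word counting.

    # Toy word lists and posts, made up for illustration; the real study used LIWC.
    POSITIVE = {"happy", "great", "love", "wonderful"}
    NEGATIVE = {"sad", "awful", "hate", "terrible"}

    def emotion_score(post):
        """Fraction of 'positive' words minus fraction of 'negative' words."""
        words = post.lower().split()
        if not words:
            return 0.0
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        return (pos - neg) / len(words)

    # "Emotional contagion," in this vocabulary, just means: users whose feeds were
    # filtered to contain fewer positive words went on to post text with slightly
    # lower average scores.
    posts = ["feeling sad today", "what an awful commute", "lunch was fine"]
    print(sum(emotion_score(p) for p in posts) / len(posts))

Whether that counts as “manipulating people’s emotions” is precisely a question of which vocabulary you adopt.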

The contagion kerfuffle was ultimately a good thing because it brought to the fore questions about researchers’ responsibility to their experimental subjects. What it did less well was that it forced those who questioned the experiment to adopt the “emotional contagion” vocabulary that Facebook used to frame its study. Well, “forced” is probably the wrong word. Some didn’t care for the experiment because they just thought that Facebook’s result was wrong, or that Facebook fudged its results so that the experiment showed it to be more benign than it was. Others probably didn’t care all that much about how the experiment was described, and if the mind-control description helped get more people on board to regulate Facebook, then why not?

But it seems important to me — and I can’t quite tell why — that the argument for regulating Facebook be conducted without bringing in the specter of mind control. If the argument is conducted in terms of how Facebook’s algorithms violate our deepest humanity, it just strengthens surveillance capitalism. That’s because — contra Zuboff — surveillance capitalists do have a theory of happiness and freedom, and I fear that their theories have more cachet with the broader public. But that’s a topic for the next email.

What do you all think?

Best,

Shreeharsh
