“Don’t Worry About the Vase”: How a ’90s Movie Predicted the Future of Media

In light of recent events regarding Facebook, it seems of particular importance to take time in reassessing our current modes and technologies of media. And there is a work of a now more classical variety of media which provides an ideal template on which to employ such analysis.

It has been 20 years and three weeks since the debut of modern sci-fi classic The Matrix. I still remember, in the days leading up to its release, an ad campaign both mysterious and ubiquitous, centered on a promotional website that posed a single question: “What is the Matrix?”

It may be as difficult now to convey how fascinating that question once was as it is to envision or remember the age into which The Matrix was born. But stick with me, and we’ll give it a shot.

As of March ’99, the dot-com bubble was in full expansion. Y2K fears were growing ever more feverishly toward anti-climax. Bill Clinton had just been impeached (and acquitted). Google was in its first year of incorporation. The Euro was introduced as an accounting currency. Columbine would take place in a matter of weeks. And the twin towers were yet to fall.

From a media perspective, box office was king. It was a moment before the re-established victory of television via streaming. A single DVD was yet to sell a million copies (an honor which would go to The Matrix itself). Celine Dion and Marcy Playground were somehow simultaneously topping the charts. And, though plenty of would-be investors were feelin’ (foolhardily) bullish about the net, the tidal shift of a majority interest in computer technology was still in early days.

It appears, in hindsight, that many of the people surfing to that inevitably primitive site must have been doing so in a way which distinctly characterizes the odd but enduring art-reality-mirror quality of the film, paralleling the search for fictional understanding with a quest to apprehend the information superhighway itself.

Though I understand that many have never seen or only vaguely recall the film, I am writing in support of an idea that its release has been — at least symbolically — pivotal in the cultural transformation of the last 20 years.

The Matrix established and/or forecast a new kind of discussion concerning the conditions and possibilities of our experience, one which appears now in popular media as much as in everyday conversation. It also anticipated, with startling accuracy, the conditions of externality-chasing, of our current and perhaps coming data economy.

In other words, the way we query our world and the way we interact with our technology are influenced and oddly predicted by a basically random piece of pop culture.

To slow it down a bit, The Matrix is an American film directed by Lana and Lilly Wachowski (then credited as Laurence and Andrew) which follows Neo, a young computer hacker ambitious to discover the ‘true’ nature of reality, into a world beyond the world where, with the help of new allies — Morpheus, Trinity, and the remaining crew of a subterranean hover ship — he must cultivate the abilities to navigate these competing planes and battle the forces which strive to maintain humanity’s most captivating illusions.

The film was enthusiastically received and two sequels followed, accompanied by anime, comics, toys, video games, and any number of printed philosophical, religious, and critical interpretations of the work.

For the purposes of this article, we will focus on the first release of the franchise as an originator of themes.

One fascinating aspect of The Matrix’s legacy is that members of emerging generations — if only nominally aware of the film’s existence — use elements of it as devices for ontological inquiry.

When someone yells, “Adrian!” (to the likely chagrin of everyone around them), we understand it as a reference to 1976’s Rocky. Try “Here’s lookin’ at you kid” from Casablanca or “Shane, come back!” from, well, Shane. These phrases have meaning because of their established cinematic contexts. And they function solely as reproductions of that context.

They work exclusively as reenactments of scenes we all know.

However, expressions or concepts from The Matrix retain their meaning free and independent of the source material. “Wake up, Neo” has a common significance even if you haven’t seen the film, and the question of a simulated existence — once relegated to the terrain of mystics and philosophers — is now an accepted and culture-wide discussion.

A similar shift is exhibited in popular entertainment. Beyond shows like Rick and Morty, which work foundationally in the niche which The Matrix has established or codified, an array of brief moments in film and television across genres feature Matrix references, and the trend is further demonstrated in popular music.

The notions of the blue and red pill (not always unfortunate political signifiers), of a glitch in reality, even of the Matrix itself are now free-standing instruments with which we measure and indicate our experience. And they are often applied more readily to the actual world — or valuations of it — than to a fictional one.

So, why does it matter?

That the phrases and ideas of The Matrix have been adopted for use independent of their filmic context suggests utility. We collectively employ these things in a new form, simply enough, because we collectively require them.

By way of an example of comparable adoptions in another time, the popularization of Freudian analysis, of existentialist thought, and of film and literary noir all relate as conceptual responses to real events, namely to World War II. We needed these perspectives to assist in processing an unprecedented and massively influential occurrence: a global conflict centered on the industrialization of violence.

So, what condition, which ‘massive occurrence’ justifies the need for our adoption of these particular once-cinematic indicators? Why is it that we have so enthusiastically taken up The Matrix’s vocabulary?

To answer that question, we have to take a bit of a left turn.

In 2018, a handful of very smart people got together and wrote a paper entitled “Should We Treat Data as Labor? Moving Beyond ‘Free’.” The authors — including, notably, Jaron Lanier of VR fame and E. Glen Weyl, co-author of the 2018 bestseller Radical Markets — argue, essentially, the following points:

1) Data is personal property

As a quick and easy definition, data is (informational) measurements which can be shared digitally. Every time we tag a photo, solve a CAPTCHA, like, rate, comment, review, upload, move about the physical world with a GPS-enabled device, use a rideshare, command a digital assistant, speak near our phones, smart TVs, et cetera, or even look into the lenses of our various smart devices, we are producing data.

These are often highly intimate measures of us and our daily activities, and they are hoovered up, sorted, and sold, by and large, without enough understanding of the burgeoning data economy’s relevant motivations, processes, or hierarchies of value for us to properly consent.

In light of current standards which may conceivably be presented as data exploitation, the authors suggest — as many of us already intuit — that data is and ought to be broadly regarded as “user possessions.”

2) Data is valuable

Our data contributes to the production and maintenance of, strangely (but truly), artificial intelligence, a massive and naturally multi-tiered venture representing inordinate amounts of publicly-declared funding (and implying plenty of undeclared investments) which, it just so happens, may be a major contributor in determining the next world power.

This can be a difficult concept when first encountered — partially because we tend to think of AI as freestanding — but your hours of watching cat videos aren’t as wasted as they may feel when you show up to work the next day.

AI, in most cases, relies on exposure to HUGE amounts of constantly updated, internet-friendly data (such as is produced while staring at YouTube) in order to develop, to establish the requisite pattern recognition, and to maintain and advance the relevance of those processes.
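If the relationship between our viewing habits and an AI’s “pattern recognition” feels abstract, here is a deliberately tiny sketch of the idea; all data, names, and the counting method itself are invented for illustration, a miniature of the principle rather than how any real platform works:

```python
from collections import Counter

# Toy sketch (all names and data hypothetical): a "recommender" that learns
# which video category a user tends to watch next, purely by counting.
# The more watch events it sees, the sharper its pattern recognition gets,
# which is a miniature of why platforms want endless streams of user data.

def train(watch_log):
    """Count how often each category follows each other category."""
    transitions = Counter()
    for prev, nxt in zip(watch_log, watch_log[1:]):
        transitions[(prev, nxt)] += 1
    return transitions

def recommend(transitions, current):
    """Predict the most likely next category after `current`."""
    candidates = {nxt: n for (prev, nxt), n in transitions.items() if prev == current}
    return max(candidates, key=candidates.get) if candidates else None

# Simulated history: mostly cats lead to more cats, occasionally to music or news.
history = ["cats", "cats", "music", "cats", "cats", "cats", "news", "cats", "cats"]
model = train(history)
print(recommend(model, "cats"))  # the pattern the accumulated counts reveal
```

Scale the watch log up by nine orders of magnitude and swap the counting for a neural network, and you have the basic economics of the data pipeline.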

To reiterate, cat videos = AI = the next world power.

: )

And that’s just one aspect of the equation. While it’s fueling and optimizing AI, the world’s data is also being mined and aggregated, in part, for the purposes of user identification across platforms in the misleadingly titled “advertising model” of the internet — which is more a giant for-profit study in behaviorism.

As a general (and hypothetical) example, if I can purchase information from Wells Fargo’s mobile platform, Uber, Tinder, and Facebook — just those four — I could conceivably get a sense, so precise as to be predictive, that George H. in SpecificTown, NJ has corresponding interests, political leanings, psychological tendencies, sexual preferences, annual income, navigational and spending patterns, and font/word color responses to these other 100,000 people about whom I have acquired data.

Based on that, I could potentially sell to the highest bidder a behavioral nudge across that population informed by already measured shifts (It got a little blurry in the overall media exchange, but this is essentially the model that Cambridge Analytica was exploiting, using only one platform). Creepy but, to the point, SUPER profitable.
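To make the hypothetical concrete, here is a toy sketch of the kind of cross-platform stitching described above; every platform name, field, and record is invented for illustration and drawn from no real API:

```python
# Hypothetical sketch: how a data broker might stitch together profiles
# from several platforms (all records invented) and find "lookalike"
# users as candidates for the same behavioral nudge.

bank = {"george@example.com": {"income": 72_000}}
rideshare = {"george@example.com": {"night_rides_per_week": 4}}
social = {
    "george@example.com": {"political_lean": "center", "likes": ["sneakers", "crypto"]},
    "maria@example.com": {"political_lean": "center", "likes": ["sneakers", "yoga"]},
}

def merge_profiles(*sources):
    """Join per-platform records on a shared identifier (here, an email)."""
    merged = {}
    for source in sources:
        for user_id, fields in source.items():
            merged.setdefault(user_id, {}).update(fields)
    return merged

def similarity(a, b):
    """Crude overlap score: count of shared interest tags between two profiles."""
    return len(set(a.get("likes", [])) & set(b.get("likes", [])))

profiles = merge_profiles(bank, rideshare, social)
george = profiles["george@example.com"]
lookalikes = [uid for uid, p in profiles.items()
              if uid != "george@example.com" and similarity(george, p) > 0]
print(lookalikes)  # users flagged as likely to respond like George
```

The unsettling part isn’t any single record; it’s that the join key (an email, a phone number, a device ID) lets four mundane datasets become one predictive one.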

By the way, is it starting to seem weird that the users whose interactions create all this data do it for free?

3) Data will eliminate traditional jobs

Lanier et al.’s paper cites a University of Oxford study stating that “about 47 percent of total US employment is at risk” of being lost in the coming decades to automation (AI).

The Data as Labor authors go on to suggest that this will only remain a probable scenario if the data production which is largely training and improving that automation (AI) continues to go uncompensated.

And, while some are responding with more passive positions like a Universal Basic Income (UBI), the Data as Laborers alternatively promote “data work as a new source of digital dignity.”

4) Data should be compensated

The concluding point of “Should We Treat Data as Labor?” is that, yes, because data is a possession of the user; because it is immensely valuable; and, because it stands to create the conditions by which its producers will be made redundant, data should be compensated as labor.

This is another difficult consideration to take in at first.

But, we’ve gone post-industrial, and as the concept and practice of work underwent a monumental shift at the onset of the Industrial Age, so now in the Age of Data, we will have to reconceptualize labor once again.

Consider the possibility that if you’ve ever been exhausted after traversing a YouTube rabbit hole, anxious while scrolling through Instagram, or depressed while binging a Netflix series, that it’s because — in part — you are at work. You are laboring in the mine, and no one’s told you how much gold is worth.

In case the equations are helping (and because I kinda like writing ‘em): 
 Data = $$$

While we should probably all be lobbying our local whoever or trying to establish legal precedent for user rights in advance of whatever political shitstorm the Kali Yuga throws at us next, the fine point here is more on identifying the parts (user, data, and data profiteer) and the whole of the data economy — which is now the world economy.

It is to address the new operating system for which we mostly failed to receive an update notification.

Or, trying on another metaphor, in rising from the remnants of the last century, most of us think we’re still playing baseball; we are playing Tron.

And while the objective world may remain the same, there is certainly a new map for it. Many of us perceive this new overlay of the world through what tends to be our most immediate point of access: (the customer-facing facets of) social media.

Let us momentarily pivot/expand on an obstacle to understanding concepts such as the above.

It is a challenge to apprehend the data economy, the facts of social media, or truly cyberspace itself because these are all models based on models. There is no actual foundation from which they derive.

I would argue that here is one of the leading motivations for our borrowing of terms to interrogate the ‘realness’ of reality. We’re all spending hours a day staring into a construct [read: the internet] designed to evoke sentiments and confirmations of the actual world, but it is not ultimately applicable to that world (rather, again, only applicable to a model of it).

And, after all the scrolling, the binging, and the trending, when we reengage with our native experience, it might feel — to use industry parlance — uncanny.

This is all reminiscent of a term coined by Jean Baudrillard (whose work makes a key cameo in The Matrix), hyperreality:

“The generation by models of a real without origin or reality.”

A painting is a map of something. A painting of a tiger is a reference to a real-world tiger. A splash of blue may be a reference to a mood, a real-world transmission of neurochemicals and whatever emotive contributions for which the spirit may be accountable.

But, the internet is a painting of a theory — once more, a model of a model.

Appreciating now the difficulty — in terms of both comprehension and emotional health — presented by the internet and its social media façade or point of entry as hyperreal, let’s get back to it.

Our social media is run by algorithms: sets of step-by-step instructions (built from conditions of the “and,” “or,” and “not” variety) which might be visualized on a spectrum between basic chalkboard formulas and the backbone of AI — that backbone being the set of instructions which directs an AI to problem-solve and to learn from its efforts.
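For a feel of what those and/or/not conditions look like when pointed at your attention, here is a minimal, invented sketch of an engagement-ranking rule set; the features, weights, and posts are assumptions for illustration, not any platform’s actual code:

```python
# Invented sketch of an engagement-ranking "algorithm": a few and/or/not
# conditions combined into a score that decides what you see next.
# All weights and features are hypothetical.

def engagement_score(post):
    score = post["predicted_watch_seconds"]
    if post["provocative"] and not post["already_seen"]:  # and / not
        score *= 1.5  # outrage holds attention
    if post["from_friend"] or post["trending"]:           # or
        score *= 1.2  # social proof holds attention
    return score

feed = [
    {"id": "calm-essay", "predicted_watch_seconds": 40,
     "provocative": False, "already_seen": False,
     "from_friend": False, "trending": False},
    {"id": "outrage-clip", "predicted_watch_seconds": 35,
     "provocative": True, "already_seen": False,
     "from_friend": False, "trending": True},
]

feed.sort(key=engagement_score, reverse=True)
print([post["id"] for post in feed])  # what gets shown first
```

The point of the sketch is structural: a handful of simple conditions, compounded across billions of impressions, is enough to decide what most of the planet sees next.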

And a major socioeconomic element of platformed user ‘engagement’ which goes collectively misunderstood is the capacity of narrow AI (as structured by algorithms) to outcompete humans at any number of respective tasks while simultaneously using human data to make the effort.

Let me restate that.

As artificial intelligence develops (in line with its algorithmic directives), producing massive profits for the comparatively slim range of society initiated into the business-facing aspects of the platformed internet, it is — in its current context — directed to hold the attention of users so that they may produce more data which will further train the AI and further generate already outlandish wealth.

You’re addicted to Snapchat because it’s profitable.

Your addiction is a design element.

It produces money for people and resources for AIs, which produces more money for people and resources for AIs, which . . . déjà vu, ad nauseam.

Consider what Aza Raskin, former creative lead for Firefox and inventor of infinite scrolling, suggests: that the methods involved in keeping our eyes on our screens amount to “hacking our neuroplasticity.”

As AIs have been trained to defeat the most exceptional human performers of chess and Go, what does it mean to have a program, at least quantitatively intimate with human behavior and models of addictive environments, outcompete you for your attention?

As I was pondering this myself, I suddenly pictured the definitive scene of The Matrix, the exact moment at which its titular architecture and its motivations are precisely defined — when we get the straight answer to “What is the Matrix?”

[SPOILER] Morpheus, the archetypal mentor in Neo’s Campbellian journey, explains to our confounded hero that The Matrix is a computer program which exists to distract and entertain the minds of human beings while their energy (bioelectrical information) is harvested to maintain the life of machines. [END SPOILER]

Cue product bell.

It occurred to me then to wonder, would it be appropriate, when you find yourself unable to resist the pull of another data-driven social platform, to describe that experience as being ‘farmed’?

To state the consideration once again, our eyes are pinned to our many screens because the duration of our attention corresponds positively to data collection and distribution for profit (the externalities motive of networked capitalism).

And the ever-complexifying goal-oriented roots of our platforms themselves are employed to provide the methods of entrapment: the shiny things, the provocative things, the levers in the rat cage online, essentially.

Largely unsuspecting, we are being farmed for data.

Every time you pick up your phone to check a work message and find yourself, 90 minutes later, shopping for Crocs or watching videos of drunk people popping zits at Coachella or . . . whatever, The Matrix has you — forrr reals.

While AI and its requisite brand of algorithm present a myriad of benevolent and downright miraculous potentials for the human race, they have also disrupted the methods of our democratic systems and are redefining the dynamics of the stock market, the terms of military engagement, methods of pair-bonding, and, as we’ve discussed, the most visible identifiers of human effort and industrial force.

We may yet make any imperative corrections and discover that we are better off with life disrupted.

Or, to use the imagery in this article’s title, if the vase breaks as predicted, it may not be cause for so much dismay.

But, if we are consistent in utilizing a Matrix analogy — one which is, after all, ultimately a story of redemption — we will likely need to be prepared to clean up a mess.

So we may still need to address having trained an interconnected legion of machined ‘intelligences’ (keeping in mind that the intelligence descriptor used at this time is still predominantly an anthropomorphization) to exploit our characteristic vulnerabilities and biases, and to develop with as much efficiency as possible, using the runoff from our hacked attention (data) as fuel.

It is genuinely not my intention for any of that to come off as alarmist.

I consider this an incomparably exciting time to be alive and suspect that every point of human civilization has carried with it a sense of impending doom — though I think one way that we evade such doom is by heeding the anxiety that its contemplation produces.

As such, it may be a problem that, on the whole, all this disruption translates as sometimes frustrating, but mostly unimpressive.

Perhaps all of the profanity, absurdity, outright stupidity, and everydayness of the content to which we are methodically drawn, the content which defines many of our conceptions of the web in total, serves as a brand of camouflage for its consequential profundity.

In the same way that we strain to connect our hours of cat-video viewing to the advancement of surgery bots or implications of national power, it is difficult to follow the rationale which highlights the amusement park of social media/the accessible internet as a vital influence in the lofty terrains of philosophy, economy, society, or humanity.

And yet, its influence remains.

Here may be the greater condition for which we require a Matrix lexicon, for which we have extracted — culture-wide — references to a film about AIs who distort apprehensions of the real in the process of farming humans for power.

If this appears currently melodramatic, consider it just an anticipatory accounting for Moore’s law.

In any case, solutions may be likely to emerge.

Lanier suggests in his popular writings that, while we (rather than the platforms themselves) have become the product, our participation or lack thereof in our own commodification can drive restructuring where needed.

In other words, if you catch your media platforms being creepy, delete your account.

Raskin implores us to possess the self-awareness to invest our attention in what he calls “human protective design” when architecting new technologies, especially intelligent ones, so that we may counter the known quantities of our hackability.

Restated, we must accept that we are easy to manipulate, especially in current technological terms, and so actively approve the most transparent and beneficial manipulations.

It seems inherent in this suggestion that the election of our interfaces must necessarily then precede our political elections — and possibly our more individual decisions.

As it stands, our platforms have direct effects on the way we choose our governments and how we tend to our own well-being.

And, as this article is folded around the analysis of a film, it is relevant to hope, also in the way of a solution, that we may balance out our particular bias for manifesting the obstacles we encounter on our movie screens.

We may be thankful, if The Matrix has had such an influence as proposed, that it is a story ultimately, again, of redemption and not resolute failure like 1984 or other fictive works of warning which we seem insistent on realizing.

Still, let us for once accept willingly from our cautionary media a demand and a directive of caution.

In the 20th year since the release of The Matrix, it is clear that the earnest curiosity behind at least some of the traffic to that early promotional site may be valuable to us in reinvocation.

As sure and as savvy as we often feel, a clear message we can derive from the film and its cultural insistence is to remain collectively aware of our technology and its influence.

Regarding the persistent dramatic query of “What is The Matrix?”, we can now lucidly and confidently provide an answer. But, in light of present evidence, how many of us can really explain the deceptively simple and now infrequently posited: What is the internet?

The Matrix’s foresight-on-the-verge-of-clairvoyance and its historical magnetism appear, in some hyperbole, as a sci-fi trope in itself, like an object emerging in the past by the efforts of some collective future.

And as the film is, in many ways, one about the interplay between predestination and choice — both in a grander philosophical sense and in matters of the everyday –, the message to ourselves in adopting it so thoroughly may be that we can choose the more brilliant fate.

It is eminently sensible that the gods associated with various cultures’ technologies are routinely the tricksters, and, at their cores, bringers of fire.

The Matrix may be, resolutely, the tale of the often destructive power which also lights the way of our human endeavor. And it can be read as suggesting the stance from which the latter of its nature may be encouraged above the former.

If we remember to closely regard the joint forces of narrative and technology and to insist on reformation where they stray from our service and from our real values, perhaps we can securely and lastingly empower ourselves with the heroic responsibility to remain human and awake among our creations.

“I don’t know the future . . .

I didn’t come here to tell you how this is going to end. 
 I came here to tell you how it’s going to begin.”

— Neo

Source: Artificial Intelligence on Medium
