There is one world in common for those who are awake, but when men are asleep each turns away into a world of his own.
- Heraclitus, 2500 years ago

We’re an empire now, and when we act, we create our own reality.
- Unknown official in the George W. Bush administration, 20 years ago
Do you feel that people you love and respect are going insane? That formerly serious thinkers or commentators are increasingly unhinged, willing to subscribe to wild speculations or even conspiracy theories? Do you feel that, even if there’s some blame to go around, it’s the people on the other side of the aisle who have truly lost their minds? Do you wonder how they can possibly be so blind? Do you feel bewildered by how absurd everything has gotten? Do many of your compatriots seem in some sense unintelligible to you? Do you still consider them your compatriots?
If you feel this way, you are not alone.
We have come a long way from the optimism of the 1990s and 2000s about how the Internet would usher in a new golden era, expanding the domain of the information society to the whole world, with democracy sure to follow. Now we hear that the Internet foments misinformation and erodes democracy. Yet as dire as these warnings are, they are usually followed by suggestions that with more scrutiny of tech CEOs, more aggressive content moderation, and more fact-checking, Americans might yet return to accepting the same model of reality. Last year, a New York Times article titled “How the Biden Administration Can Help Solve Our Reality Crisis” suggested creating a federal “reality czar.”
This is a fantasy. The breakup of consensus reality — a shared sense of facts, expectations, and concepts about the world — predates the rise of social media and is driven by much deeper economic and technological currents.
Postwar Americans enjoyed a world where the existence of an objective, knowable reality just seemed like common sense, where alternate facts belonged only to fringe realms of the deluded or deluding. But a shared sense of reality is not natural. It is the product of social institutions that were once so powerful they could hold together a shared picture of the world, but are now well along a path of decline. In the hope of maintaining their power, some have even begun to abandon the project of objectivity altogether.
Attempts to restore consensus reality by force — the current implicit project of the establishment — are doomed to failure. The only question now is how we will adapt our institutions to a life together where a shared picture of the world has been shattered.
This series aims to trace the forces that broke consensus reality. More than a history of the rise and fall of facts, these essays attempt to show a technological reordering of social reality unlike any before encountered, and an accompanying civilizational shift not seen in five hundred years. ♣
Read the list of statements below.
For each statement, write down whether it is true or false, and whether the issue is very important or not that important.
Pick the friend whose political beliefs are most different from yours, and to whom you are still willing to speak. Ask your friend to complete the questionnaire too. Explain to this friend why his or her answers are insane.
On the recent twentieth anniversary of 9/11, I reflected on how I would tell my children about that day when they are older. The fact of the attacks, the motivations of the hijackers, how the United States responded, what it felt like: all of these seemed explicable. What I realized I had no idea how to convey was how important television was to the whole experience.
Everyone talks about television when remembering that day. For most Americans, “where you were on 9/11” is mostly the story of how one came to find oneself watching it all unfold on TV. News anchors Dan Rather, Peter Jennings, and Tom Brokaw, broadcasting without ad breaks, held the nation in their thrall for days, probably for the last time. It is not uncommon for survivors of the attacks to mention in interviews or recollections that they did not know what was going on because they were not watching it on TV.
If you ask Americans when was the last time they recall feeling truly united as a country, people over the age of thirty will almost certainly point to the aftermath of 9/11. However briefly, everyone was united in grief and anger, and a palpable sense of social solidarity pervaded our communities.
Today, just about the only thing everyone agrees on is how divided we are. On issue after issue of vital public importance, people feel that those on the other side are not merely wrong but crazy — crazy to believe what they do about voter ID, Russiagate, critical race theory, pronouns and gender affirmation, take your pick. Americans have always been divided on important issues, but this level of pulling-your-hair-out, how-can-you-possibly-believe-that division feels like something else.
It is hard to imagine how we would have experienced 9/11 in the era of Facebook and Twitter, but the pandemic provides a suggestive example. Just as in 2001, in 2020 we faced a powerful external threat and had a government willing to meet it. But instead of unity, American society has experienced tremendous fragmentation throughout the pandemic. From early fights over whether banning foreign travel or using the label “Wuhan virus” was racist to later battles over mask mandates, school closures, lockdowns, and vaccine requirements, we googled, shared, liked, and blocked our way apart. Nobody was tuning in to the same broadcast anymore.
Of course, we have heard no end of laments for the loss of the TV era’s unity. We hear that online life has fragmented our “information ecosystem,” that this breakup has been accelerated by social division, and vice versa. We hear that alienation drives young men to become radicalized on Gab and 4chan. We hear that people who feel that society has left them behind find consolation in QAnon or in anti-vax Facebook groups. We hear about the alone-togetherness of it all.
What we haven’t yet made sense of is the fun that many Americans act like they’re having with the national fracture.
Take a moment to reflect on the feeling you get when you see a headline, factoid, or meme that is so perfect, that so neatly addresses some burning controversy or narrative, that you feel compelled to share it. If it seems too good to be true, maybe you’ll pull up Snopes and check it first. But you probably won’t. And even if you do, how much will it really help? Everyone else will spread it anyway. Whether you retweet it or just email it to a friend, the end effect on your network of like-minded contacts — on who believes what — will be the same.
“Confirmation bias” names the idea that people are more likely to believe things that confirm what they already believe. But it does not explain the emotional relish we feel, the sheer delight when something in line with our deepest feelings about the state of the world, something so perfect, comes before us. Those feelings have a lot in common with how we feel when our sports team scores a point or when a dice roll goes our way in a board game.
The unity we felt watching the news unfold on TV gave way to the division we feel watching events unfold online. We all know that social media has played a part in this. But we should not overestimate its impact, because the story is much bigger. It is a story about the shifting foundations of reality itself — a story in which you and I are playing along.
Hello again. I hope you and your friend are still on speaking terms after our fun collaborative activity. Now let’s try something completely different. Follow me, if you will, into dreamland.
Last week, I saw an ad for a movie in the newspaper, with an old clock in the background. But something was funny about it. The numbers were off — 2, 0, 2, 7….
Could this be a phone number? I tried dialing the first ten numbers and got an automated message: a woman’s voice in a nebulous accent reading out another series of numbers. I have been pulling at the thread ever since.
I did some digging and found out that I’ve stumbled on a kind of game: an alternate reality game. I’m guessing you may not have heard of them?
Alternate reality games are a lot like reading Agatha Christie or Sue Grafton or watching Sherlock. There is something deeply satisfying about unraveling a mystery story when we’re taken in by it. No one has really been murdered, but we still feel suspense until the puzzle is solved.
A good mystery writer will hide the clues in plain sight. She doesn’t have to do that. She could just describe how the detective solves the case. But she does it because she knows that we want to see if we can figure it out ourselves!
Now what if you could do more than just follow along with the story? What if you could actually be the detective? Say you notice a clue and you figure out what it means, and it tells you to look for a hidden message in a classified ad in tomorrow’s newspaper. The message tells you to go to a local bakery tomorrow at noon to find another clue.
That’s what happens in an alternate reality game. It’s a story that you play along with in the real world. It’s like an elaborate scavenger hunt, on the Internet and in real life, with millions of other people all over the world playing along too.
Speaking of which, I have a hunch what the numbers from the lady on the phone mean, but I can’t say anything else on an open channel. If you want to help us solve the puzzle, our Signal chat link is ⬛⬛⬛⬛⬛.
During the Trump era, as a wider swath of people began to pay attention to the online right, a group of game designers noticed disturbing parallels between QAnon, with its endlessly complex conspiracy theories, and their own game creations. Most notably, in the summer of 2020, Adrian Hon, designer of the game Perplex City, wrote a widely shared Twitter thread and blog post drawing parallels between QAnon and alternate reality games.
Theory: QAnon is popular partly because the act of “researching” it through obscure forums and videos and blog posts, though more time-consuming than watching TV, is actually more enjoyable because it’s an active process.
— Adrian Hon (@adrianhon) July 9, 2020
Game-like, even; or ARG-like, certainly.
An alternate reality game begins when people notice “rabbit holes” — little details they happen across in the course of everyday life that don’t make sense, that seem like clues. Consider the game Why So Serious?, which was actually a marketing campaign for the 2008 Batman movie The Dark Knight. The game started when some fans at a comic book convention found dollar bills with the words “why so serious?” and George Washington’s portrait defaced to look like the Joker. Googling the phrase led to a website … which directed players to show up at a certain spot at a certain time … where a skywriting plane appeared and wrote out a phone number … which led to more clues. Eventually you found out that there was a war going on between the Joker’s criminal gang and the Gotham Police.
The “game masters” don’t necessarily write out the whole story in advance. They might make up some parts of it as they go, creating clues in response to what players are doing. Some games offer prizes, like coordinates to a secret party. But really, the reward is just the satisfaction of solving the mystery.
The structural similarities between all this and QAnon, the game designers thought, were remarkable. In QAnon, too, the rabbit holes can be anywhere: YouTube videos, believers carrying signs at Trump rallies with phrases only other followers would recognize, or enigmatic posts on online message boards. QAnon, of course, also has a game master: Q, the unidentified person behind the curtain. Although he or she has lately been silent, Q used to send regular messages, which pointed to leaked emails, obscure news stories, and numerological puzzles.
As in QAnon, this blending of online and offline is typical of ARGs. Players navigate a thicket of websites, email accounts, even real phone numbers or voicemail accounts whose passwords you have to figure out. The game masters might send you an email from a character, leak a (fake) classified document on an obscure website, or send you to a real-world dead drop to find a USB drive. At one point in the Batman game, players were directed to a specific bakery, where they could give a name from the game and pick up a real cake with a real phone buried inside.
With both QAnon and alternate reality games, it can be hard to tell what is and isn’t “real.” Of course, QAnon followers think that their world is the real world, whereas ARG players know they are in a game. That’s an important difference. But the point of an alternate reality game is also to blur the boundaries of the game. In fact, many use a “this is not a game” conceit, intentionally obscuring what is real and what are made-up parts of the game in order to create a fully immersive experience.
Unlike role-playing games, in an alternate reality game you play as yourself. Part of what’s so much fun is the community that forms among players, mostly online. For devoted players, status accrues to finding clues and providing compelling interpretations, while others can casually follow along with the story as the community reveals it. It is this collaboration — a kind of social sense-making — that builds the alternate reality in the minds of players.
Likewise, once you get interested in QAnon, there is a rich community built through video channels, discussion forums, and Facebook groups. Late-night chats and brainstorming sessions create an atmosphere of camaraderie. Followers make videos and posts that provide compelling interpretations of clues, aggregate the best ideas from the message boards, and simply entertain others playing in the same sandbox. Successful content creators gain social status and make money from their work.
Most of all, QAnon followers find deep personal satisfaction, achievement, and meaning in the work they are doing to trace the strings to the world’s puppeteers. As the journalist Anne Helen Petersen wrote on Twitter: “Was interviewing a QAnon guy the other day who told me just how deeply pleasurable it is for him to analyze/write his ‘stories’ after his kids go to sleep.” That thrill is not unlike what you feel when you play an alternate reality game.
Maybe this idea that QAnon is like an alternate reality game was just a wild theory too. ARG designers and players may be prone to overestimate the importance of parallels, seeing clues where there are none. Or maybe it was prophetic, considering the role that QAnon adherents would play in the U.S. Capitol attack just a few months after this idea garnered widespread attention. Indeed, there is a case — and I am going to make it here — that the parallel can be fruitfully extended much farther.
Adrian Hon points in the right direction:
I don’t mean to say QAnon is an ARG or its creators even know what ARGs are. This is more about convergent evolution, a consequence of what the internet is and allows.
In other words, the similarities between QAnon and alternate reality games are not due to something uniquely insane about Q followers. Rather, Hon says, both are outgrowths of the same structural features of online life.
Hon writes that in alternate reality games, “if speculation is repeated enough times, if it’s finessed enough, it can harden into accepted fact.” And Michael Andersen, a writer who has dissected ARGs since the aughts, describes the appeal of seeing the finished game this way: “All of the assumptions and logical leaps have been wrapped up and packaged for you, tied up with a nice little bow. Everything makes sense, and you can see how it all flows together.”
Does this sound familiar? If you had encountered out of context the paragraph you just read, what would you think it was about? Widely held beliefs on Russiagate, perhaps? On the origins of the coronavirus? The 2020 election results? Covid hysteria?
Okay, time for another quiz. Read the list of statements below:
Each of these statements is based on information that has been reported as true by credible mainstream outlets (MIT Media Lab, NBC News, Newsweek, The New Yorker).
Which statement feels most telling — like it speaks to a much bigger story that demands further investigation? Pick one and do the research.
In October 1796, a report appeared in Sylph magazine that sounds peculiar to us today:
Women, of every age, of every condition, contract and retain a taste for novels…. The depravity is universal…. I have actually seen mothers, in miserable garrets, crying for the imaginary distress of an heroine, while their children were crying for bread: and the mistress of a family losing hours over a novel in the parlour, while her maids, in emulation of the example, were similarly employed in the kitchen…. with a dishclout in one hand, and a novel in the other, sobbing o’er the sorrows of Julia, or a Jemima.
Though this may seem silly now, there is reason to think that the eighteenth-century British moralists who panicked over the spread of a new medium were not entirely wrong. Defoe’s Robinson Crusoe, Voltaire’s Candide, Rousseau’s Emile, and Goethe’s The Sorrows of Young Werther, exemplars of the new modern literary form known as the novel, were more than just great works of art — they were new ways of experiencing reality. As literary critic William Deresiewicz has written, novels helped to forge the modern consciousness. They are “exceptionally good at representing subjectivity, at making us feel what it’s like to inhabit a character’s mind.”
Perhaps even revolution was the result. Russian revolutionary activity, in particular, was inextricably tied up with novels. Lenin wrote about Nikolay Chernyshevsky’s novel What Is to Be Done? that “before I came to know the works of Marx … only Chernyshevsky wielded a dominating influence over me, and it all began with What Is to Be Done?,” and that “under its influence hundreds of people became revolutionaries.” He later borrowed the novel’s title for his own 1902 revolutionary tract.
In our day, the departure from consensus reality began in innocent fashion, and with a different genre of entertainment: with wizards and dice rolls in 1970s basements. Board games, war games, and fantasy novels had all been around for a long time. What role-playing games like Dungeons & Dragons pioneered was using the same gameplay mechanics not to fight tabletop wars but to tell stories, centered not on armies but on individual characters of a player’s own creation. The point of playing was not to beat your opponent but to share in the thrill of making up worlds and pretending to act in them. You might be an elven warlock rescuing a maiden, or a dwarven paladin breaking out of a city besieged by orcs. Or you might be the “dungeon master,” the chief storyteller who decides, say, whether the other players encounter a dragon or a manticore.
The role-playing game is to our century what the novel was to the eighteenth: the social art form epitomizing and evangelizing a new mode of self-creation. Role-playing games became especially popular in the 1980s, fostering a moral panic over the corruption of the youth, and their influence now extends far beyond the tabletop itself.
As soon as the scientists, students, and computer hobbyists who loved Dungeons & Dragons began connecting with each other through what would come to be called the Internet, they began to play games together. On top of the early text-based online world, they created chat protocols for role-playing games. It was an early form of what Sherry Turkle called “social virtual reality.”
Many of the systems we now use online have their structural origins in the world of role-playing games. Video games of all sorts borrow concepts from them. “Gamified” apps for fitness, language learning, finance, and much else reward users with points, badges, and levels. Facebook feeds sort content based on “likes” awarded by users. We build online identities with the same diligence and style with which Dungeons & Dragons players build their characters, checking boxes and filling in attribute fields. A Tinder profile that reads “White nonbinary (they/her) polyamorous thirtysomething dog mom. Web-developer, cross-fit maniac, love Game of Thrones” sounds more like the description of a role-playing character than how anyone would actually describe herself in real life.
Role-playing games combined character-building, world-building, storytelling game masters, puzzle-making, and rules for scoring points and making decisions — all in the service of having fun with friends in an imagined world for a little while. Could we have imported online all of these tools for building alternate realities without getting sucked into the game?
Several weeks have gone by since you picked your rabbit hole. You have done the research, found a newsletter dedicated to unraveling the story, subscribed to a terrific outlet or podcast, and have learned to recognize widespread falsehoods on the subject. If your uncle happens to mention the subject next Thanksgiving, there is so much you could tell him that he wasn’t aware of.
You check your feed and see that a prominent influencer has posted something that seems revealingly dishonest about your subject of choice. You have, at the tip of your fingers, the hottest and funniest take you have ever taken.
Digital discourse creates a game-like structure in our perception of reality. For everything that happens, every fact we gather, every interpretation of it we provide, we have an ongoing ledger of the “points” we could garner by posting about it online.
Sometimes, something will happen in real life that provides such an outstanding move in the game that it will instantly go viral. Conversely, we tend not to talk about things that are important but do not garner many “points.” So, for instance, there has been far less frothy discourse on Twitter and in the New York Times about the restoration of the multi-billion-dollar state and local tax deduction — conservatives give it only a few points for liberal hypocrisy, and for liberals it’s a dead-end — than about Alexandria Ocasio-Cortez’s “Tax the Rich” dress — lots of points in many different worlds.
Alternate reality games dictate what is and is not important in the unending deluge of information — what gets points and what doesn’t. What falls outside of or challenges the story of a given game is not so much disputed as ignored, and whatever fits neatly within it is highlighted. Wanting to understand the facts in perspective cannot alone explain the level of attention paid to vaccine complications, maskless people on planes, drag queen story hours, or school book bans by neofascist state legislatures (have I made everyone mad?). ARGs are not about establishing the facts within consensus reality. They are about finding the most compelling model of reality for a given group. If your ads, social media feeds, Amazon search results, and Netflix recommendations are targeted to you, on the basis of how you fit within a social group exhibiting similar preferences, why not your model of reality?
Perhaps this helps to explain why fact-checking seems so pitiably unequal to our moment. Yes, unlike a genuine game, QAnon followers assert claims about the real world, and so they could, in theory, be verified and falsified. It isn’t all confirmation bias — surprise is still possible: The Pizzagate believer who in 2016 brought a rifle to a D.C. pizza place to rescue child sex slaves from a ring believed to involve Hillary Clinton was genuinely shocked that the building didn’t have a basement. But ARGs can keep going because there are a myriad of possible solutions to puzzles in the game world. Debunking only ever eliminates one small set of narratives, while keeping the master narrative, or the idea of it, intact. For QAnon, or contemporary witchcraft, or #TheResistance, or Infowars, or the idea that all elements of American life are structured by white supremacy, one deleted narrative barely puts a dent in what people are drawn to: the underlying world picture, the big story.
Months have gone by since you went down the rabbit hole.
You are now an expert. You have alienated a few old friends … but made some great new ones, who get you better anyway.
Now consider the following statement:
The more I learn, the more astonished I am that everybody else isn’t taking this story as seriously as I am. My eyes keep opening while other people are going blind.
To play an alternate reality game is to be drawn into a collaborative project of explaining the world. It is to lose, even fleetingly, one’s commitment to what is most true in the service of what is most compelling, what most advances a narrative one deeply believes. It allows players to neatly slot vast reams of information into intelligible characters and plots, like “Everything that has gone wrong is the product of evil actors or systems, but there are powerful heroes coming to the rescue, and they need your help.” Unlike a board game, this kind of world-building has no natural boundary. Players can become entranced and awe-struck at the sheer scale of information available to them, and seek to assimilate it into building the grandest narrative possible. They try to generate a story in which all of the facts they have piled up make sense.
So what if an alternate reality game really did keep on going, if it had no end point? It would amount to a simulation of the world. All aspects of “reality” that fit into the simulation, including some produced artificially by players for fun and profit, would be incorporated. If the game had no boundary, at some point you could think that the world it is building simply is the world. In one early ARG, after the final puzzle had been solved, some participants winkingly suggested they next “solve” 9/11.
ARG game masters have described one of the pathologies of players as apophenia, or seeing connections that aren’t “really there” — that the designers didn’t intend — and therefore pursuing red herrings. In one game, in which players had to look for clues in a basement, some scraps of wood accidentally formed the shape of an arrow pointing to a wall. Players believed it was a clue and decided they needed to tear down the wall to find the next clue. (The game master intervened just in time.) But the difference between true and false interpretations exists only if the puzzle has one right answer, or one central authority — like J. K. Rowling intervening in fan debates about which Harry Potter characters are gay. The puzzle that today’s media consumers are trying to solve is the world, and interpretations are more or less up for grabs as long as they fit the story.
In a world in which we all play alternate reality games, we each pile up superabundant facts, theories, and interpretations that support the main narratives, and our allegiances gradually solidify as we consume and produce the game material. It’s not just interpretations of data that wildly diverge between different games, but also players’ sense of what is realistic or plausible — for example, their perceptions of the rates of homicides committed by police, or by illegal immigrants. This means that, in any crisis situation, the most narrative-enhancing reports will spread widest and fastest, regardless of whether they are overturned by later reporting. As L. M. Sacasas noted about the media experience of January 6, “a consensus narrative will almost certainly not emerge.”
The cynical reader might interject that the bygone era of mass media was not a golden age of truth, but was subject to its own overarching narratives and its own biased reporting. But what matters here is that mass media, rooted in an advertising business model and in broadcast technologies, created the incentives and capability for only a small number, perhaps even just one, of these narratives to emerge at one time. Both journalists and spin doctors attempted to massage or manipulate the narrative here or there, but eventually mass media converged on whatever the narrative was. In an age of alternate realities, narratives do not converge.
As the media ecosystem produces alternate realities, it also undermines what remains of consensus reality by portraying it as just one problematic but boring option among many. The process of arriving at this contrary view of the consensus — a process sometimes called “redpilling,” after The Matrix — goes something like this: A real-world event occurs that seems important to you, so you pay attention. With primary sources at your fingertips, or reported by those you trust online, you develop a narrative about the facts and meaning of the event. But the consensus media narrative is directly opposed to the one you’ve developed. The more you investigate, the more cynical you become about the consensus narrative. Suddenly, the mendacity of the whole “mainstream” media enterprise is laid bare before your anger. You will never really trust consensus reality again.
Opportunities for such redpill moments are growing in frequency: the 2016 presidential election, the George Floyd protests, masking and lockdowns, the crime wave in American cities. There were always chinks in consensus reality — think of the newsletters of the radical right or the zines of the leftist counterculture — but finding consensus-destroying information was costly. The process was unable to produce the real-time whiplash of today’s redpill moments. The speed at which events like these are piling up suggests that the change is structural, that it is the media ecosystem itself that is fundamentally transforming.
For our final game, please consider a troubling episode of dreampolitik from very recent American history.
Driven by devastation over the outcome of the presidential election, brought together by algorithmic recommendations on social media feeds, fueled by information overload, loosely organized by networks of influencers, egged on by massive ratings and follower counts, and strengthened in the loyalty that comes with telling an audience what it wants to hear, committed media game players created an alternate reality in which the good guys were working behind the scenes to bring down the bad guys. The story of secret activity that would end the hated presidency at any moment became detached from the actual government investigations underway, from verifiable facts, from discernible reality.
Is the paragraph above a description of …
Pick one.
My argument here is not that we are all the way into Wonderland, or even close to it yet. But that qualification should be as worrying as it is reassuring.
The change I am outlining is in most parts of the media world still fairly subtle — the addition of a new valence in how we see actors interpreting information, sharing content, and choosing what to emphasize. The real world still exerts hard pressure on the narratives people are willing to accept, and the realm of pure fantasy remains that of a small fringe. Yet while game-like media habits are easiest to see and most pronounced in Q-world, we can already see some of the same activities, engrossments, and intuitions that are involved in playing an alternate reality game creeping into the broader media ecosystem too — even in sectors that pride themselves on providing the sane alternative, the lone voice of reality. The point here is not to draw a moral equivalence, or to say that all these actors have lost their grip to the same degree, but rather to suggest a troubling family resemblance. The underlying structure of the reality-gamesmanship we find in, say, Infowars has its counterpart in, say, Trump-era CNN: incentives and rewards, heroes and villains, plotlines, reveals, satisfying narrative arcs.
To be a consumer of digital media is to find yourself increasingly “trapped in an audience,” as Charlie Warzel puts it, playing one alternate reality game or another. Alternate reality games take advantage of ordinary human sociality and our inherent need to make sense of the world. All it takes for the media environment to begin functioning like everyone is playing alternate reality games is:
Internet brain worms thrive on these ingredients. As long as spending more time consuming media — whether Facebook, MSNBC, talk radio, or whatever — increases the strength of one’s exposure, the worms will find their way. Reality as we understand it is a phenomenon of social structures, language, and shared processes for engaging with the world. Digital media is remaking all of these in such a way that media consumption more and more resembles the act of playing an alternate reality game.
The recent rise of subscription newsletters on the platform Substack has provided a powerful if depressing natural experiment in this phenomenon. Freddie deBoer and Charlie Warzel, both widely read commentators, have written about tinkering with their own Substack content, finding that calibrating posts to engage with Twitter controversies of the day led to exploding levels of clicks and new subscriptions, while sober, calm content was relatively ignored. Writers who get their income directly from subscriptions have every incentive to provide red meat day after day for some particular viewpoint. Sensationalism is of course as old as the news itself, but what targeted media like newsletters provide is the incentive to be sensationalistic for niche audiences. There is a reward for spinning alternate realities.
When writing for niche audiences, more status accrues to sharing narrative-enhancing facts and interpretations than to sharing what most of us can agree is reality. Those who quixotically hold on to the TV-era norms of balance and fact-checking won’t find themselves attacked so much as bypassed. By a process of natural selection, attention and influence increasingly go to those who learn to “speed-run through the language game,” to borrow from Adam Elkus, laying out juicy narratives according to the incentives of the media ecosystem without consideration of real-world veracity.
Business analytics will continue to drive this divergence. To illustrate the pervasiveness of this process, consider the logic by which The Learning Channel shifted from boat safety shows to Toddlers & Tiaras, and the History Channel from fusty documentaries to wall-to-wall coverage of charismatic Las Vegas pawn shop owners and ancient aliens theories. Content producers have an acute sense of which material gets the most views, the longest engagement, and the highest likelihood of conversion into subscriptions. At every step, every actor has the incentive to make the media franchise more of what it is becoming.
It’s an alien life form.
- David Bowie about the Internet, 1999
It is tempting to believe that, sure, other people are headed into Wonderland, but not me. I can see what’s happening. What if, say, you are not online, or don’t even pay much attention to the news? Even if your picture of the world is determined mainly by conversations with friends and family, you will find yourself being drawn into an alternate reality game, based on the ARGs they are playing. These games have “network scale” — they are more fun and powerful the more people you know are involved. This is also why it is becoming more and more difficult, and unlikely, for people playing different games to even talk to each other. Indeed, a common conceit of some media games is that “nobody is talking about this.” We are losing a shared language. It is not that we arrive at different answers about the same questions, but that our stories about the world have different characters and plots.
It is increasingly undeniable, looking at revealed preferences, that people can come to value their digital communities, relationships, and realities more than those of “meatspace,” as the extremely online call our enfleshed world. “For where your treasure is, there your heart will be also.” Every year, consumers spend billions of dollars on skins, costumes, and other “materials” in video games. Digital “property” like cryptocurrencies and non-fungible tokens has exploded. People attend events, show up at rallies, and even take vacations in order to post about them online. Many users on Reddit last year spent thousands of dollars on shares in the seemingly failing video game retailer GameStop, some declaring that they were prepared to lose the money, to send a message and garner status and make great “loss porn.” Recently, a player annoyed at the way the U.K.’s Challenger 2 tank was modeled in a video game posted classified documents to a game forum to make his point.
More than money, some participants in alternate reality games are willing to risk their lives and freedom. Over the past few years, Americans deeply immersed in their online versions of reality, driven by the desire to either influence them or create content, have: broken into a military facility, murdered a mob boss, burned down businesses, exploded a suicide car bomb, and stormed the Capitol.
Those who have studied the past should not be surprised. The most contested subjects in human history have arguably not been land or fortunes, but symbols, ideas, beliefs, and possibilities. As much blood has been spilled over products of the mind as of the body. The growing dominance of the Internet metaverse over “the real world” is just the next step in the story of man the myth-making animal.
You do not have to surrender your commitment to facts to participate in an alternate reality. You just have to engage with one, in any way. If you are a user of digital systems, if you allow them to provide you recommendations, if you train them on your preferences, if you respond in any way to the likes, downvotes, re-shares, and comment features they provide, or even if you are only a casual user of these systems but have friends and family and people you follow who are more deeply immersed in them, you are being formatted by them.
You will be assimilated. ♠
Jon Stewart has a dream where he walks out onto the brightly lit set of a new TV show. He has worked for years to build this show. It’s the answer to everything wrong with the news media.
For decades, Americans were fed a news diet of mass-produced garbage. O. J. Simpson, Monica Lewinsky, endless coverage of the Laci Peterson disappearance … hour after hour of filler. Talking points and “spin rooms” and canned zingers. Presidential aspirants doing eighth-grade debate theater. It was empty both-sides centrism. It didn’t speak to what mattered. It staged fake confrontations with powerful people to protect them from real accountability.
On this new show, yes, figures from across the political spectrum come to argue. But now it’s only real disagreement about the issues that matter to real Americans. No more treating politics like a staged wrestling match, only authentic single-warrior combat.
And the show does real reporting too, hard-hitting exposés on the issues other shows ignore. Reports about the billionaires who lined their pockets on the opioid crisis. Reports about warmongering elites lying so they can send working-class kids to die across the world. Reports about ideological indoctrination in public schools.
Gone are the manufactured news cycles on gaffes and the horse race. This show doesn’t let politicians duck behind talking points, but makes them say where they really stand on the issues. Gone is the view from nowhere, which was just a cover for manipulative slant. The host tells you straight what he really thinks and why it matters.
The old world of journalism is finally dying and this is the new one. Corporate shills didn’t think it could make money — and they didn’t want it to. They didn’t believe Americans would watch this. They didn’t believe in Americans at all.
But people are watching, by the millions. It’s raking in money hand over fist. Jon Stewart was right and the suits were wrong.
He saunters onto the set. He is ready to take his seat.
Only then, like Scrooge in the graveyard, does he see the name on the host’s chair. It isn’t him. It isn’t Stephen Colbert. It isn’t even Brian Williams. It’s a name from his past, a name that sends a chill down his spine and turns him pale. In his mind he hears a high-pitched whine and smells the sickly whiff of fresh bowtie.
How did Jon Stewart’s dream become his nightmare?
The problem was that he misunderstood what made the monolithic mass media world a financial success. He was convinced that you could keep all the business structures basically the same, and just replace the media’s phony reality with an authentic one. There would still be one huge audience, but now instead of being forced to crowd around a trough to guzzle slop, they would join together as one to break bread.
And nobody would have to worry about the money. The meal would be so nourishing, the conversation so lively, the feast so grand, that that part would just work itself out.
Stewart in his heyday was a man before his time. He wasn’t just a prophet of the new world to come; he was its chief architect. He would pioneer everything that made it work.
And he was dead wrong, too. In the world he was building, there would be no grand feast. As he tore down the pillars of the phony old consensus reality, he was laying the foundation for authentically fanatic alternate realities.
In our bizarro world, Jon Stewart’s fantasyland is real, and its king is none other than Tucker Carlson.
Even for those of us who lived through it, it is difficult to remember or explain what postwar America’s mega-saturated media monoculture was like, and how dramatic the shift away from it has been.
In the Nineties, the American mind was drenched in carefully constructed corporate messaging conveyed by a never-ending torrent of ephemeral media — countless TV channels, newspapers, magazines, and radio shows. Catchphrases, brand logos, product placement, and celebrity gossip were the psychic detritus. We were all swimming in it, and it was about nothing, nothing except itself.
Jon Stewart was not an activist when he took over The Daily Show on Comedy Central, a trifling basic cable channel, in 1999. But as a comedian he had a feel for the absurd, and nothing had grown more absurd in his mind than TV journalism, suffused with spin, fake debates, soft interviews, and celebrity politicians.
If you weren’t tuned in to the zeitgeist when the show mattered — roughly spanning the 2000 election to the Global Financial Crisis — here’s a refresher.
Stewart’s Daily Show was structured like a standard half-hour news show mixed with a talk show. It opened with an anchor at a news desk offering a quick rundown of the day’s top headlines. He then handed the show off to correspondents, who offered analysis from the studio, or produced on-location reported story segments in the style of weekly newsmagazine shows like 60 Minutes. Finally, the anchor interviewed a single subject at the anchor’s desk, in the style of Johnny Carson or Jimmy Kimmel.
The twist: It was fake. The rundown was actually a series of jokes about the headlines. The correspondents interviewed real people about real stories, but the setup was a gag, the correspondents acting in character to see what reactions they could get. And the whole thing was broadcast before a studio audience, who giggled and hooted and hollered like they were on Springer.
If you watched a field segment in the Stewart era and thought the correspondents were mocking real people, you were missing the point. They were making fun of themselves, their own characters. Each one was a send-up of a particular TV-journo type.
Rob Corddry, feigning ludicrous outrage at minor annoyances, was spoofing John Stossel’s blowhard “Give Me a Break” segment from 20/20. Stephen Colbert, the ultimate self-serious straight man, was aping Stone Phillips and Geraldo Rivera. “He’s got this great sense of mission,” an out-of-character Colbert said of Geraldo. “He just thinks he’s gonna change the world with this report.”
The brand Stewart had inherited was dedicated to little more than wacky jokes and pop-culture infotainment. Stewart would turn it into a tightly written, ironic ongoing commentary on politics and journalism. It became less about the news and more about news itself. It was about the pretensions and foolishness of the doofuses who said they were doing the real thing.
And the clips. The clips!
The feature that really made The Daily Show famous was its masterful use of archival video clips to reveal the hypocrisy of the chattering classes. Stewart would set his sights on some party shill or professional talking head being condescending, self-important, dishing out blame, kissing whatever ring he’d been paid to kiss. And then the show would play a clip of the same talking head’s appearance on a C-SPAN 3 four-in-the-morning call-in show from ten years ago, back when he’d been paid to kiss another ring, saying the exact opposite thing.
There was a clip, there was always a clip. And our righteous host would send these hacks packing.
Through all this, certain public figures would be transformed into storylines with narratives and characters, with inside jokes and recurring bits. The media’s storytellers became the subjects of a theater of the absurd. It got so that when certain figures would show up in a segment, you knew you were about to witness them receive their just comeuppance, a great spectacle of spilled archival blood. The audience would titter in excited anticipation.
It was a delight to watch.
And it was a hit. In a glowing 2003 interview, Bill Moyers called Stewart “a man many consider to be the pre-eminent political analyst of our time,” a status he would enjoy for a decade.
But there was always a tension in the enterprise, a risk that in taking down these bloviating figures, his own head would grow too big. And the risk was made worse because, strangely, Stewart never understood the source of his success. At least, he acted like he didn’t.
At the heart of his crusade, as he saw it, was a fight with media’s corporate overlords over whether news had to be dumb to make a buck. Stewart had no desire to just make another TV show. At a time when most political comedy was aimed at personal quirks, gaffes, and scandals, he maintained, like his hero George Carlin, a high view of comedy as an art form for social commentary.
But, as he retells it, the suits believed that to be profitable, a late-night show had to focus on pop culture. Earlier this year, talking about how he came to The Daily Show’s helm, Stewart recounted what he said to them: “Let’s make a deal. Let me do the thing that I believe in. And if it sucks and it doesn’t sell you enough beer, you can fire me.”
With a crack writing team, a distinctive vision, and a stable of generational comic talent, The Daily Show sold beer and then some. By the end of his run he was personally earning $25 million a year, making him not just the highest-paid host on late-night television but possibly the highest-paid in the entire news business — higher than Letterman, than Matt Lauer, than Brian Williams. “We developed the thing that we believed in and the audience showed up.”
If you build it, they will come. The corporate idea that Americans wanted canned news instead of viewpoint journalism and hard-hitting interviews of politicians was a lazy excuse masquerading as a market analysis.
Call it Stewart’s Content Theory: The real reason conventional news sucked was, well, because it sucked. It was bad because nobody had tried to make it not bad. Maybe producers didn’t have the guts, maybe journalists were addicted to access, maybe it was just the inertia of the whole system. Maybe they needed a prophet to help them see the light. Whatever the case, the answer was simple: Instead of choosing to be phony and bad, they should choose to be real and good.
Nothing was stopping reporters from flipping this switch. If you had an authentic viewpoint that took the audience seriously, presented with boldness and creativity, you could both entertain and inform, and find enough advertisers to pay for it all. After all, The Daily Show did.
So if it was that easy, why wasn’t everyone else doing it too?
There was an alternative theory for why news was so terrible: The structure of the news business itself dictated what journalism could be.
The growing shallowness of American journalism had a surprising source: the fairness doctrine. The 1949 regulation is remembered today as a “both-sides” mandate, requiring that TV and radio broadcasters who gave air time to one side of a public controversy had to give equal air time to the other. But it had another, largely forgotten component: As a condition of their license, it required that broadcasters air news coverage in the first place. For the first several decades of television, corporations thus viewed their news divisions as a public service, a necessary cost of maintaining their overall brand, and a checkbox to ensure their legal right to operate.
As early as 1961, American historian Daniel Boorstin had raised the alarm about how the mandate to churn out news was warping our media diet. He coined the term “pseudo-event” to describe things that happen simply in order to be reported on. For Boorstin, the driver of pseudo-events was a reversal from gathering the news, as events dictated, to making the news, as a standardized product on a schedule.
The larger your news-making enterprise, the more events you needed to have to fill airtime. When you ran out of events of true public note, you turned to pseudo-events: interviews, press conferences, and other PR exercises, plus interpretations, analysis, and opinion about the same. Thus the mass-produced phoniness Stewart would lampoon.
Paradoxically, the end of the mandate to air news brought not less coverage but more.
In 1987, the Federal Communications Commission abolished the fairness doctrine. But at the same time, cable was rising and content needed to be produced on an even greater scale. With greater competition among channels, and no more law that rationalized tolerating losses on news, the industry insisted that news divisions turn a profit like any other.
But producing the news had huge fixed costs. You needed studios, journalists, and correspondents around the globe, regardless of how many hours of coverage you aired. And so the simple solution to generating profits was to spread those costs out, producing more news across more channels within the same corporations.
The strategy worked. After the repeal of the fairness doctrine, the production of TV news exploded, from an hour or so a day on each of the big three networks to continuous, 24/7 coverage across multiple networks. NBC News had been losing $100 million a year when GE bought the network in 1986. By 1998, the division earned $200 million in profit, drawing revenue from its ratings-dominating broadcast programs Today, Dateline, and NBC Nightly News, its cable channels MSNBC and CNBC, and an online news venture with Microsoft.
Nineteen ninety-nine was what we might call the Year of Peak News, the year this media culture and the industry driving it were at their zenith. It was the peak of the Lewinsky plotline in the form of the Clinton impeachment trial, and the peak of the self-fueling pyre of journalistic importance surrounding it — the more that TV covered it, the more important it became. It was the year that the respected journalists Tom Rosenstiel and Bill Kovach, in their book Warp Speed, warned that competition for attention and ad revenue had created an alternative media reality entirely separate from what really mattered for the country. And it was close to the moment when more Americans were working in media than ever before or since — industry employment peaked at 1.6 million in July 2000 and has never returned to that level.
Nineteen ninety-nine was also the zenith of a powerful counter-cultural backlash. The problem wasn’t just the news. It was how the entire consumer culture powered by mass advertising was rotting our souls and immersing us in unreality. Nineteen ninety-nine was the year American Beauty won Best Picture and The Matrix invited us to take the red pill. It was the year that Fight Club warned, “Advertising has us chasing cars and clothes, working jobs we hate so we can buy shit we don’t need…. We have no Great War. No Great Depression. Our great war is a spiritual war.” Call it the Year of Peak Fight-the-System.
Would reform be enough, or did there need to be revolution? In 1999, an influential group of activists argued that nothing could change until mass advertising’s grip on the news industry was broken.
That year, Kalle Lasn published his landmark book Culture Jam. Lasn, a founder of Adbusters magazine and a former market researcher, had come to see the stranglehold of corporate media as the meta-problem making progress impossible for left-wing movements. In the 1980s, he had begun trying to work within the system, attempting to buy airtime for short environmentalist and anti-consumerist commercials, only to be almost universally turned down by the networks.
Station owners’ reluctance made sense. Newspapers, television, and radio all ran on advertising. Advertising existed to spread corporate brand messages, and its value was based on how much consumption it drove. As Lasn had shown, you couldn’t even buy airtime for a fair hearing of ideas that ran counter to the interests of advertising. Authenticity was impossible within the system. American culture had become a corporate product, sponsored by advertising-based mass media.
Lasn was joined by Naomi Klein, whose book No Logo was also published in 1999 — days after the “Battle in Seattle,” a protest against the World Trade Organization, began in earnest. All blamed the ravages of free trade and global finance on the corporate takeover of the public square. They proposed a set of “culture jamming” tactics to push back — protests and petitions, counter-marketing, brand hijacking, “subvertisements.”
Call this Lasn’s Structure Theory: The reason news sucked was that the economics of the news business required it to suck. The suits were right after all.
It’s hard to remember how pervasive the structural critique was in the Nineties, and to appreciate how thoroughly it has vanished from public life. Pretty soon you stopped hearing about how advertising and brands and consumerism were eroding civic life. Lasn’s Structure Theory faded away.
Perhaps that had something to do with that other notable event from The Year of Peak News: Jon Stewart’s ascension to the Daily Show throne.
On the surface, Stewart, with his counter-establishment, anti-corporate message, sounded a lot like these activists. But Stewart’s Content Theory was that you could stage a revolution against the system from within.
The showdown between legacy media and Jon Stewart’s real fake news insurgency reached a head in October 2004, when he sought out a duel with a show that stood for everything he reviled about political journalism.
The show was CNN’s Crossfire. It featured one liberal host and one conservative host who would debate the issues of the day. Usually they were joined by one liberal guest and one conservative guest who did more of the same. These talking heads recited their talking points, all nice and neat. Afterward they probably all headed out to the same cocktail party.
From the start of the segment, you could tell that the hosts didn’t stand a chance. They were dutifully antagonistic with Stewart, but they were tired old welterweight champs who didn’t see the new generation of fighter standing before them in the ring.
Stewart was as funny as ever, but this time without the ironic grin. His face was stony. He was mad.
“We need help from the media, and they’re hurting us,” Stewart implored the hosts. “It’s not so much that [this show] is bad, as it’s hurting America.”
“Let me get this straight,” the liberal host replied. “If the indictment is … that Crossfire reduces everything … to left, right, black, white. Well, it’s because, see, we’re a debate show.” “We have each side on, as best we can get them, and have them fight it out.”
Stewart would have none of it. “After the debates, where do you guys head to right afterwards?…. Spin alley.” He was talking about how, after formal debates, reporters would interview campaign flaks whose job was to argue why their guy had won, regardless of what had actually happened. “Now, don’t you think that, for people watching at home, that’s kind of a drag, that you’re literally walking to a place called ‘deception lane’?”
It wasn’t real debate — it was fake. “What you do is not honest, what you do is partisan hackery.” Instead of helping the people, “you’re helping the politicians and the corporations…. you are part of their strategies.”
In the countless retrospectives that have been written on the Crossfire showdown, which became one of the defining media moments of the 2000s, much has been said about Stewart’s hypocrisy. One of the hosts pointed it out: “You had John Kerry on your show and you sniff his throne and you’re accusing us of partisan hackery?” Stewart ducked this lamely: “You’re on CNN. The show that leads into me is puppets making crank phone calls. What is wrong with you?”
But what matters isn’t that Stewart was a hypocrite. It’s how masterful The Daily Show was at leveraging this double standard.
In a monologue during the U.S. invasion of Iraq, Stewart joked that “our show obviously is at a disadvantage compared to the many other news sources that we are competing with…. For one thing, we are fake.” But of course the subtext of The Daily Show was that all TV news was fake news, and everyone else was just lying about it. Being honestly fake wasn’t a liability. It was a huge asset.
The Daily Show really was news. It covered the basic facts of the stories of the day. Its viewers were about as well-informed as those of broadcast or cable TV news. And surveys showed its newscasts were as trusted as many mainstream media sources.
And because it made no pretense of “fairness” or “objectivity,” it had an enormous advantage in competing for the eyeballs and allegiances of its young audience. Because its only explicit loyalty was to the laughs, the show could ignore “the news cycle” and focus on the stories that hit the right notes for its audience. Even as Jon Stewart fought a world of empty spin, he pioneered a model of television news where you didn’t need to manage the reporting, the sources, or the production of compelling televisual imagery. By wresting control of the context, you could bend it to your will and tell the story you wanted to tell.
The genius of The Daily Show wasn’t that it was great in spite of everyone else sucking; it was great because everyone else sucked. As long as there was an endless supply of garbage on hand — and boy, was there — you could do a show that was just about the garbage. And, perversely, you could be more successful talking about the garbage than making it.
Stewart was a critic of the system, and also its greatest dependent. Did he really not get this? For a moment in the Crossfire showdown, it seemed like he did: “The absurdity of the system provides us the most material…. the theater of it all.”
Three months after the matchup, Crossfire was canned. CNN’s new president, Jonathan Klein, said that he wanted to move the network away from “head-butting debate shows” and toward “roll-up-your-sleeves storytelling,” adding, “I agree wholeheartedly with Jon Stewart’s overall premise.”
But was the new model for news really just going to be a more earnest, informative version of the old?
You didn’t even have to listen to the Crossfire segment to know that it wasn’t just a drubbing but the birth of a new world. You could see it on the hosts’ faces. The establishment had lost the plot.
The liberal host, Paul Begala, kept trying to change the subject. On his face you could see the dumbstruck look of the compliant citizen murdered on the roadside by Anton Chigurh in No Country for Old Men. After Crossfire was canceled he never hosted his own television show again.
The old world was dying. You could ignore this and double down, or you could learn how to stand outside legacy media — and wield this to your advantage.
The conservative host tried valiantly, jousting as if untouched. But as the segment wore on, his voice kept going higher; he sounded desperate. “I think you’re a good comedian,” he told Stewart. “I think your lectures are boring.” But by the end of the segment, you could see the wheels turning in his head.
His name was Tucker Carlson.
The Daily Show was a pioneer of the above-it-all style. But its weapon was not pointed parody alone.
When the Global Financial Crisis struck, Stewart was still at the height of his influence. Once again he saw a case of the media’s interests running opposite to the people’s. He was ready for another showdown.
In February 2009, CNBC pundit Rick Santelli had delivered an angry rant against a government bailout for “losers’ mortgages.” He floated the idea of holding a “Chicago Tea Party,” where “derivative securities” would be dumped into Lake Michigan. The diatribe would soon ignite the populist, right-wing Tea Party movement.
Stewart was righteous with indignation. Two weeks later, he aired clip after damning clip of CNBC pundits offering horribly wrong investment advice leading up to the financial crisis. He topped it off with a monologue indicting the chumminess, corruption, and self-dealing of finance and financial journalism.
One of those pundits, Jim Cramer, the host of CNBC’s Mad Money and the most famous financial pundit on TV, would defend his record on his show and write an op-ed calling for “a real debate.” Stewart then broadcast a patient and lethal blow-by-blow of Cramer’s advice to buy what would turn out to be rotten stock in Bear Stearns.
In March, within days of the stock market bottoming out, Cramer showed up for an interview on The Daily Show. In the segment — billed in news outlets as “Stewart vs. Cramer,” like a boxing match — Cramer got the treatment the audience knew was coming.
But there was a new twist. Usually, The Daily Show would air archival clips framed by Stewart’s solo narration. This time they were being deployed in real time against a subject who was sitting right there in the guest chair. Stewart would say something about Cramer’s chummy relationship to the financial system. Cramer would dodge and weave and try to recontextualize. And then Stewart would call out, “Roll 210,” and out would come some obscure footage of Cramer himself from his hedge fund days, peddling the very kinds of shenanigans that had led to the crisis. Over and over. It was devastating.
For the first time, facts had caught up to spin. Rather than leaving it at great gotcha TV, Stewart used the clips and his back-and-forth with Cramer to fuel a compelling cri de coeur about corruption on Wall Street.
Just like the Crossfire appearance, it was a beautiful thing to behold. How were the Daily Show staff so damned good at this? The question bedeviled Stewart’s competitors — one profile noted that “how the show’s producers find the source video for these elaborate montages has been a bit of a trade secret.”
What had created a culture of “just talking on TV without any accountability,” as one Daily Show writer put it, was not only the sheer volume and speed of the news. It was this fact, which will sound insane to anyone under the age of thirty: People on television reasonably assumed that no one would hear what they had said ever again.
As essayist Chuck Klosterman records in The Nineties: A Book, the key characteristic of twentieth-century media was its ephemerality. You experienced it in real time and internalized what was important and what it felt like. Then you moved on. “It was a decade of seeing absolutely everything before never seeing it again.”
People used to argue with their friends about the plot of a show or what the score had been in the ball game because, well, how were you going to check? Unless you had personally saved the newspaper or recorded it on your VCR, you would need to go to a literal archive and pull it up on microfilm.
TV news was even shakier, as networks often recorded over old tapes. Some of this footage only exists today because of the obsessive efforts of one Philadelphia woman who recorded news broadcasts on 140,000 VHS tapes over forty years.
And so, if you were a pundit or a commentator or a “spin doctor” PR flak, you could say whatever suited your needs at the moment, or even lie with impunity — as long as your lie did not become its own pseudo-event. Your lasting impact was whatever stuck in viewers’ heads and hearts. And if you changed your tune in the months or years afterwards, who would remember?
The Daily Show would remember.
The explosion of live broadcast and cable news had created a new, completely under-valued resource for whoever thought to harness it: catalog clips. Soon, new digital technology could preserve content in amber, allowing for its retrieval, repurposing, or referencing at any time.
Another signal event from 1999, the Year of Peak News: The first digital television recorder, TiVo, came onto the market. At a time when news networks sent out old footage by postal mail, the TiVo made it possible for Daily Show producers to record, catalog, and comb through hundreds of hours of footage a week. Their process became substantially more powerful in 2010 with the deployment of a custom, state-of-the-art system that both captured footage and converted its audio to text, allowing producers to search for just the right clips across the entire accumulated archive.
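For the technically curious, a minimal sketch of the kind of searchable clip archive described above might look like the following. This is not the show’s actual system — its details were, as noted, a trade secret — but an illustration under the assumption that transcripts already exist from some speech-to-text step; the names ClipSegment and ClipArchive, and the sample clips, are hypothetical.

```python
# Hypothetical sketch: an indexed archive of transcribed TV clips.
# Assumes an upstream speech-to-text step has already produced the
# transcripts; nothing here reflects any real show's tooling.
from collections import defaultdict
from dataclasses import dataclass, field
import re


@dataclass
class ClipSegment:
    source: str      # network and show, e.g. "Example Network Nightly"
    air_date: str    # date of the original broadcast
    timecode: str    # offset into the recorded footage
    transcript: str  # text produced by speech-to-text


@dataclass
class ClipArchive:
    segments: list = field(default_factory=list)
    index: dict = field(default_factory=lambda: defaultdict(set))

    def add(self, segment: ClipSegment) -> None:
        """Store a segment and index every word of its transcript."""
        seg_id = len(self.segments)
        self.segments.append(segment)
        for word in re.findall(r"[a-z']+", segment.transcript.lower()):
            self.index[word].add(seg_id)

    def search(self, query: str) -> list:
        """Return segments whose transcripts contain every query word."""
        words = re.findall(r"[a-z']+", query.lower())
        if not words:
            return []
        hits = set.intersection(*(self.index.get(w, set()) for w in words))
        return [self.segments[i] for i in sorted(hits)]


if __name__ == "__main__":
    archive = ClipArchive()
    archive.add(ClipSegment("Example Network", "2007-06-01", "00:12:45",
                            "This stock is a screaming buy, trust me."))
    archive.add(ClipSegment("Example Network", "2008-03-10", "00:03:10",
                            "The firm is fine, there is no trouble here."))
    for seg in archive.search("screaming buy"):
        print(seg.air_date, seg.timecode, "—", seg.transcript)
```

A production system would obviously add the speech-to-text step itself, fuzzy phrase matching, and persistent storage, but the core idea is the one the passage describes: convert everything said on air into searchable text, and keep it forever.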
The Daily Show always portrayed itself as the David to the news establishment’s Goliath. But in the showdown, Cramer pointed out the advantages the show had when it came to the production mechanics of filling air time. “We’ve got seventeen hours of live TV a day to do…. I’ve got an hour, I’ve got one writer, he’s my nephew…. You have eighteen guys.”
Stewart liked to claim that they were just jokers, but really the joke was on everyone else. The Daily Show had figured out how to produce a real, high-quality newscast after all — without having to do any of its own reporting. Under the “fair use” copyright exception for parody, the show could simply steal whatever content it needed from its competitors.
The evening news had to send its correspondents to exotic locales. Sure, The Daily Show would sometimes do the same. But it also had a recurring joke about using the studio green screen to do its “field” interviews.
For legacy media, you needed to always be producing the news. The Daily Show’s incentives were reversed: It could dine out on a viral clip for weeks, and there was an ever-expanding universe of recycled material to work with and a bevy of writers to put it to use.
It wasn’t just the ironic style of the show, then, that allowed it to turn real people into characters in ongoing narrative arcs. It was their remarkable use of technology to build an ever-growing database of content. When Stewart later said, “We were parasitic on the political-media economy, but we were not a part of it,” he was only right about the first part.
Against spin and vacuity in political journalism, Jon Stewart harnessed the past as a weapon. It was The Daily Show, more than any other factor, that began the disciplining of American political culture with perfect digital memory.
Even as Stewart was turning the content and production models of legacy news into a joke, its revenue model was about to be destroyed too. In the 2000s, two major innovations on the Internet tanked the economic value of offering homogenized content to a mass audience.
In the Year of Peak News, there was only one game in town when it came to advertising. Producers of content, and advertisers along with them, were competing for as many eyeballs as possible. They were effectively trying to reach everyone. A publication made money based on how large a piece it cut out of this single massive pie.
It is often said that what destroyed the legacy advertising model was the Internet, but that isn’t quite true. The Internet operated for many years without touching the mass advertising business. That giant would eventually be felled by Google, but initially the company refused on moral principle to make money from advertising. In a 1998 paper, the company’s founders wrote that search engines funded by ads “will be inherently biased towards the advertisers and away from the needs of the consumers.” Instead, Google sold licenses to other companies to use its search technology. But amid the pressures of the dot-com bust, the company would backtrack and pursue a new revenue stream based on targeted search ads, launching AdWords in October 2000.
During the aughts, the new personalized digital advertising business made mass advertising ever less valuable. For businesses, targeted ads were much more effective than ads aimed more or less at everyone, even with the existence of niche publications and demographic segmentation. If you were a sports memorabilia company, why target everyone who read Sports Illustrated when you could directly target people who searched online for old MLB tickets?
In the Year of Peak News, there was also only one game in town when it came to how consumers got their information. For half a century, if you wanted to know what was happening, you had to buy the paper or sit through eight minutes of ads during the nightly news. There was a seller’s market for information, and so producers could make money just by providing access. Even local newspapers thrived on this scarcity — decades of vital local journalism were funded in large part by readers who just wanted to know whether the Packers won last night.
Even in the early 2000s, the legacy media business model was still protected by this moat. Yes, the Internet existed, but the world’s information was mostly not online, and it was not well organized. Most of the content you saw on the early Web was not yet compiled automatically. In many cases, curation — adding and organizing content — was still done, quite literally, by hand. Even “cutting-edge” search services like Ask Jeeves, which advertised its amazing question-and-answer technology, had to employ legions of writers to help produce answers to queries. Because Web 1.0 offered roughly the same information abundance as before, legacy media maintained its monopoly on attention, and subscription and classified and advertising dollars kept flowing.
All this too would change as Google and others pioneered the technologies for automatically collecting, aggregating, and curating information online. A decade later, nobody was buying the daily paper to find out the Packers score. Now all you had to do to find out what was happening was open Google News, and you had your pick of headlines from a thousand different sources. Why would you pay for any of them now?
These were the two pillars of the new media world: Personalized digital ad tech was destroying the value of the mass audience. And automatic aggregation was making information superabundant, and so far less valuable.
Mass advertising soon declined while online advertising boomed. Newspaper ad revenue reached a historic peak of $49 billion in 2005 before plummeting to just $20 billion in the following decade. In 2010, online ad sales surpassed newspaper ad sales for the first time.
If the media world wanted to survive, it would have to figure out how to turn lemons into lemonade — how to make money from a narrow rather than a mass audience, and how to harness massive amounts of cheap information and create something valuable on top of it.
Our fearless hero was set to lead the way.
Would you be surprised to learn that Jon Stewart was a fan of Roger Ailes? Fox News, Ailes’s brainchild, is the news organization Stewart has most consistently complimented for its focus and skill. Over the years he has depicted Ailes as effective, “brilliant,” and evil. What made Ailes a visionary, Stewart thought, was his power to divine the most compelling narratives for his audience, regardless of what the mainstream media was focused on, and to then get the entire network on the same page.
Increasingly, Stewart wondered what it would look like to have a “Roger Ailes of veracity” — a network mind who was brilliant at producing high-caliber entertainment, but whose lodestar was not conservative politics but the most important practical issues confronting the country.
What Stewart had already achieved was to break open what media critic George W. S. Trow called “the context of no-context” — the way that you couldn’t understand why any story was covered on TV except that it was on TV.
The O. J. Simpson trial, the Monica Lewinsky scandal, the Laci Peterson case, Crossfire: None of them made sense as events of public importance for a great people. But they made perfect sense as great television.
If you watched American Beauty and Fight Club and The Matrix in the Year of Peak News, you probably felt that getting beyond this empty culture meant burning it down and replacing it with something brave, real, and true. But implicitly, this meant that it would have the same unified audience, the same mass audience, freed at last from slavery to soulless garbage.
The catch is that the mass and the garbage were one. The reason TV culture was so shallow was that it imposed over everything what Trow called the “grid of two hundred million,” that is, the number of Americans when he was writing in 1980. The business imperative was to grab as much of the television audience — singular — as possible. All content decisions flowed from this imperative.
Jon Stewart did not get this. He dreamed of a broad, hard-working, underserved middle of the country, hungry for the entertaining veracity he would produce. The idealized audience he often invoked was the silent majority of fundamentally decent Americans who were turned off by political extremism and partisanship for the sake of partisanship. In a 2002 interview he called this the “disenfranchised center” for “fairness, common sense, and moderation.”
This might be a fair description of the 1999 broadcast TV audience, but it was not a description of Stewart’s own viewers. Pew Research found that The Daily Show had one of the most liberal audiences of any show on TV, beaten only by Rachel Maddow’s, and no show’s viewers skewed more high-income or high-education.
Roger Ailes did get this. The Fox News business model was not actually aimed at conservatives, but at newly deregulated cable. Ad-funded broadcast TV had rewarded achieving the biggest audiences possible. But cable rewarded having loyal, consistent audiences who would clamor for access to their favorite channels, giving owners leverage to negotiate the highest subscriber fees from cable providers. In 2021 Fox’s cable division generated $3.9 billion in revenue from fees and only $1.3 billion in advertising.
This business model meant cultivating ties with a particular niche, providing them content they would find enthralling, and eventually building identities around media brands. When Fox News launched, American conservatives were the biggest distinctly underserved niche. But MSNBC and CNN would eventually follow in their footsteps, for the same reasons.
As in so many other things, Stewart was also a master innovator of what he claimed not to want: building a devoted niche audience.
When the mass audience produced by advertising melted down, the innovations in style and production that Stewart had pioneered were a ready-made answer for this crisis. They provided a template for how to be successful not with a mass audience but with a loyal fragment — by replacing the culture of mass media with meta-commentary on it, and the costly production of original news reporting with an efficient repurposing of others’ work.
But even this description undersells what made The Daily Show unique, and why it seemed to wax even as all other TV news waned. The real audience that sustained him assembled where all audiences assemble today: online.
Stewart understood that converting a loyal audience into a media-business success is not just about getting eyeballs in front of the set (the Friends model), or even getting people to pay you every month (Patreon and Substack). It’s about getting people to be personally loyal to you, to identify with your brand.
You could tell that Stewart understood this because of the way he used clips of his own show. In legacy television, you would never give away your content in a way that couldn’t be directly monetized, from which you weren’t getting a licensing buck or a bump in the Nielsen ratings. But The Daily Show just gave their stuff away. They knew that the Internet would help build loyalty — and eventually an audience — that would make Stewart powerful.
The Daily Show was the first TV show whose clips regularly went viral via forwarded emails, discussion forums, and shared BitTorrent links. Even as The Daily Show often had to send its interns to archiving services to pick up physical tapes of other shows, it was one of the first to host specific shareable clips on its website, alongside full episodes. It was the first whose producers grasped how free clips online could drive, rather than cannibalize, viewership numbers. Stewart’s appearance on Crossfire may have been the first piece of national political journalism to go viral online. And all this got underway before YouTube, whose first video was uploaded six months after the Crossfire showdown.
The audience of The Daily Show, and eventually its offshoot The Colbert Report, was thus significantly larger than what the TV ratings alone revealed. It was more personally loyal to Stewart and Colbert. It was an audience that shared clips, talked about them on Twitter and Reddit, and bought books. It was an audience that crowdsourced rides and couchsurf spots to show up in the hundreds of thousands for their satirical “Rally to Restore Sanity and/or Fear” in 2010. It was an audience that donated over a million dollars to a Stewart and Colbert–organized Super PAC, as a bit. These shows did not have a passive television audience, but one activated and organized by, with, and through the Internet. As Colbert said of the Rally: “They were there to play a game along with us.”
Contrast this with, say, David Letterman’s show, initially a rival to The Daily Show. Letterman had a cult following, and you could buy his merchandise and books. But the point of all that was to get you to watch the show. Everyone who bought Letterman’s Top Ten books watched the show.
But with The Daily Show, you could just watch clips when someone shared them, and not bother with the show. The books and rallies weren’t marketing gimmicks; they were a whole new ecosystem, of which the show was just one format. It’s hard to think of an earlier TV show that had such a loyal following of people who did not actually watch it on TV — Oprah had an empire, but her fans mostly watched her. Every show before Stewart’s you could understand as a TV show. But The Daily Show was an audience first and a TV show second.
You didn’t just watch The Daily Show; you subscribed to its worldview. Stewart hadn’t figured out how to build a revenue stream around subscriptions yet, but others soon would.
At first blush, the idea of audience-sponsored journalism sounds neither new nor worrying. Newspapers before the mid-nineteenth century were partisan outfits run on subscription revenue, not advertising. And surely news consumers have purer motives than faceless corporations do — especially if their support isn’t even commercial but philanthropic, like sponsoring a Patreon, public radio, or The Atlantic.
This intuition is wrong on two counts.
First, having to sell content to audiences — rather than selling eyeballs to advertisers — transforms the incentives reporters face. Advertisers like positive stories that put you in the mood to buy stuff, while readers and viewers respond more to fear- or anger-inducing stories. Other than on stories explicitly affecting their business, advertisers don’t really care what the news says, but readers like news that confirms what they already believe. Subscribers are more likely to keep paying if they feel emotionally engaged with the individual journalists they sponsor. And so on.
Second, there is a critical difference between the nineteenth-century model and audience-sponsored media today: You did still buy the newspaper in order to get the news. And while the paper might have offered a skewed interpretation of the facts at hand, it still needed to cover everything its rivals might, or else you might think you were not getting “all the news that’s fit to print.” Today, few people are paying mainly to get the news. On the Internet, facts about what is going on in the world are available in copious amounts and with enough variety to suit any taste.
Yes, “serious” journalists still report valuable stories and discover important facts. But where before this kind of public-interest journalism was subsidized by profitable fluff news, today it is heavily funded by grants. And this kind of reporting, while valuable, hardly gains the kind of mass attention that it once did: Consider how little political energy has come from the astounding investigative journalism on the opioid crisis.
So if you can no longer sell the news nor advertising, what is it that subscription-based journalism sells today? Worldviews, interpretations, and the facts that support them. Today, journalists sell compelling narratives that mold the chaotic torrent of events, Internet chatter, and information into readily understandable plotlines, characters, and scenes. They do this by directing scarce newsroom resources to a handful of overarching stories, which are built through standard newspaper reporting, essayistic long-form installments, videos, podcasts, op-eds and, yes, reporters’ tweets. Like Scheherazade, if they can keep subscribers coming back for more of the story, they will stay alive.
The problem is not only that serious journalism cannot survive in the contemporary media business model. It is that in an era of subscription-based narrative production and superabundant facts, what journalism gets produced and shared comes down to which kinds of stories are needed for different narratives. The trouble is not that misinformation is destroying facts but that, even if all of the fact-checking procedures are followed to the letter, facts only matter now as far as they’re useful to monetizing a worldview.
Stewart’s theory of what made Fox News so effective — that it was narrative infotainment for a loyal niche audience — took explicit form in the spin-off show he produced, The Colbert Report. Launching in 2005, it featured former Daily Show correspondent Stephen Colbert in character as a bombastic conservative pundit, on the model of Fox News personality Bill O’Reilly.
In the premiere episode, Colbert’s character proclaimed himself the emperor of “truthiness.” “We are divided between those who think with their head and those who know with their heart,” he explained. The point was to offer the gentle viewers smug assurance that they still lived in the world of facts while the sheep over at Fox believed whatever felt right.
To illustrate, Colbert used a clip of President George W. Bush defending the nomination of White House Counsel Harriet Miers to the Supreme Court: “I know her heart.” But the quote was actually taken outrageously out of context. The punchline hit big, not because of anything brilliant Colbert had said, but because he understood exactly what his liberal audience already believed about the nomination. In calling out the scourge of truthiness, Colbert was being truthy.
Then again, wasn’t that what Stewart had been doing all along? Wasn’t it even what he promised?
What Stewart insisted on night after night was imposing his own context on the news, a narrative and meaning he decided. He argued that good journalism needed to do this too, that there was no such thing as “objective” journalism because the most powerful decision a journalist could make was to decide what was important.
Wrapped in layers of irony and humor, Stewart had already been relying on truthiness for years, reflecting the incredulity and ire his audience felt about politics and media back to them. The dark secret of Stewart and Colbert’s pastiche of conservative infotainment was that it only worked because their liberal audience wanted the same thing, adjusted for taste. “Anyone can read the news to you. I promise to feel the news at you.”
What Stewart and Colbert didn’t realize was that they were prophets at the advent of what Andrey Mir calls “post-journalism.” The business model of mass-media advertising was dying and subscription-based journalism was rising in its place. The switch in business model would also mean the death of journalism’s old neutrality norms, and the need for new ones. Stewart and Colbert would help pioneer them.
While The Daily Show was helping to kill “spin” in the aughts, online political journalism was also forging a new coin of the realm: a consistent voice, a clear brand, authenticity, telling your audience where you were coming from, and a certain ironic detachment from any particular take. You could make fun of a journalist who posed as objective by exposing him as a partisan hack; it was a lot harder to generate comedy from a pundit doing exactly what you expected a pundit to do.
Out of the advertising cataclysm, two newspapers changed course in time — and then thrived. The New York Times and The Washington Post managed to substantially grow their revenue by pivoting to a subscription-based model, with everything that entails. In 2020, digital revenue overtook print revenue for the Times, completing its transformation into a self-described “digital-first, subscription-first company.”
Since the new business model does not especially depend on daily newspapers, the Times and Post are free to branch out and develop diverse ways of delivering content, including podcasts, documentaries, and magazines. The primary service provided is interpretation, laying out the puzzles, the narrative, and the clues to the “real” story. A 2017 internal report of the Times’s operations concluded that it spent too many resources on editing and not enough on “story sharpening.” In a 2020 job posting for a new Moscow correspondent, the paper did not require candidates to speak Russian, but did lay out at paragraph length their official storyline about “Vladimir Putin’s Russia.” The Times has doubled its Opinion staff since 2017 while cutting back on traditional shoe-leather desks like local reporting.
Even for the hard reporting that remains, what is really essential under the subscription model is narrative development. Because Stewart and Colbert were right. In the digital age, you really don’t need anyone to read the news to you. What you need is to understand how you should feel about it and what story it tells. For most readers, including many in journalism, the details will simply make no difference in their day-to-day lives. Presented with a massive overload of isolated facts, they will simply want to make sense of them. Helping them do that is the most valuable, and most revenue-generating, function of journalism today.
Finding the most powerful story to build out of carefully collected facts, building a compelling lattice of clips, reports, interviews, and takes that confirm your sense of everything that’s going wrong with the world: More than anyone else, the journalism world today is aping Jon Stewart, the man who figured out how to spin boring, ubiquitous, superabundant media straw into engaging, titillating, persuasive narrative gold.
In his dream, Jon Stewart flees the set of the show cast in his image of journalism: Tucker Carlson Tonight.
Bursting off the stage, he finds himself in a hallway of doors, each leading to a different set where authentic, post-spin, digitally engaged news analysis can be found. As he runs, he passes by The Rachel Maddow Show, The Joe Rogan Experience, Full Frontal with Samantha Bee, The Lead with Jake Tapper, The Mehdi Hasan Show, Infowars, The ReidOut with Joy-Ann Reid, Common Sense with Bari Weiss, and countless others.
Far, far down at the forgotten end of the hall, he stops before a door that reads The Problem with Jon Stewart and pushes through.
On a recent episode of The Problem with Jon Stewart, his new show on Apple TV+, Stewart devoted the whole show to the problems of the mainstream media. It was a rehash of all the old complaints: journalistic narcissism, the 24/7 news cycle, ratings-chasing, and now social media.
On the episode, a half-understanding Stewart listened as Bob Iger, who oversaw ABC News in one form or another for thirty-five years on his climb to being CEO of Disney, explained why the news can’t be fixed. “It’s almost impossible really, today … I don’t think you can create a subscription news service that would generate the kind of revenue you’d need to cover news right.”
But Stewart is still fighting the old fight. He blasts the media’s coverage of the Mueller investigation on the same grounds that critics had once blasted coverage of the Lewinsky affair: that the media had turned it into a TV show. The difference was that the Lewinsky affair was a show almost everyone in America watched, while only a fraction of the populace caught the Mueller show.
And that’s the catch — there isn’t just one big show anymore. And that’s because the market forces that Stewart mastered, the ones that let him take aim at greed, cynicism, and corruption, now let everyone else do the same. Jon Stewart became a Roger Ailes of veracity after all, but instead of everyone watching The Problem with Jon Stewart, hardly anyone is.
In 2003, he had told Bill Moyers that “news has never been objective.” Good reporting is faithful to the facts, but the journalist isn’t afraid to say what’s important — and to decide what’s worth reporting on. And that’s where we’ve arrived today: Nobody is afraid to say what’s important. You cannot sell news today without a point of view.
The tragedy is that this is not a world where that silent broad center of the country is getting the same straight dope. It’s one where everyone gets their own dope, tailor-made.
Today there is no central town Coliseum where everyone goes to watch the noble warriors compete. Instead, the landscape has been fractured, walls have gone up, an audience of half a million is now a feat.
And behind each of these walls, the ironies of the Stewart–Colbert project — the free distortion of reality to support the narrative that your foes have lost their grip, the pretense that you’re not doing politics but meta-politics, the delusions that come with believing you’re on a world-historical solo mission to save the American system — still bloom.
The tragedy of Jon Stewart was not that nobody listened to him but that everybody did.
As Stewart blasted away at the establishment in the 2000s, you would think its members would have hated him. But really they loved him. Bill Moyers treated him like a saint. Even poor Jim Cramer called Stewart his “idol,” just hours before his scheduled public evisceration at Stewart’s hand.
And that was because Stewart had the same high view of how journalism used to be that the old-timers did, the same hushed worship of Edward R. Murrow’s raised brow. Stewart spent his whole career railing against television, but he was actually the most nostalgic, backward-looking commentator of them all. You could have imagined a world where, when he finally made his promised serious turn, he just moved on to host a revamped Dateline or 60 Minutes.
That’s the funny thing about Stewart’s Daily Show. It had a reputation for flattering liberal pieties, and of course it did. But the show actually did not have much tangible politics — it mostly didn’t take stands on issues. Instead it was a critique of the form of political journalism that stayed serenely free of its messy content. As The New Republic’s Alex Shephard wrote of the Rally to Restore Sanity, it was “elaborate political theater that nominally appeals to better angels but really signals that liberals are smarter and gentler than conservatives and that, deep down, the rest of the country agrees with them.”
The irony was that in blowtorching TV journalism for having become just about TV, Stewart produced a decade of great TV journalism about … well, TV. The show allowed its audience to stand above the fray, to dodge substantive politics and instead identify with the conviction that they were the ones staying true to the spirit of America while the soulless media-political hacks sold out and the fanatics went off the rails.
Does that sound familiar? As Ross Douthat put it in 2010, he “proved that he can conjure the thrill of a culture war without the costs of combat, and the solidarity of identity politics without any actual politics.” But Douthat was actually describing a different event: Glenn Beck’s Restoring Honor rally, the one that Stewart and Colbert’s rally had aimed to spoof. Many of the ingredients of these dueling rallies — the stick-it-to-the-powers-that-be ethos, the insistence that there was a broad middle of the country who were on the side of the ralliers and against the establishment, even the furious disputes about crowd size — would be echoed a few years later in the rallies to Make America Great Again.
You can flatter your audience over and over every day by showing them proof of how insane the people they despise are becoming. You can tell them what they want to hear about how everyone else is just hearing what they want to hear. You can even build an entire new media business model on this.
But how long can you tell your audience they’re the only ones still living in reality before this idea just becomes an alternate reality of its own?
In 2020, Tucker Carlson Tonight surpassed Hannity to become the highest-rated prime-time cable news show, going on to break the record for highest-rated program in U.S. cable news history. More than any other pundit or comedian, he became the successor in stature and in style of Jon Stewart and Stephen Colbert, whose real lessons he studied better than anyone, and without pretense. Today, journalism is truthiness all the way down.
Carlson hosts hard-hitting interviews, field segments that underline the outrages and absurdities in American life as his audience sees it, and clip compilations that emphasize the show’s underlying themes. He is a master at sifting through masses of information to find the material that shows how hypocritical, foolish, and insane his adversaries are. Just like Stewart, Tucker has the receipts.
He makes no bones about where he’s coming from, routinely exercises editorial authority to highlight stories others have ignored, and takes as valid how his audience feels about the state of the world today, even as he often tries to adjust their understanding of the facts on the margins. His goal, and that of all his lesser imitators: to tell the story of America that weaves a compelling reality for his subscriber community.
The tragedy of it all is that this isn’t just a nightmare version of the world Jon Stewart dreamed of. It’s a world he built. In his quest to turn real news from the exception into the norm, he pioneered a business model that made it nearly impossible. It’s a model of content production and audience catering perfectly suited to monetize alternate realities delivered to fragmented audiences. It tells us what we want to hear and leaves us with the sense that “they” have departed for fantasy worlds while “we” have our heads on straight. Americans finally have what they didn’t before. The phony theatrics have been destroyed — and replaced not by an earnest new above-the-fray centrism but a more authentic fanaticism.
Jon Stewart pioneered “fake news” in the hope it would deliver us from the absurdities of the old media world. He succeeded beyond his wildest dreams. ♦
Facts, like telescopes and wigs for gentlemen, were a seventeenth-century invention.
- Alasdair MacIntyre
How hot is it outside today? And why did you think of a number as the answer, not something you felt?
A feeling is too subjective, too hard to communicate. But a number is easy to pass on. It seems to stand on its own, apart from any person’s experience. It’s a fact.
Of course, the heat of the day is not the only thing that has slipped from being thought of as an experience to being thought of as a number. When was the last time you reckoned the hour by the height of the sun in the sky? When was the last time you stuck your head out a window to judge the air’s damp? At some point in history, temperature, along with just about everything else, moved from a quality you observe to a quantity you measure. It’s the story of how facts came to be in the modern world.
This may sound odd. Facts are such a familiar part of our mental landscape today that it is difficult to grasp that to the premodern mind they were as alien as a filing cabinet. But the fact is a recent invention. Consider temperature again. For most of human history, temperature was understood as a quality of hotness or coldness inhering in an object — the word itself refers to their mixture. It was not at all obvious that hotness and coldness were the same kind of thing, measurable along a single scale.
The rise of facts was the triumph of a certain kind of shared empirical evidence over personal experience, deduction from preconceived ideas, and authoritative diktat. Facts stand on their own, free from the vicissitudes of anyone’s feelings, reasoning, or power.
The digital era marks a strange turn in this story. Today, temperature is most likely not a fact you read off of a mercury thermometer or an outdoor weather station. And it’s not a number you see in the newspaper that someone else read off a thermometer at the local airport. Instead, you pull it up on an app on your phone from the comfort of your sofa. It is a data point that a computer collects for you.
When theorists at the dawn of the computer age first imagined how you might use technology to automate the production, gathering, storage, and distribution of facts, they imagined a civilization reaching a new stage of human consciousness, a harmonious golden age of universally shared understanding and the rapid advancement of knowledge and enlightenment. Is that what our world looks like today?
You wake to the alarm clock and roll out of bed and head to the shower, checking your phone along the way. 6:30 a.m., 25 degrees outside, high of 42, cloudy with a 15 percent chance of rain. Three unread text messages. Fifteen new emails. Dow Futures trading lower on the news from Asia.
As you sit down at your desk with a cup of Nespresso, you distract yourself with a flick through Twitter. A journalist you follow has posted an article about the latest controversy over mRNA vaccines. You scroll through the replies. Two of the top replies, with thousands of likes, point to seemingly authoritative scientific studies making opposite claims. Another is a meme. Hundreds of other responses appear alongside, running the gamut from serious, even scholarly, to in-joke mockery. All flash by your eyes in rapid succession.
Centuries ago, our society buried profound differences of conscience, ideas, and faith, and in their place erected facts, which did not seem to rise or fall on pesky political and philosophical questions. But the power of facts is now waning, not because we don’t have enough of them but because we have so many. What is replacing the old hegemony of facts is not a better and more authoritative form of knowledge but a digital deluge that leaves us once again drifting apart.
As the old divisions come back into force, our institutions are haplessly trying to neutralize them. This project is hopeless — and so we must find another way. Learning to live together in truth even when the fact has lost its power is perhaps the most serious moral challenge of the twenty-first century.
Our understanding of what it means to know something about the world has comprehensively changed multiple times in history. It is very hard to get one’s mind fully around this.
In flux are not only the categories of knowable things, but also the kinds of things worth knowing and the limits of what is knowable. What one civilization finds intensely interesting — the horoscope of one’s birth, one’s weight in kilograms — another might find bizarre and nonsensical. How natural our way of knowing the world feels to us, and how difficult it is to grasp another language of knowledge, is something that Jorge Luis Borges tried to convey in an essay where he describes the Celestial Emporium of Benevolent Knowledge, a fictional Chinese encyclopedia that divides animals into “(a) those that belong to the Emperor, (b) embalmed ones, (c) those that are trained, … (f) fabulous ones,” and the real-life Bibliographic Institute of Brussels, which created an internationally standardized decimal classification system that divided the universe into 1,000 categories, including 261: The Church; 263: The Sabbath; 267: Associations. Y. M. C. A., etc.; and 298: Mormonism.
The fact emerged out of a shift between one such way of viewing the world and another. It was a shift toward intense interest in highly specific and mundane experiences, a shift that baffled those who did not speak the new language of knowledge. The early modern astronomer and fact-enthusiast Johannes Kepler compared his work to a hen hunting for grains in dung. It took a century for those grains to accumulate to a respectable haul; until then, they looked and smelled to skeptics like excrement.
The first thing to understand about the fact is that it is not found in nature. The portable, validated knowledge-object that one can simply invoke as a given truth was a creation of the seventeenth century: Before, we had neither the underlying concept nor a word for it. Describing this shift in the 2015 book The Invention of Science: A New History of the Scientific Revolution, the historian David Wootton quotes Wittgenstein’s Tractatus: “‘The world is the totality of facts, not of things.’ There is no translation for this in classical Latin or Elizabethan English.” Before the invention of the fact, one might refer, on the one hand, to things that exist — in Latin res ipsa loquitur, “the thing speaks for itself,” and in Greek to hoti, “that which is” — or, on the other hand, to experiences, observations, and phenomena. The fact, from the Latin verb facere, meaning to do or make, is something different: As we will come to understand, a fact is an action, a deed.
How did you know anything before the fact? There were a few ways.
You could of course know things from experience, yours or others’. But even the most learned gatherers of worldly knowledge — think Herodotus, Galen, or Marco Polo — could, at best, only carefully pile up tidbits of reported experience from many different cultural and natural contexts. You were severely constrained by your own personal experiences and had to rely on reports the truthfulness of which was often impossible to check. And so disagreements and puzzling errors contributed to substantial differences of expert opinion.
For most people, particularly if you couldn’t read, what you experienced was simply determined by your own cultural context and place. What could be known in any field of human activity, whether sailing in Portugal or growing silkworms in China, was simply what worked — the kind of knowledge discovered only through painful trial and error. Anything beyond that was accepted on faith or delegated to the realm of philosophical and religious speculation.
Given the limitations of earthly human experience, philosophers and theologians looked higher for truth. This earthly vale of tears was churning, inconstant, corruptible, slippery. And so, theoretical knowledge of unreachable heights was held to be both truer and nobler than experiential knowledge.
If you had a thorny question about God or the natural order that you could not answer by abstract reasoning, you had another path of knowledge available: the authority of scripture or the ancients, especially Aristotle — “the Philosopher,” as he was commonly cited. Questions such as those about the nature of angels or whether the equator had a climate suitable for human habitation were addressed by referring to various philosophical arguments. And so you had the curious case of urbane and learned men believing things that illiterate craftsmen would have known to be false, for instance that garlic juice would deactivate a magnet. Sailors, being both users of magnets and lovers of garlic, must surely have known that Plutarch’s opinion on this matter — part of a grander theory of antipathy and attraction — was hogwash. But rare was the sailor who could read and write, and anyway, who are you going to believe, the great philosopher of antiquity or some unlettered sea dog?
These ways of knowing all had something in common: to know something was to be able to fit it into the world’s underlying order. Because this order was more fundamental and, in some sense, more true than any particular experience, the various categories, measurements, and models that accounted for experience tended to expand or contract to accommodate the underlying order. Numbers in scholarly works might be rounded to a nearby digit of numerological significance. Prior to the fourteenth century, the number of hours of daylight remained fixed throughout the year, the units themselves lengthening or shortening depending on the rising and setting of the sun.
In Europe before the fact, knowing felt like fitting something — an observation, a concept, a precept — into a bigger design, and the better it fit and the more it resonated symbolically, the truer it felt.
An English yeoman farmer rubs his eyes to the sound of the cock crowing. The sun’s coming up, the hour of Prime. “Gloria Patri, et Filio, et Spiritui Sancto … ” It’s cold outside, and the frost dusts the grass, but the icy film over the water trough breaks easily. The early days of Advent belie the deep winter to come after Christmas.
He trundles to the mill with a sack of grain to make some flour for the week’s bread. His wife, who is from a nearby estate, says that something about their village’s millstone mucks up the flour. He thinks she’s just avoiding a bit of extra kneading. The news on everyone’s lips is of an unknown new ague afflicting another town in the county. Already seven souls, young and old, men and women, have been struck, and two, an older widow and the village blacksmith, have departed this world. It is said that a hot poultice of linseed meal and mustard, applied to the throat, has cured two or three. He resolves to get some mustard seeds from his cellar that very afternoon.
The Aristotelian philosophy is inept for New discoveries…. And that there is an America of secrets, and unknown Peru of Nature, whose discovery would richly advance [all Arts and Professions], is more than conjecture.
- Joseph Glanvill, The Vanity of Dogmatizing (1661)
Between the 1200s and the 1500s, European society underwent a revolution. During this time, what the French historian Jacques Le Goff called an “atmosphere of calculation” was beginning to suffuse the market towns, monasteries, harbors, and universities. According to Alfred Crosby’s magnificent book The Measure of Reality, the sources and effects of this transformation were numerous. The adoption of clocks and the quantification of time gave the complex Christian liturgy of prayers and calendars of saints a mechanical order. The prominence and density of merchants and burghers in free cities boosted the status of arithmetic and record-keeping, for “every saleable item is at the same time a measured item,” as one medieval philosopher put it. Even the new polyphonic music required a visual notation and a precise, metrical view of time. That a new era had begun in which knowledge might be a quantity, something to be piled up, and not a quality to be discoursed upon, can be seen by the 1500s in developments like Tycho Brahe’s tables of astronomical data and Thomas Tallis’s forty-part composition Spem in alium, soon to be followed by tables, charts, and almanacs of all kinds.
Also in the 1500s, Europeans came to understand that what their voyagers had been poking at was not an island chain or a strange bit of Asia but a new continent, a new hemisphere, really. The feeling that mankind had moved beyond the limits of ancient knowledge can be seen in the frontispiece for Bacon’s Novum Organum (1620), an etching of a galleon sailing between the Pillars of Hercules toward the New World. The New Star of 1572, which Brahe proved to be beyond the Moon and thus in the celestial sphere — previously thought to be immutable — woke up the astronomers. And the invention of movable type helped the followers of Martin Luther to replace the authority of antiquity, and of antiquity’s latter-day defenders, the Roman Papacy, with the printed text itself.
But if there were serious cracks in what Crosby calls the “venerable model” of European knowledge, what was it about the fact that seemed like the solution? The problem with medieval philosophy, according to Francis Bacon, was that philosophers, so literate in the book of God’s Word, failed to pay sufficient attention to the book of God’s works, the book of nature. And the few who bothered to read the book of nature were “like impatient horses champing at the bit,” charging off into grand theories before making careful experiments. What they should have done was to actually check systematically whether garlic deactivated magnets, or whether the bowels of a freshly killed she-goat, dung included, were a successful antivenom.
What then, exactly, is a fact? As we typically think of it today, we don’t construct, produce, generate, make, or decree facts. We establish them. But there’s a curious ambiguity deep in our history between the thing-ness of the fact and the social procedure for transforming an experience into a fact. Consider the early, pre-scientific usage of “fact” in the English common law tradition, where “the facts of the case” were decided by the jury. Whatever the accused person actually did was perhaps knowable only to God, but the facts were the deeds as established by the jury, held to be immutably true for legal purposes. The modern fact is what you get when you put experience on trial. And the fact-making deed is the social process that converts an experience into a thing that can be ported from one place to another, put on a bookshelf, neatly aggregated or re-arranged with others.
By the seventeenth century, the shift had become explicit. In place of the old model’s inconstancy, philosophers and scientists such as Francis Bacon and René Descartes emphasized methodical inquiry and the clear publication of results. Groups of likeminded experimenters founded institutions like the Royal Society in England and the Académie des Sciences in France, which established standards for new techniques, measuring tools, units, and the like. Facts, as far as the new science was concerned, were established not by observation or argument but by procedure.
One goal of procedure was to isolate the natural phenomenon under study from variations in local context and personal experience. Another goal was to make the phenomenon into something public. Strange as it is for us to recognize today, the fact was a social thing: what validated it was that it could be demonstrated to others, or replicated by them, according to an agreed-upon procedure. This shift is summed up in the motto of the Royal Society: Nullius in verba, “take no one’s word for it.”
The other important feature of facts is that they stand apart from theories about what they mean. The Royal Society demanded of experimental reports that “the matter of fact shall be barely stated … and entered so in the Register-book, by order of the Society.” Scientists could suggest “conjectures” as to the cause of the phenomena, but these would be entered into the register book separately. Whereas in the “venerable model,” scientists sought to fit experiences and observations into bigger theories, the new scientists kept fact and interpretive theories firmly apart. Theories and interpretations remained amorphous, hotly contested, tremulous; facts seemed like they stood alone, solid and sure.
The fact had another benefit. In a Europe thoroughly exhausted by religious and political arguments, the fact seemed to stand coolly apart from this too. Founded in the wake of the English Civil War, the Royal Society did not require full agreement on matters of religion or politics. A science conducted with regard to careful descriptions of matters of fact seemed a promising way forward.
The fact would never have conquered European science, much less the globe, if all it had going for it was a new attitude toward knowledge. Certainly it would never have become something everyday people were concerned with. David Wootton writes: “Before the Scientific Revolution facts were few and far between: they were handmade, bespoke rather than mass-produced, they were poorly distributed, they were often unreliable.” For decades after first appearing, facts yielded few definitive insights and disproved few of the reigning theories. The experimental method was often unwieldy and poorly executed, characterized by false starts and dead ends. The ultimate victory of the fact owes as much to the shifting economic and social landscape of early modern Europe as to its validity as a way of doing science.
Facts must be produced. And, just like in any industry, growing the fact industry’s efficiency, investment, and demand will increase supply while reducing cost. In this sense, the Scientific Revolution was an industrial revolution as much as a conceptual one.
For example, the discovery of the Americas and the blossoming East Indies trade led to a maritime boom — which in turn created a powerful new demand for telescopes, compasses, astrolabes, and other navigational equipment — which in turn made it cheaper for astronomers, mathematicians, and physicists to do their work, and funded a new class of skilled artisans who built not only the telescopes but a broad set of commercial and scientific equipment — all of which meant a boom in several industries of fact-production.
But more than anything, it was the movable-type printing press that made the fact possible. What distinguishes alchemy from science is not the method but the shifting social surround: from guilds trading secret handwritten manuscripts to a civil society printing public books. The fact, like the book, is a form of what the French philosopher Bruno Latour calls an “immutable mobile” — a knowledge-thing that one can move, aggregate, and build upon without changing it in substance. The fact and the book are inextricably linked. The printed word is a “mechanism … to irreversibly capture accuracy.”
It was this immutability of printed books that made them so important to the establishment of facts. Handwritten manuscripts had been inconsistent and subject to constant revision. Any kind of technical information was notoriously unreliable: Copyists had a hard enough time maintaining accuracy for texts whose meaning they understood, much less for the impenetrable ideas of experimenters. But for science to function, the details mattered. Printing allowed scientists to share technical schema, tables of data, experiment designs, and inventions far more easily, durably, and accurately, and with illustrative drawings to boot.
The fact is a universalist ideal: it seems to achieve something independent of time and place. And yet the ideal itself arose from local factors, quirks of history, and happy turns of fate. It needed a specific social, economic, and technological setting to flourish and make sense. For example, to do lab work, scientists had to be able to see — but London gets as few as eight hours of daylight in winter. That roadblock was cleared by the Argand oil lamp. Relying on the principles of Antoine Lavoisier’s oxygen experiments, and developed in the late 1700s by one of his students, it was the first major lighting invention in two thousand years. It provided the light of ten candles, the perfect instrument for working, reading, and conducting scientific experiments after dark. Replace the glass chimney with copper, put it under a beaker stand, and the lamp’s clean, steady, controllable burner was also perfect for laboratory chemistry. With the arrival of the Argand lamp, clean, constant light replaced flickering candlelight at any hour of night or day, while precise heat for chemical experiments was merely a twist of the knob away.
What took place with books and lamps happened in many other areas of life: the replacement of variable, inconstant, and local knowledge with steady, replicable, and universal knowledge. The attractiveness of the modern model of facts was that anyone, following the proper method and with readily available tools, could replicate them. Rather than arguing about different descriptions of hotness, you could follow a procedure, step by step from a book, to precisely measure what was newly called “temperature,” and anyone else who did so would get the same result. The victory of this approach over time was not so much the product of a philosophical shift as a shift of social and economic reality: Superior production techniques for laboratory equipment and materials and the spread of scientific societies — including through better postal services and translations of scientific manuscripts — improved the quality of experimentation and knowledge of scientific techniques.
The belief that The Truth is easily accessible if we all agree to some basic procedures came to be understood as a universal truth of humankind — but really what happened is that a specific strand of European civilization became globalized.
If the seventeenth century was when facts were invented, the coffeehouse was where they first took hold. The central feature of the European coffeehouse, besides java imported from the colonies, was communal tables stacked with pamphlets and newspapers providing the latest news and editorial opinion. The combination of news, fresh political tracts, and caffeine led to long conversations in which new ideas could be hashed out and the contours of consensus reached — a process that was aided by the self-selection of interested parties into different coffeehouses. In the Europe of the old venerable model, learning looked like large manuscripts, carefully stewarded over the eons, sometimes literally chained to desks, muttered aloud by monks. In the Europe of the new science, learning looked like stacks of newspapers, journals, and pamphlets, read silently by gentlemen, or boisterously for effect.
The same conditions that revolutionized the production of scientific facts also transformed the creation of all kinds of other facts. The Enlightenment of the Argand lamp was also an awakening from the slumber of superstition and authority stimulated by caffeine. Charles II felt threatened enough to ban coffeehouses in 1675, fearing “that in such Houses, and by occasion of the meetings of such persons therein, divers False, Malitious and Scandalous Reports are devised and spread abroad, to the Defamation of His Majesties Government, and to the Disturbance of the Peace and Quiet of the Realm.”
Coffeehouse culture and the “immutable mobile” of the printed word enabled a variety of ideas adjacent to the realm of the scientific fact. Is it a fact, as John Locke wrote, “that creatures belonging to the same species and rank … should also be equal one amongst another”? Or that the merchant sloop Margaret was ransacked off the coast of Anguilla? Or that the world, as The Most Reverend James Ussher calculated, was created on Sunday, October 23, 4004 b.c.?
In the Royal Society’s strict sense, no. But these kinds of claims still rested on a new mode of reasoning that prized clearly stated assumptions, attention to empirical detail, aspirations toward uniformity, and achieving the concurrence of the right sort of men. Repeatable verification processes had meshed with new means for distributing information and calculating consensus, leading simultaneously to revolutions in science, politics, and finance.
Modern political parties emerged out of this same intellectual mélange, and both Lloyd’s of London (the first modern insurance company) and the London Stock Exchange were founded at coffeehouses and at first operated there, providing new venues to rapidly discern prices, exchange information, and make deals. What the fact is to science, the vote is to politics, and the price to economics. All are cut from the same cloth: accessible public verification procedures and the fixity of the printed word.
Now, what I want is, Facts. Teach these boys and girls nothing but Facts. Facts alone are wanted in life. Plant nothing else, and root out everything else. You can only form the minds of reasoning animals upon Facts: nothing else will ever be of any service to them.
- Charles Dickens, Hard Times (1854)
Facts were becoming cheaper to make, and could almost be industrially produced using readily available measurement tools. And because producing facts required procedures that everyone agreed on, it was essential to standardize the measurement tools, along with their units.
When the French mathematician Marin Mersenne sought to replicate Galileo’s experiment on the speed of falling bodies, he faced a puzzle: Just how long was the braccio, or arm’s-length measurement, that Galileo used to report his results, and how could Mersenne get one in Paris? Even after finding a Florentine braccio in Rome, Mersenne still couldn’t be sure, since Galileo may have used a slightly different one made in Venice.
It took until the very end of the 1700s for an earnest attempt at universal standardization to begin, with the creation of the metric system. Later, in the 1800s, a body of international scientific committees was established to oversee the promulgation of universal standards. Starting from the most obvious measurements of distance, mass, and time, they eventually settled on seven fundamental units for measuring all the physical phenomena in the universe.
Standardization was by no means limited to science. It conquered almost every field of human activity, transforming what had been observations, intuitions, practices, and norms (the pinch of salt) into universal, immutable mobiles (the teaspoon). You began to be able to trust that screws made in one factory would fit nuts made in another, that a clock in the village would tell the same time as one in the city. As facts became more standardized and cheaper, you began to be able to expect people to know certain facts about themselves, their activities, and the world. It was not until the ascendency of the fact — and related things, like paperwork — that it was common to know one’s exact birthday. Today, the International Organization for Standardization (better known by its multilingually standardized acronym, ISO) promulgates over 24,000 standards for everything from calibrating thermometers to the sizes and shapes of wine glasses.
But although the forces of standardization were making facts cheaper and easier to produce, they remained far from free. The specialized instruments necessary to produce these facts — and the trained and trusted measurers — were still scarce. Facts might be made at industrial scale, but they were still made by men, and usually “the right sort of men,” and thus directed at the most important purposes: industry, science, commerce, and government.
At the same time, the facts these institutions demanded were becoming increasingly complex, specialized, and derivative of other facts. A mining company might record not just how much coal a particular mine could produce, but also the coal’s chemical composition and its expected geological distribution. An accounting firm might calculate not just basic profits and losses, but asset depreciation and currency fluctuations. Our aspirations for an ever-more-complete account of the world kept up with our growing efficiency at producing facts.
The cost of such facts was coming down, but it remained high enough to have a number of important implications. In many cases, verification procedures were complex enough that while fellow professionals could audit a fact, the process was opaque to the layman. The expense and difficulty of generating facts, and the victories that the right facts could deliver — in business, science, or politics — thus led to new industries for generating and auditing facts.
This was the golden age of almanacs, encyclopedias, trade journals, and statistical compendia. European fiction of the 1800s is likewise replete with the character — the German scholar, the Russian anarchist, the English beancounter — who refuses to believe anything that does not come before him in the form of an established scientific fact or a mathematical proof. Novelists, in critiquing this tendency, could expect their readers to be familiar with such fact-mongers because they had proliferated in Europe by this time.
The wealth and status accruing to fact generation stimulated the development of professional norms and virtues not only for scientists but also bankers, accountants, journalists, spies, doctors, lawyers, and dozens of other newly professionalizing trades. You needed professionals you could trust, who agreed on the same procedures, kept the same kinds of complex records, and could continue to generate reliable facts. And though raw data may have been publicly available in principle, the difficulty of accessing and making sense of it left its interpretation to the experts, with laymen relying on more accessible summations. Because the facts tended to pile up in the same places, and to be piled up by professionals who shared interests and outlook, reality appeared to cohere together. In the age of the fact, knowing something felt like establishing it as true with your own two eyes, or referencing an authoritative expert who you could trust had a command of the facts.
Even in the twentieth century, with the growing power of radio and television for entertainment and sharing information, print continued to reign as the ultimate source of authority. As it had since the beginning, something about the letters sitting there in stark immutable relief bore a spiritual connection to the transcendent authority of the fact established by valid procedure. The venerable Encyclopedia Britannica was first printed in three volumes in 1771. It grew in both size and popularity, peaking in 1990 with the thirty-two-volume 15th edition, selling 117,000 sets in the United States alone, with revenues of $650 million. In 2010, it would go out of print.
A junior clerk at an insurance firm wakes to the ringing of his alarm clock. It’s a cold morning, according to the mercury, about 25 degrees. After a cup of Folgers coffee from the percolator, he commutes to work on his usual tram (#31–Downtown), picking up the morning edition of the paper along the way.
After punching in on the timeclock, he sits down at his appointed station. Amid the usual forms that the file clerk has dropped off for his daily work, he has on his desk a special request memo from his boss, asking him to produce figures on life expectancy for some actuarial table the firm requires.
He retrieves from inside his desk a copy of The World Almanac and Book of Facts (AY67.N5 W7 in the Library of Congress catalog system). After flipping through dozens of pages of advertisements, he finds the tables of mortality statistics and begins to make the required calculations. Strange how, for the common fellow, the dividing line between an answer easy to ascertain and one nearly impossible to arrive at was merely whether it was printed in The World Almanac.
Sometime in the last fifty years, the realm of facts grew so much that it became bigger than reality itself — the map became larger than the territory. We now have facts upon facts, facts about facts, facts that bring with them a detailed history of all past permutations of said fact and all possible future simulated values, rank-ordered by probability. Facts have progressed from medieval scarcity to modern abundance to contemporary superabundance.
Who could look at the 15th edition of the Encyclopedia Britannica, with 410,000 entries across thirty-two volumes, and not see abundance? At about $1,500 a set, that’s only a third of a penny per entry! What could be cheaper? For starters, Microsoft’s Encarta software, which, while not as good-looking on the shelf, cost only a fifth of a penny per entry at its final edition in 2009 — the whole bundle was priced at just $30. But cheaper still is Wikipedia, which currently has 6,635,949 English-language entries at a cost per entry of absolutely nothing. Abundance is a third of a penny per entry. Superabundance is sixteen times as many entries for free.
The concept of superabundance owes especially to the work of media theorist L. M. Sacasas. In an essay on the transformative power of digital technology published in these pages, he writes that “super-abundance … encourages the view that truth isn’t real: Whatever view you want to validate, you’ll find facts to support it. All information is also now potentially disinformation.” Sacasas continues to investigate the phenomenon in his newsletter, The Convivial Society.
What makes superabundance possible is the increasing automation of fact production. Modern fact-making processes — the scientific method, actuarial sciences, process engineering, and so forth — still depended on skilled people well into the twentieth century. Even where a computer like an IBM System/360 mainframe might be doing the tabulating, the work of both generating inputs and checking the result would often be done by hand.
But we have now largely uploaded fact production into cybernetic systems of sensors, algorithms, databases, and feeds. The agreed-upon social procedures for fact generation of almost every kind have been programmed into these sensor–computer networks, which then produce those facts on everything from weather patterns to financial trends to sports statistics programmatically, freeing up skilled workers for other tasks.
For instance, most people today are unlikely to encounter an analog thermometer outside of a school science experiment — much less build such an instrument themselves. Mostly, we think of temperature as something a computer records and provides to us on a phone, the TV, or a car dashboard. The National Weather Service and other similar institutions produce highly accurate readings from networks of tens of thousands of weather stations, which are no longer buildings with human operators but miniaturized instruments automatically transmitting their readings over radio and the Internet.
The weather app Dark Sky took the automation of data interpretation even further. Based on the user’s real-time location, the app pulled radar images of nearby clouds and extrapolated where those clouds would be in the next hour or so to provide a personalized weather forecast. Unlike a human meteorologist, Dark Sky did not attempt to “understand” weather patterns: it didn’t look at underlying factors of air temperature and pressure or humidity. But with its access to location data and local radar imagery, and with a sleek user interface, it proved a popular, if not entirely reliable, tool for the database age.
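Dark Sky never published its method, so the following is only a toy sketch of the general idea behind radar extrapolation, or “nowcasting”: estimate how the precipitation pattern moved between the last two radar frames, then assume it keeps drifting the same way. The synthetic frames, the brute-force matching, and the wrap-around behavior of np.roll are all simplifications assumed here, not the app’s actual algorithm.

```python
import numpy as np

def estimate_shift(prev_frame, curr_frame, max_shift=10):
    """Brute-force search for the (dy, dx) displacement that best maps the
    previous radar frame onto the current one (smallest mean squared error)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(prev_frame, (dy, dx), axis=(0, 1))
            err = np.mean((shifted - curr_frame) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def nowcast(curr_frame, shift, steps=6):
    """Extrapolate forward: assume the precipitation field keeps drifting by the
    same amount each time step (np.roll wraps at the edges, a toy shortcut)."""
    dy, dx = shift
    return [np.roll(curr_frame, (dy * k, dx * k), axis=(0, 1)) for k in range(1, steps + 1)]

# Synthetic stand-ins for two consecutive radar frames: a rain blob that has
# drifted 3 pixels down and 2 pixels right between observations.
prev_frame = np.zeros((60, 60)); prev_frame[20:30, 20:30] = 1.0
curr_frame = np.roll(prev_frame, (3, 2), axis=(0, 1))

shift = estimate_shift(prev_frame, curr_frame)   # -> (3, 2)
forecast = nowcast(curr_frame, shift)            # the next six frames, by assumption
```

The point is how little “understanding” is involved: the sketch never consults temperature, pressure, or humidity, only the pattern of pixels and where it seems to be headed.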
Even in the experimental sciences, where facts remain relatively costly, they are increasingly generated automatically by digital instruments and not from the procedures carried out directly by scientists. For example, when scientists were deciphering the structure of DNA in the 1950s, they needed to carefully develop X-ray photos of crystallized DNA and then ponder what these blurry images might tell them about the shape and function of the molecule. But by the time of the Human Genome Project in the 1990s, scientists were digitally assembling short pieces of DNA into a complete sequence for each chromosome, relying on automated computer algorithms to puzzle out how it all fit together.
In 1986, Bruno Latour still recorded that almost every scientific instrument of whatever size and complexity resulted in some sign or graph being inscribed on paper. No longer.
Few things feel more immutable or fixed than a ball of cold, solid steel. But if you have a million of them, a strange thing happens: they will behave like a fluid, sloshing this way and that, sliding underfoot, unpredictable. In the same way and for the same reason, having a small number of facts feels like certainty and understanding; having a million feels like uncertainty and befuddlement. The facts don’t lie, but data sure does.
The shift from printed facts (abundance) to computerized data (superabundance) has completely upended the economics of knowledge. As the supply of facts approaches infinity, information becomes too cheap to value.
What facts are you willing to pay for today? When was the last time you bought an almanac or a reference work?
And what is it that those at the cutting edge of data — climate scientists, particle physicists, hedge fund managers, analysts — sell? It isn’t the facts themselves. It isn’t really the data itself, the full collection of superabundant facts, though they might sell access to the whole as a commodity. Where these professionals really generate value today is in selling narratives built from the data, interpretations and theories bolstered by evidence, especially those with predictive value about the future. The generation of facts in accordance with professional standards has increasingly been automated and outsourced; it is, in any case, no longer a matter of elite concern. Status and wealth now accrue to those “symbolic analysts” who can summon the most powerful or profitable narratives from the database.
L.M. Sacasas explains the transition from the dominance of the consensus narrative to that of the database in his essay “Narrative Collapse.” When facts were scarce, scientists, journalists, and other professionals fought over the single dominant interpretation. When both generating new facts and recalling them were relatively costly, this dominant narrative held a powerful sway. Fact-generating professionals piled up facts in support of a limited number of stories about the underlying reality. A unified sense of the world was a byproduct of the scarcity of resources for generating new knowledge and telling different stories.
But in a world of superabundant, readily recalled facts, generating the umpteenth fact rarely gets you much. More valuable is skill in rapidly re-aligning facts and assimilating new information into ever-changing stories. Professionals create value by generating, defending, and extending compelling pathways through the database of facts: media narratives, scientific theories, financial predictions, tax law interpretations, and so forth. The collapse of any particular narrative due to new information only marginally reshapes the database of all possible narratives.
Consider this change at the level of the high school research paper. In 1990, a student tasked with writing a research paper would sidle up to the school librarian’s desk or open the card catalog to figure out what sources were available on her chosen subject. An especially enterprising and dedicated student, dissatisfied with the offerings available in the school’s library, might decamp to her city’s library or maybe even a local university’s. For most purposes and for the vast majority of people, the universe of the knowable was whatever could be found on the shelves of the local library.
The student would have chosen her subject, and maybe she would already have definite opinions about the factors endangering the giant panda or the problem of child labor in Indonesian shoe factories. But the evidence she could marshal would be greatly constrained by what was available to her. Because the most mainstream and generally accepted sources were vastly more likely to be represented at her library, the evidence she had at hand would most likely reflect expert consensus and mainstream beliefs about the subject. It wasn’t a conspiracy that alternative sources, maverick thinkers, or outsider researchers were nowhere to be found, but it also wasn’t a coincidence. Just by shooting for a solid B+, the student would probably produce a paper with more-or-less reputable sources, reflective of the mainstream debate about the subject.
Today, that same student would take a vastly different approach. For starters, it wouldn’t begin in the library. On her computer or on her phone, the student would google (or search TikTok) and browse Wikipedia for a preliminary sketch of the subject. Intent on using only the best sources, she would stuff her paper with argument-supporting citations found via Google Scholar, Web of Science, LexisNexis, JSTOR, HathiTrust, Archive.org, and other databases that put all of human knowledge at her fingertips.
Thirty years ago, writing a good research paper meant assembling a cohesive argument from the available facts. Today, it means assembling the facts from a substantially larger pool to fit the desired narrative. It isn’t just indifferent students who do this, of course. Anyone who works with data knows the temptation, or sometimes the necessity, of squeezing every bit of good news out of an Excel spreadsheet, web traffic report, or regression analysis.
After scrolling through Twitter, you get back to your morning’s work: preparing a PowerPoint presentation on the growth of a new product for your boss. You play around with OpenAI’s DALL·E 2 to create the perfect clip art for the project and start crunching the numbers. You don’t need to find them yourself — you have an analytics dashboard that some data scientist put together. Your job is to create the most compelling visualization of steady growth. There has been growth, but it hasn’t really been all that steady or dramatic. But if you choose the right parameters from a menu of thousands — the right metric, the right timeframe, the right kind of graph — you can present the data in the best possible light. On to the next slide.
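What that parameter-hunting looks like in practice can be sketched with invented numbers standing in for whatever the dashboard exports: scan every possible starting month and keep whichever one makes the growth figure look steepest.

```python
import numpy as np
import pandas as pd

# Hypothetical monthly active-user counts, as they might come off an analytics dashboard.
rng = np.random.default_rng(0)
months = pd.date_range("2021-01-01", periods=24, freq="MS")
users = pd.Series(1000 + np.cumsum(rng.normal(15, 60, size=24)), index=months)

# Try every possible starting month (leaving at least six months of "trend")
# and keep whichever one makes the growth number look best.
candidates = []
for start in range(len(users) - 6):
    window = users.iloc[start:]
    growth = window.iloc[-1] / window.iloc[0] - 1
    candidates.append((growth, window.index[0].strftime("%B %Y")))

best_growth, best_start = max(candidates)
print(f"Growth since {best_start}: {best_growth:+.0%}")   # the slide's headline number
```

Nothing in the spreadsheet is false; the choice of window does all the work.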
You decide to go out for lunch and try someplace new. You pull up Yelp and scroll through the dozens of restaurant options within a short walking distance. You aimlessly check out a classic diner, a tapas restaurant, and a new Thai place. It’s just lunch, but it also feels like you shouldn’t miss out on the chance to make the most of it. You sort of feel like a burger, but the diner doesn’t make it on any “best of” burger lists (you quickly checked). Do you pick the good-but-not-special burger from the diner or the Michelin Guide–recommended Thai place that you don’t really feel like? Whatever you choose, you know there will be a pang of FOMO: Fear Of Missing Out, the price you pay for having a world of lunch-related data at your fingertips but only stomach enough for one.
Being limited to a smaller set of facts used to also require something else: trust in the institutions and experts that credentialed the facts. This trust rested on the belief that they were faithful stewards on behalf of the public, carrying out those social verification procedures so that you did not need to.
If you refused to trust the authority and judgment of the New Yorker, the New England Journal of Medicine, or Harvard University Press, then you weren’t going to be able to write your paper. And whatever ideas or thinkers did not meet their editorial judgments or fact-checking standards would simply never appear in your field of view. There’s a reason why anti-institutional conspiratorializing — rejecting the government, the media, The Man — was a kind of vague vibes-based paranoia. You could reject those authorities, but the tradeoff was that this cast your arguments adrift on the vast sea of feeling, intuition, personal experience, and hearsay.
And so, the automatic digital production of superabundant data also led to the apparent liberation of facts from the authorities that had previously generated and verified them. Produced automatically by computers, the data seem to stand apart from the messy social process that once gave them authority. Institutions, expertise, the scientific process, trust, authority, verification — all sink into the invisible background, and the facts seem readily available for application in diverging realities. The computer will keep giving you the data, and the data will keep seeming true and useful, even if you have no understanding of or faith in the underlying theory. As rain falls on both the just and the unjust, so does an iPhone’s GPS navigate equally well for NASA physicists and for Flat Earthers.
Holding data apart from their institutional context, you can manipulate them in any way you want. If the fact is what you get when you put experience on trial, data is what lets you stand in for the jury: re-examining the evidence, re-playing the tape, reading the transcripts, coming to your own conclusions. No longer does accessing the facts require navigating an institution — a court clerk, a prosecutor, a police department — that might put them in context: you can find it all on the Internet. Part of the explosion of interest in the true-crime genre is that superfans can easily play along at home.
At the same time, we have become positively suspicious anytime an institution asks us to rely on its old-fashioned authority or its adherence to the proper verification procedure. It seems paltry and readily manipulated, compared to the independent testimony of an automated recording. As body cameras have proliferated, we’ve become suspicious — perhaps justifiably — of police testimony whenever footage is absent or unreleased. We take a friend’s report on a new restaurant seriously, but we still check Yelp. We have taken nullius in verba — “take no one’s word for it” — to unimaginable levels: We no longer even trust our own senses and memories if we can’t back them up with computer recall.
What is data good for? It isn’t just an arms race of facts — “let me just check one more review.” The fact gives you a knowledge-thing that you can take to the bank. But it can’t go any further than that. If you have enough facts, you can try to build and test a theory to explain and understand the world, and hopefully also make some useful predictions. But if you have a metric ton of facts, and a computer powerful enough to sort through them, you can skip explaining the world with theories and go straight to predictions. Statistical techniques can find patterns and make predictions right off the data — no need for humans to come up with hypotheses to test or theories to understand. If facts unlock the secrets of nature, data unlocks the future.
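A minimal sketch, with synthetic data and an off-the-shelf model from scikit-learn, of what prediction “right off the data” means: no hypothesis about which columns matter, or why, is ever stated.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical table: rows are past days, columns are whatever signals got logged
# (humidity, foot traffic, ad spend...). There is no theory of the system here.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y = 3 * X[:, 2] - 2 * X[:, 7] + rng.normal(scale=0.5, size=1000)  # unknown to the analyst

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:800], y[:800])

# Out-of-sample prediction, straight off the data: patterns, not explanations.
print("R^2 on held-out days:", round(model.score(X[800:], y[800:]), 2))
```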
Instead of laboriously constructing theories out of hard-won facts, analysts with access to vast troves of data can now conduct simulations of all possible futures. Tinker with a few variables and you can figure out which outcomes are the most likely under a variety of circumstances. Given enough data, you can model anything.
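And a correspondingly minimal sketch of “simulating possible futures”: a toy revenue model, invented from whole cloth, in which you tinker with a couple of variables and read off the distribution of outcomes.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_year(growth_mean, shock_prob, n_runs=100_000):
    """Toy model: next year's revenue under random growth and a possible demand shock."""
    growth = rng.normal(growth_mean, 0.05, n_runs)
    shock = rng.random(n_runs) < shock_prob
    return 100 * (1 + growth) * np.where(shock, 0.8, 1.0)

# Tinker with the assumptions and compare the futures they imply.
for growth_mean, shock_prob in [(0.03, 0.1), (0.03, 0.3), (0.06, 0.1)]:
    outcomes = simulate_year(growth_mean, shock_prob)
    print(growth_mean, shock_prob,
          "median:", round(np.median(outcomes), 1),
          "5th percentile:", round(np.percentile(outcomes, 5), 1))
```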
But this power — to explore not just reality as it is, but all the realities that might be — has brought about a new danger. If the temptation of the age of facts was to believe that the only things one could know were those that procedural reason or science validated, the temptation of the age of data is to believe that any coherent narrative path that can be charted through the data has a claim to truth, that alternative facts permit alternate realities.
Decades before the superabundance of facts swept over politics and media, it began to revolutionize finance in the 1980s. Deregulation of markets and the adoption of information technology led to a glut of data for financial institutions to sift through and find the most profitable narratives. Computers could handle making trades, evaluating loans, or collating financial data; the value of skill in these social fact-generating procedures plummeted, while the value of profit-maximizing, creative, and complex strategies built upon these automated systems skyrocketed. Tell the right stories of discounted future cash flows and risk levels and sell these stories to markets, investors, and regulators, and you could turn the straw of accounting and market data into gold. In the world of data, you let the computers write the balance sheet, while you figure out how to structure millions of transactions between corporate entities so they tell a story of laptops designed in California, assembled in Taiwan, and sold in Canada that somehow generated all their profits in low-tax Ireland. One result of this transformation has been the steady destruction of professional norms of fidelity for corporate lawyers, investment bankers, tax accountants, and credit-rating agencies.
Consider how hedge fund manager Michael Burry set up what has come to be known as “The Big Short” of the American housing market in the mid-2000s, later the title of a famous book and film about the crash. As Burry became interested in subprime mortgages, he began reading prospectuses for a common financial instrument in the industry: collateralized debt obligations, which in this case were bundles of mortgages sold as investments. Supposedly, these CDOs would sort, rank, and splice together thousands of mortgages to create a financial instrument that was less risky than the underlying loans. Despite a market for these securities worth trillions of dollars, Burry was almost certainly the first human being (other than the lawyers who wrote them) to actually read and contemplate the documents that explained the alchemy by which these financial instruments transmuted high-risk mortgages into stable, reliable investments. Every person involved in producing a particular CDO was outsourcing critical inquiry to algorithms, from the loan officers writing particular mortgages, to the ratings-agency analyst scoring the overall CDO, to the bankers buying the completed mortgage-backed security.
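The pooling-and-tranching logic those prospectuses described can be caricatured in a few lines. Everything below is invented: the default rate, the assumption that defaults are independent, the 80/20 split. That is the point, since the claim that the senior slice is safer than the underlying loans holds only as long as those assumptions do.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pool a thousand risky loans, then pay the "senior" tranche first and give the
# "junior" tranche whatever is left. Numbers here are illustrative only.
n_loans, principal = 1000, 100
defaults = rng.random(n_loans) < 0.07            # assume independent 7% default risk
collected = principal * (n_loans - defaults.sum())

senior_claim = 0.8 * principal * n_loans         # first claim on 80% of the pool
senior_paid = min(collected, senior_claim)
junior_paid = max(collected - senior_claim, 0)

print("Senior tranche recovers:", senior_paid / senior_claim)                  # ~1.0
print("Junior tranche recovers:", junior_paid / (0.2 * principal * n_loans))   # ~0.65
```

If defaults turn out to be correlated, with many borrowers failing together as they did in 2007–2008, far more than 20 percent of the pool can evaporate and the senior slice is not safe at all; that, roughly, is the gap between story and reality that Burry went looking for.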
At every stage, the humans involved outsourced a question of reality to automated, computerized procedure. The temptation of the age of data, again, is to believe that any coherent narrative path that can be charted through the data has a claim to truth. Burry made billions of dollars by finding a story that everyone believed, where all the procedures had been followed, all the facts piled up, that seemed real — but wasn’t.
But years before the Global Financial Crisis of 2007–2008, another scandal illustrated in almost every detail the power of data and the allure of simulated realities. The scandal took down a company declared by Fortune magazine “America’s Most Innovative Company” for six years running, until it collapsed in 2001 in one of the biggest bankruptcies in history: Enron.
The Enron story is often remembered as simple fraud: the company made up fake revenue and hid its losses. The real story is more baroque — and more interesting. For one thing, Enron really was an innovative corporation, which is why much of its revenue was both substantial and real. Its innovation was that it figured out how to transform natural gas (and later electricity and broadband) from a sleepy business driven by high capital costs and complicated arrangements of buyers and sellers into liquid markets where buyers and sellers could easily tailor transactions to their needs — and where Enron, acting as the middleman, booked major profits.
Think of a contract as a kind of story made into a fact. It has characters, action, and a setting. “In one month, Jim promises to buy five apples from Bob for a dollar each.” In the world before data, natural gas contracts could tell only a very limited number of stories. Compared to other fuels, you can’t readily store natural gas, and it’s difficult to even leave in the ground once you’ve started building a well. Natural gas is also hard to move around (it’s a flammable gas, after all), and so how much gas would be available where and when were difficult questions to answer. The physical limitations of the market meant that the two main kinds of stories natural gas contracts could tell were either long or short ones: “Bob will buy 10,000 cubic feet of gas every month for two years at $7.75 per thousand cubic feet,” or “Bob will buy 10,000 cubic feet of gas right now at whatever price the market is demanding.”
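To make the idea of a contract as a data structure concrete, here is a hypothetical sketch of the only two shapes the old gas story could take. The $7.75 long-term price comes from the sentence above; the names, dates, and spot price are invented.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LongTermContract:
    buyer: str
    volume_mcf: int        # thousand cubic feet per month
    price_per_mcf: float   # fixed for the life of the contract
    start: date
    months: int

@dataclass
class SpotPurchase:
    buyer: str
    volume_mcf: int
    price_per_mcf: float   # whatever the market is demanding today
    delivery: date

bob_long = LongTermContract("Bob", 10, 7.75, date(1985, 1, 1), 24)   # the two-year story
bob_spot = SpotPurchase("Bob", 10, 9.10, date(1985, 1, 15))          # the right-now story
```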
The underlying problem was that to base these stories on solid facts required answering a lot of complicated questions: Who will produce the gas? Does the producer have sufficient reserves available to meet demand? Who will process the gas, and does the processor have the capacity to do so when the consumer needs it? Who will transport it? Is there enough pipeline capacity to bring gas from producers to processers to consumers? And just who are the consumers, and how much will they need today and in the future? Will consumers reduce demand in the future and switch to buying more oil instead if oil drops in price? Will demand drop if the weather gets warmer than expected?
Think about the kind of fact the market price of gas was. It was a tidy, portable piece of information containing within itself the needs and demands of gas producers, consumers, and transporters. A social, public process of price discovery tells us that today a cubic foot of gas delivered to your home costs this much. But the certainty and simplicity of the gas fact had a built-in vulnerability: Nobody really knew what the price would be tomorrow, or next month. You cannot, after all, have facts about the future.
Or can you? Commodities traders had long ago invented the “futures contract,” which guaranteed purchase of a commodity at a certain price in the future. This worked great for commodities that were easily stored, transported, or substituted. But natural gas was tricky — storage capacity was limited, so it had to flow in the right amount in the right pipeline, from an active producer. How could you make it behave like grain or oil or another commodity?
Two men at Enron managed to figure it out. Jeffrey Skilling and Rich Kinder both believed in the power of data to break through the chains of the stubborn fact.
Jeffrey Skilling was a brilliant McKinsey partner educated at Harvard Business School. He had been consulting for Enron and thought he had come up with a unique solution to the industry’s problems: the Gas Bank. The vision for the Gas Bank was to use Enron’s position in the pipeline market — and the data it provided on the needs of different kinds of buyers and sellers — to treat gas as a commodity.
Throughout the 1980s, Enron had invested in information technology that allowed it to monitor and control flows across its pipelines and coordinate information across the business. The Gas Bank would allow it to put this data to work. At the level of physical stuff, little would change — gas would flow from producers and storage to customers. But within Enron’s ledgers, it would look completely different. Rather than simply charging a fee for transporting gas from producers to sellers, Enron itself would actually buy gas from producers (who would act like depositors) before selling it to customers (like a bank’s borrowers), making a profit on the difference. Whether it was the customers or the producers who wanted to enter into contracts, and whether long-term or short-term, didn’t matter — as long as Enron’s portfolio was balanced between depositors and borrowers, it would all work out.
The old hands at Enron didn’t buy the idea. Their heads full of gas facts, they thought Skilling’s vision was a pipe dream, a typical flight of consultant fancy by someone who didn’t understand the hard constraints they faced. But one person who understood the natural gas industry and Enron’s operations better than anyone else was on board: COO Rich Kinder.
Kinder understood that the problem with the natural gas industry was not ultimately a pipeline problem or an investment problem but a risk problem, which is an information problem. Kinder saw that the Gas Bank would let Enron write contracts that told far more complicated kinds of stories, using computer systems to validate the requisite facts on the fly. For example, a utility might want to lock in a low price for next winter’s gas. This would be a risky bet for Enron, unless the company already knew it had enough unsold gas at a lower price from another contract, in which case it could make a tidy profit selling the future guarantee at no risk to itself.
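The matching logic described here can be sketched in a few lines: before promising a utility a fixed price for next winter, check whether the book already holds enough unsold, cheaper gas for the same period. The contract records, names, and prices below are invented, and the real systems tracked far more (reserves, processing, pipeline capacity) than this toy book does.

```python
# A toy "book" of supply contracts already signed for next winter.
supply_book = [
    {"producer": "WellCo",  "volume_mcf": 50_000, "price": 1.80, "period": "1991-winter"},
    {"producer": "GulfGas", "volume_mcf": 30_000, "price": 2.10, "period": "1991-winter"},
]

def can_quote(volume_mcf, offered_price, period, book):
    """True if unsold supply for the period covers the requested volume at a margin,
    i.e. the fixed-price promise to the utility carries no price risk for the seller."""
    usable = [c for c in book if c["period"] == period and c["price"] < offered_price]
    return sum(c["volume_mcf"] for c in usable) >= volume_mcf

# A utility asks to lock in 60,000 Mcf for next winter at $2.25 per Mcf.
print(can_quote(60_000, 2.25, "1991-winter", supply_book))   # True: a riskless profit
```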
None of this was illusory. Enron really did solve this thorny information problem, using its combination of market position, size, and data access to create a much more robust natural gas market that was better for everyone. And while the first breakthrough in the Gas Bank — getting gas producers to sign long-term development financing contracts — was just a clever business strategy, it was Enron’s investments in information technology that allowed it to track and manage increasingly complex natural gas contracts and to act as the market maker for the whole industry. Before, the obstinate physicality of natural gas had made it organizationally impossible to write sophisticated contracts for different market actors, since for every contract the reserves, production, processing, pipeline capacity, and more had to be recalculated. Computer automation increasingly freed the traders from worrying about these details, making a once expensive cognitive task very cheap. As Bethany McLean and Peter Elkind described it in their book The Smartest Guys in the Room, “Skilling’s innovation had the effect of freeing natural gas from physical qualities, from the constraints of molecules and movement.”
But once Enron transformed the gas market, it faced a choice: was it a gas company with a successful finance and trading arm, or was it a financial company that happened to own pipelines? There was a reason why only Enron could create the Gas Bank — the Wall Street firms that were interested in financializing the natural gas industry did not have the market information or understanding of the nuances of the business to make it work. But once Enron had solved this problem, its profits from trading and hedging on natural gas contracts quickly outstripped its profits from building and running the pipelines and other physical infrastructure through which actual natural gas was delivered.
Kinder and Skilling came to stand for two different sides of this problem. Both men relied on the optimization and risk management made possible only by computerization and data, and on the new kinds of stories they made it possible to tell. But for Kinder, all of this activity was in the service of producing a better gas industry. Skilling had grander ambitions. He saw the actual building and running of pipelines as a bad business to be in. The Gas Bank meant that Enron was moving up the value chain. Enron, Skilling thought, should farm out the low-margin work of running pipelines and instead concentrate on the much more profitable and dynamic financial business. If Enron controlled the underlying contract, it didn’t matter whether it used its own pipelines to transport gas or a competitor’s.
In Jeffrey Skilling’s vision, Enron would move toward an asset-light model, always on the hunt for “value.” It would build out its trading desk, expand the market for financial instruments for gas, pioneer new markets for trading electricity and broadband as commodities, and master the art of project financing for infrastructure development. And it would pay for all of this activity by selling off, wherever it made sense, the underlying physical assets in extracting, moving, and using molecules, whether gas or otherwise.
But the same empire-building logic could be applied within Enron itself. If profit and value emerged not from underlying reality but from information, risk assessment, and financial instruments, well then Enron itself could be commoditized, financialized, derivatized. Under Skilling, first as COO and then CEO, Enron’s primary business was not the production or delivery of energy. Enron was, instead, the world leader in corporate accounting, harnessing its command of data on prices, assets, and revenues — along with loopholes in methods for bookkeeping, regulation, and finance — to paint an appealing picture for investors and regulators about the state of its finances.
Skilling believed that the fundamental value of Enron was the value of its stock, and the value of its stock was the story about the future that the market believed in. Even if the actual gas or oil or electricity that Enron brokered contracts for did not change hands for months or years (or ever!), the profits existed in the real world today, the shadow of the future asserting itself in the present. All the stuff the business did, it did to create the facts needed to tell the best story, to buttress the most profitable future. As long as Wall Street kept buying the story, and as long as Enron could manipulate real-world activity in such a way as to generate the data it needed to maintain its narrative, the value of Enron would rise. Pursuing profit and not being held back by any allegiance to the “real world,” Skilling sought to steer Enron, and the world, into an alternate reality of pure financialization. But gravity re-asserted itself and the company imploded. It went bankrupt in 2001, and Skilling was convicted on multiple counts of fraud and fined millions of dollars.
Rich Kinder had taken another path. Leaving Enron in 1997, he formed a new partnership, Kinder Morgan, arranging with Enron to purchase part of its humdrum old pipeline business. He sought value in the physical world of atoms and molecules, after it had been transformed by data. The new financialized natural gas markets generated nuanced data on where market demand outstripped pipeline capacity. Kinder’s new company used this data to return to the business of using pipelines to deliver gas where it was needed, but now more efficiently than ever.
Kinder Morgan today uses all sorts of advanced technology in its pipeline management and energy services, including a partnership with data analysis pioneer Palantir to integrate data across its network, but its financial operations are transparent and grounded in the realities of the energy sector. Kinder Morgan relies on the wizardry of data to produce efficiencies and manage a sprawling pipeline system many times larger than what was possible in the age of analog systems, but not to spin up alluring alternative futures. The contrast to Skilling’s Enron was not subtle in the title of a 2003 Kinder Morgan report: “Same Old Boring Stuff: Real Assets, Real Earnings, Real Cash.”
At the start of Kinder’s new pipeline company in 1997, few would have guessed that within a decade Kinder Morgan would be worth more than Enron — or that Skilling would end up going to prison while Kinder would become a multibillionaire.
The story of Enron is not only about hubris and accounting fraud, though it is surely about that. It is about whether the superabundance of facts and the control promised by data-driven prediction enable us finally to escape the impositions of material reality. Are you using data to effect some change in the real world, or are you shuffling around piles of facts in support of your preferred stories about the future? Are you more loyal to the world you can hold in your hands or to the beautiful vision on the screen?
What do facts feel like today? The word that was once fixed has become slippery. We are distrustful of the facts because we know there are always more in the database. If we’re not careful, the facts get swapped out under our noses: stealth edits in the historical record, retracted papers vanishing into the ether, new studies disproving the old. The networked computer degrades the fact’s status as an “immutable mobile.”
The difference between honest, data-supported materiality and deceitful, data-backed alternate realities — between Rich Kinder and Jeffrey Skilling — is not a matter of who has the facts. The Skillings of the world will always be able to pull up the requisite facts in the database, right until the end. The difference is instead a matter of who seeks to find the truest path through the data, not the path that is the most persuasive but the one that is the most responsive to reality.
This means that we too have a choice to make. The fact had abolished trust in authority (“take no one’s word for it!”), but the age of the database returns trust in a higher authority to center stage. You’re going to have to trust someone, and you can’t make your way simply by listening to those who claim the power of facts, because everyone does that now.
The temptation will be to listen to the people — the pundits, the politicians, the entrepreneurs — who weave the most appealing story. They may have facts on their side, and the story will be powerful, inspiring, engaging, and profitable. But if it bears no allegiance to reality, at some point the music will stop, the bubble will burst, and the piper will be paid.
But there will be others who neither adhere to the black-and-white simplicity of the familiar stories nor appeal to our craving for colorful spectacle. Their stories may have flecks of gray and unexpected colors cutting across the familiar battle lines. They may not be telling you all that you want to hear, because the truth is sometimes more boring and sometimes more complicated than we like to imagine. These are the ones who maintain an allegiance to something beyond the narrative sandcastle of superabundant facts. They may be worth following. ♣
There have been two dominant narratives about the rise of misinformation and conspiracy theories in American public life.
What we can, without prejudice, call the establishment narrative — put forward by dominant foundations, government agencies, NGOs, the mainstream press, the RAND Corporation — holds that the misinformation age was launched by the Internet boom, the loss of media gatekeepers, new alternative sources of sensational information that cater to niche audiences, and social media. According to this story, the Internet in general and social media in particular reward telling audiences what they want to hear and undermining faith in existing institutions. A range of nefarious actors, from unscrupulous partisan media to foreign intelligence agencies, all benefit from algorithms designed to boost engagement, which end up serving misinformation tailored to specific audience demands. Traditional journalism, bound by ethics, has not been able to keep up.
The alternative narrative — put forward by Fox News, the populist fringes of the Left and the Right, Substackers of all sorts — holds almost the inverse. For decades, mainstream political discourse in America has been controlled by the chummy relationship between media, political, and economic elites. These actors, caught up in trading information, access, and influence with each other, fed the American people a thoroughly sanitized and limited picture of the world. But now, their dominance is being broken by the Internet, and all of the dirty laundry is being aired. In this view, “misinformation” and “conspiracy theory” are simply the establishment’s slanders for inconvenient truths it can no longer suppress. Whether it’s the Biden administration establishing a Disinformation Governance Board within the Department of Homeland Security or the New York Times’s Kevin Roose calling for a federal “reality czar,” the establishment is desperate to put the Humpty Dumpty of controlled consensus reality back together again.
As opposed as they seem, in fact both of these narratives are right, so far as they go. But neither of them captures the underlying truth. There is indeed a dark matter ripping the country apart, shredding our shared sense of reality and faith in our democratic government. But this dark matter is not misinformation, it isn’t conspiracy theories, and it isn’t the establishment, exactly. It is secrecy.
In discussions of online misinformation, one inevitably comes across some version of a ubiquitous quote by the late Senator Daniel Patrick Moynihan: “Everyone is entitled to his own opinion, but not to his own facts.” This is often mustered at the climax of some defense of “journalism” or “science” against “fake news.” But the bon mot is always tossed out without any interest in Moynihan himself. Defenders of the social importance, and the ongoing possibility, of a fact-based democratic culture would do well to consider how the quote holds up against one of the major preoccupations of this great legislator and intellectual.
The Moynihan quote captures an important dichotomy between facts and opinions, one that has been blurred by the rise of alternative media, the explosion of “news-commentary” in newspapers and on television, and an influencer-centric media economy. But at the same time, hanging like the sword of Damocles over our shared sense of reality, is an invisible and unspoken third category, one on which Moynihan became increasingly fixated.
Born in 1927, Moynihan spent most of his career in government, serving in the Navy, on the staff of New York Governor Averell Harriman, in the administrations of John F. Kennedy and Richard Nixon, as U.S. ambassador to India and then to the United Nations, and finally for a quarter-century as a U.S. Senator from New York. Late in his political career, in the 1990s, Moynihan became deeply concerned about government secrecy. Beyond particular worries about the legal and practical consequences of an explosion of classified documents, Moynihan believed that expansive secrecy was deleterious to our form of government. The 1997 Moynihan Secrecy Commission Report warned:
The failure to ensure timely access to government information, subject to carefully delineated exceptions, risks leaving the public uninformed of decisions of great consequence. As a result, there may be a heightened degree of cynicism and distrust of government, including in contexts far removed from the area in which the secrecy was maintained.
Writing before 9/11, Moynihan was especially worried that the rise of terrorism would feed the secrecy system that had grown like a cancer during the Cold War. “Secrecy responds first of all to the fear of conspiracy…. The United States will be best served by the largest possible degree of openness as to the nature of the threats we face. To do otherwise is to invite preoccupation with passing conspiracy.”
To see how secrecy upends the distinction between fact and opinion, consider the open letter that fifty-one former senior intelligence officials signed two weeks before the 2020 presidential election. The letter asserted that the New York Post’s reporting on damaging information from Hunter Biden’s laptop bore “all the classic earmarks of a Russian information operation.” Where did this letter fall on the opinion–fact spectrum? The signers explicitly acknowledged that they had no new facts to offer. At the same time, they implied that their authority on this matter stretched beyond mere opinion, even experienced and informed opinion. The trust that media organizations and social media platforms put in this letter — now known to have been organized by President Biden’s campaign behind the scenes — owed much to the expectation that people who signed the letter did so on the basis of an in-depth understanding of Russia’s information operations, which they could not fully share because it was secret.
Facts are statements that we accept as true because they are established by public processes that can be checked — think of the methods of science or traditional journalism. But then the opposite of a fact is not an opinion but a secret: a statement about reality that cannot, and in some cases must not, be verified through a public process. We can know a secret only if we’ve been initiated into an intentionally hidden social world. And if a secret’s hidden world remains inaccessible, then whether one accepts a secret as truthful depends entirely on the legitimacy of the secret-tellers, on how much we trust them.
But little can be kept secret forever. And when secrets become public, they burst into the visible realm of facts, in a display that can be profoundly unsettling.
This essay series has been arguing that one reason the American psyche is no longer in the grip of a single consensus narrative is that no single story can contain the massive number of facts that are generated and shared by automatic digital systems today. In this deluge, secrets have a special place. Trust in institutions can be drowned when corruption, incompetence, and malfeasance that were once hidden out of view inevitably become known. The most direct cause of the massive growth in American conspiracy thinking is the massive growth of bureaucratic secrecy since the start of the Cold War, together with the increasing impossibility of keeping secrets in a digital age.
It is hard to remember how submerged from public view the national security state was before 9/11. The intelligence community that came into being after World War II worked hard throughout the Cold War to make it that way. The CIA fought to stay out of the headlines (though not always successfully), and the missions of numerous national security organizations, sometimes even their very existence, were classified. The system of classifying documents as “top secret,” “secret,” or “confidential” was a midcentury innovation in American political life, and the population of people with security clearance hadn’t yet mushroomed to the late–Cold War size of over four million. Leaks were relatively rare, and often dealt with political and diplomatic matters rather than secret intelligence or military action, though there were notable exceptions. The New York Times’s lawyers cautioned it against publishing the Pentagon Papers in 1971 for good reason: President Nixon’s Department of Justice threatened a criminal prosecution under the Espionage Act.
Americans have always been paranoid, as Jesse Walker demonstrates in The United States of Paranoia — about the papal octopus, puppeteering Freemasons, mesmerized Mormons, infiltrating anarchists, clandestine Bolsheviks, brainwashed POWs, homegrown Muslim terrorists, and more. But this “paranoid style” — not limited to Left and Right fringes, as Richard Hofstadter suggested, but universal in American culture — was the product of the social realities of American political life, of messy pluralism meeting an open society.
At their best, America’s free institutions act as a counterweight to this paranoid tendency in American life. From the founding of the republic, most American leaders had seen secrecy as antithetical to the spirit of liberty, commerce, and debate that suffused our political institutions. Espionage occurred primarily through ad hoc and personal networks, and there were no secret institutions outside of wartime exigencies, after which they were promptly closed down.
In the new book The Declassification Engine, a thoroughly alarming history of American secrecy, the historian Matthew Connelly demonstrates how geopolitical circumstances together with technological developments overthrew these democratic ideals. Two technologies in particular — nuclear weapons and electromechanical cryptography — demanded permanent systems of informational control if the United States was not simply going to give them up (an absurd suggestion). More so even than the CIA’s covert action programs, it was these technologies and the bureaucracies that sprang up around them that formed the kernel of the secrecy state.
As the national security state grew, so did the number of activities that fell under the umbrella of secrecy. Expanding from “born secret” nuclear information and cryptography, the classification system grew to include aerial surveillance and then satellite photography, signals interception stations, covert operations, propaganda and political meddling, weapons programs, sensitive scientific information, and military special operations, not to mention analysis and senior discussions of foreign affairs. And, of course, there were secrets about secrets — background investigations, counterintelligence probes, encryption protocols, and black budgets. Meanwhile, concerns about foreign infiltration justified surveillance and infiltration of domestic political groups, especially in the New Left and civil rights movement, by not just the FBI but also defense intelligence and even the CIA (in violation of its charter). In many cases, even the existence of whole organizations was classified, as with the National Security Agency and the National Reconnaissance Office.
All of this sprang up from essentially nothing prior to the Second World War to constitute a vast and growing secret world by the first decade of the Cold War. There were not enough secure offices for those working in it and not enough filing cabinets to contain all the new secrets, even as hundreds of tons of classified documents were securely destroyed each year. All of this was hidden from the public eye like water behind a dam.
But the Watergate scandal, among other revelations of the early 1970s, finally broke the dam of secrecy and caused Congress to re-assert itself in an oversight role. In 1975, the Senate established the Church Committee to investigate secret abuses by the CIA, the NSA, and the FBI. The dirty laundry of the early Cold War began to come to light, and revelations emerged: of the CIA drugging American civilians (MK Ultra), withholding critical information from the Warren Commission investigating the JFK assassination, and plotting assassinations (Family Jewels); of the FBI harassing and spying on American activists (COINTELPRO); of the NSA wiretapping American citizens; of the military secretly and illegally bombing Cambodia; and similar abuses. Congress created new permanent select committees to actively oversee intelligence programs (inadvertently widening the circle of political actors with access) and passed an expanded Freedom of Information Act in 1974 over President Ford’s veto, all in the hopes of making the secret world of intelligence more democratically accountable.
In some ways, the Freedom of Information Act backfired. By creating a proactive public right to almost any kind of unclassified information, FOIA meant that the only way to conceal politically or bureaucratically sensitive information was to classify it. An estimated 50 million documents are classified each year. Moreover, as intelligence agencies have adopted the same kinds of digital systems everyone else has, the number of classified digital documents has exploded. Barack Obama’s presidential library alone contains an estimated 250 terabytes of digital data, three times the amount of George W. Bush’s — including an enormous number of classified documents that will require official review. The Public Interest Declassification Board, established by Congress to assure public access to unclassified information, worries that the current procedures will not be able to manage the deluge in the years to come.
For documents that did not meet classification criteria, federal bureaucracies decades ago invented the designation “For Official Use Only,” seeking to restrict public access to certain unclassified documents. To rein in this practice, the Obama administration came up with a more stringent “Controlled Unclassified Information” category. It took the Defense Department over a decade to formally comply, and in practice it never really has: the new label is little more than a replacement for the old one.
In the 1990s, the rise of Mandatory Declassification Review — a process by which one may request that any document be reviewed for possible declassification — along with the end of the Cold War, brought American national security institutions out of the shadows. Public pressure had been building for two decades. The earth-shattering revelations of failures and abuses beginning in the 1970s were followed by the declassification of closely guarded secrets. The late Carter administration first acknowledged the existence of stealth aircraft and spy satellites and oversaw the creation of secretive new military special operations units, which became increasingly prominent throughout the 1980s. And though the history of World War II code-breaking was revealed in a 1974 military memoir, F. W. Winterbotham’s The Ultra Secret, many of the details remained obscurely referenced in histories of computing and intelligence until the 1990s, amid growing public interest in the war’s history. In 1995, the CIA for the first time declassified pictures from the Corona satellite program used to spy on the Soviet Union, the same year that the NSA declassified the Venona program of top-secret cracked Soviet cables about Communist spies in the United States. Midcentury intelligence programs that were still nearly invisible in 1970 appeared, by the end of the century, to be massive in scope and historical influence.
The reorganization of the American national security state for the post–Cold War world began well before 9/11. The military and intelligence communities spun up new counterterrorism and special operations programs in the late Cold War as a response to the threat of terrorism, the proliferation of weapons of mass destruction, and the increasing willingness of state actors to use terrorist proxies rather than confront American power directly. Even as the demise of the USSR reduced the demand for traditional state-on-state espionage, it created a world of civil wars, powerful organized crime, rogue states, and other non-traditional threats. Unlike the “spy vs. spy” Cold War, in the era of growing terrorism it was difficult to understand or prioritize all the possible threats, and both liberal and neo-conservative American defense intellectuals began to consider what significantly expanding American presence abroad would look like. In 2000, CIA Predator drones, which had initially been used in the Balkans, began routinely overflying Afghanistan to keep tabs on the shadowy jihadist group Al Qaeda. Eventually, President Bush’s national security deputies met to finalize a “broad covert action program” that would have included the first-ever use of an armed drone for an assassination. That meeting took place on September 10, 2001, and the target of the planned assassination was Osama bin Laden.
September 11 unleashed a tidal wave of war spending. The last two decades of America’s War on Terror cost $8 trillion, including over a trillion on Homeland Security counterterrorism spending alone. Theories about global financial flows, narcoterrorism, and immigration meant that almost every element of federal law enforcement and intelligence grew larger and more secretive in the wake of 9/11. The number of Americans with security clearances ballooned from two million in early 2001 to almost five million within a decade, and the massive amounts and disparate kinds of data authorized for collection demanded new investments in automation, machine learning, and cloud computing.
As a drip-drip-drip of shocking revelations on a decades-long time lag inflicts a Chinese water torture on our political psyche, we have witnessed a slow but steady rise in paranoia and conspiracy thinking in American public life. All of this is the predictable outcome of a national security state of immense scale operating within a political system like ours, which leads to a clash between two essential principles — bureaucratic self-protection and democratic openness.
Senator Moynihan was clear-eyed that in a hypertrophied secrecy state, leaking would be a routine part of the system, the result of internal struggle among government actors whose aims would inevitably come into conflict. And yet even though we should expect it, it still undermines public trust:
The leaking of secrets has important consequences for the quality of information made available to the public, as well as for the ability to verify the information. Leaking creates a double standard that may, at times, pit political and career government officials against one another…. Leaks undermine the credibility of classification policies and other restrictions on access to information, making it harder to differentiate between secrecy that is needed to protect highly sensitive national security information and that which is not well-founded.
It is not just the public and the government who are caught up in the dialectic of secrets and leaks but journalists too, as they become complicit in the manipulation of the public square. Writes Jacob Siegel, “the more reliant the press has become on secrets and leaks, the more sycophantish and naive its attitude toward the security agencies.” And the more that changing business pressures on journalism squeeze out budgets for old-fashioned shoe-leather reporting, the worse this parasitical reliance gets. Official mistrust of the public winds up undermining public trust in establishment media.
Since the Manhattan Project, senior American leaders have realized that official secrecy creates a form of narrative control afforded nowhere else in our democratic system. In the immediate aftermath of the bombings of Hiroshima and Nagasaki, the only official information available to the public about the terrifying and extremely classified weapons was whatever General Leslie Groves decided to make available, which he did with a handy press packet. Since then, American presidents have found it convenient to disclose details of foreign policy successes, courageous and successful special operations, or powerful new weapons systems, especially in election years.
We might at least say that these kinds of leaks are often made in the service of leaders elected by the American people and pursuing agendas on our behalf. But just as often they are not. After the war, the Air Force, dissatisfied with its budget as President Eisenhower tried to rein in military spending, leaked highly misleading estimates of the “bomber gap” the United States faced. Eisenhower was apoplectic that leaders of the military branches conducted bureaucratic warfare by headline leaks, and threatened his top generals with an FBI investigation. Defense reformers like Les Aspin, secretary of defense to President Clinton, and Donald Rumsfeld, secretary to George W. Bush, found themselves repeatedly buffeted by leaks as part of what came to be called a “revolt of the generals.”
And, notably, President Trump, having been vehemently opposed by the national security establishment, faced repeated leaks throughout his time in office, with damaging, well-sourced stories appearing in major newspapers on the eve of any number of important policy shifts. For instance, as the administration prepared for withdrawal from Afghanistan in the summer of 2020, the New York Times reported that “American intelligence officials have concluded that a Russian military intelligence unit secretly offered bounties to Taliban-linked militants” for killing American soldiers. The story was remarkably flimsy, and the government later walked it back, revealing that intelligence agencies had actually “not found any evidence” to support it, and had only assessed it with “low to moderate confidence.” But the full walkback came only after the election.
While low-level officials often face unemployment and jail time for even unintentional disclosures of classified information, no senior American official has ever been jailed for violating secrecy laws or for leaking classified information. Even as the Obama administration used the Espionage Act to stymie politically inconvenient leaks from low-level intelligence officials, classified information about perceived successes against terrorists abroad was leaked to sympathetic journalists. No charges were filed against Hillary Clinton for maintaining an illegal server with classified emails, nor were any filed against President Joe Biden for his illegal retention and storage of classified documents. The federal indictment of former President Donald Trump for his own flagrant mishandling of classified materials after leaving office, filed in June, is a first for a former president.
As director of the CIA and former head of Central Command, General David Petraeus was caught sharing information on some of America’s most sensitive secrets with his biographer and lover, Paula Broadwell. While the FBI recommended felony charges, he eventually pled guilty to a single misdemeanor, before becoming head of a think tank at a private equity firm. Sandy Berger, President Bill Clinton’s former national security advisor, was caught stealing classified documents from the National Archives with the intent to hide potentially embarrassing details from the 9/11 Commission. His guilty plea (also to a misdemeanor) did not appear to have damaged business for Stonebridge International, the consulting company he co-founded after leaving office.
At the same time, Matthew Connelly and his team at the History Lab, a research center that maintains and analyzes a vast database of declassified government records, found that many of the most-redacted portions of certain documents — the secrets that federal agencies tried to keep the longest — were those that recorded embarrassing, incompetent, or illegal activity. The State Department hid that Winston Churchill believed that Franklin Roosevelt had conspired to allow Pearl Harbor to happen. The FBI had illegally wiretapped American civil rights activists, and its anonymous letter blackmailing Martin Luther King, Jr. and goading him to commit suicide was heavily redacted for decades; a complete version was discovered only in 2014. And the CIA has continued to fight the full declassification of records relating to the assassination of President John F. Kennedy as required by a special 1992 act of Congress. The CIA’s motivations appear to be not any legitimate security interest but the inevitable embarrassment and loss of trust that will follow further confirmation that the agency’s dealings with Lee Harvey Oswald were much more extensive than it admitted at the time.
Senator Moynihan would not have been surprised by all this malfeasance. How could it have been otherwise? The American system of republican government was expressly designed to contain and channel all-too-human ambitions, hubris, and foibles. “Ambition must be made to counteract ambition,” James Madison wrote in Federalist 51. And yet, the system of bureaucratic secrecy allows initiates to hide their errors, punish their enemies, disguise their motives, and avoid public scrutiny. Moynihan felt that, if American democracy was to survive, the expansive culture of secrecy had to be resisted: “It is time also to assert certain American fundamentals, foremost of which is the right to know what government is doing, and the corresponding ability to judge its performance.”
But fate was not to smile upon Moynihan’s call. As massive as the backlog of classified material was when the Secrecy Commission did its work in the 1990s, now that the Cold War was over and there was already a steady stream of unofficial leaks anyway, it was still possible to imagine a new era of open government. But Moynihan — who would die a few years later, six days into the American invasion of Iraq in 2003 — could not have imagined then the scale that secret government activity would reach during the Global War on Terrorism.
The purpose of government secrecy is to conceal information that would be damaging if it were publicly revealed: damaging not only to the national interest (the official purpose) but also to the bureaucratic interests of various organs of the government, to the private interests of its members, or to the political interests of the White House. But this system rarely ensures that secrets are never revealed. Government programs of almost any size leave a paper trail.
As Moynihan realized, this means that public knowledge of government wrongdoing cannot be eliminated by secrecy, it can only be delayed. The impact of some damaging event is publicly felt not when it occurs but when it is revealed. Ultimately, the secrecy system succeeds only in pushing this cost to some future date, making it someone else’s problem. Government secrets are thus a form of informational debt, of delaying the payment of political costs. The Cold War and the War on Terror created a massive such debt of secrecy, scandal, paranoia, and cynicism that is only now being repaid.
And yet, as with the national debt, the federal government has become addicted. The “vast secrecy system,” Moynihan lamented, “has become our characteristic mode of governance in the executive branch.” It balances its budget of political costs by siphoning them into the secrecy system, in much the same way that the energy giant Enron used off-balance-sheet entities in the 1990s to keep debt off its books and make the company seem more profitable than it was.
The informational debt of government secrecy has grown to such proportions that weird things are happening elsewhere in the informational system. The dark planet of secrecy is subtly pulling everything into its orbit — electoral politics, mainstream media, conspiracy theorists, even Hollywood entertainment. And stranger things yet will occur as secrets burst into the open in unexpected ways and at unexpected times.
In the Apocalypse of St. John, at the end of the world, the Lamb of God opens a sealed scroll: apokalupsis in Greek means an unveiling or disclosure. Media theorist James Poulos once pointed out to me that the causality between the unveiling and the end of the world goes in both directions. As the Apostle Paul says, the end of the world “will bring to light the hidden things of darkness, and will make manifest the counsels of the hearts.” At the same time, the revelation of secrets, especially those that lay bare the inner workings of power, generates a feeling of apocalypse. When a grave secret that upends everything you thought to be true is revealed, it can feel like the end of the world.
The national security state leaves a mighty, veiled signifier in the public sphere, onto which people may project their darkest fantasies. The state’s revealed secrets and the possibility of its hidden involvement are easily folded into any conspiracy narrative, in proportion to the public’s consciousness of the secrets. Like a Lovecraftian monster, the only thing worse than its lying hidden is its standing revealed in all its eldritch glory, driving people insane at its mere appearance.
Digital communications have made it possible to collect mountains of secret knowledge. But they have also facilitated the mass revelation of secrets. Since 9/11, this pattern of accumulation and leaking has occurred with increasing frequency, including the WikiLeaks cables, Edward Snowden’s revelations, the Panama Papers, DNC emails, IRS tax filings of the superwealthy, and more. These leaks destroy public trust, both by their mere occurrence and by what they reveal. But they also provide the raw material for building and, in a sense, “proving” conspiracy theories.
One substantial reason why many Americans believe in conspiracy theories is that the government conducts secret activities at massive scale at home and abroad and, indeed, that members of the government have verifiably lied to American citizens about those activities, both for ostensible national security reasons and to save political face. The growth of the national security state has led the normal functions of government to take on the form of a conspiracy. Consider a few demonstrably true fact patterns and ask yourself why any American would not find conspiracy theories plausible.
Analyses of conspiracy theories often miss the most remarkable fact about them: that a great many emerge adjacent to corresponding real intelligence activities or propaganda efforts. A small sample includes the Protocols of the Elders of Zion, a conspiratorial antisemitic document forged by the Tsarist secret police; UFOs, many sightings of which in the 1950s and ‘60s were later admitted to have been of secret U-2 airplanes and covered up by the Air Force; the JFK assassination, during the investigation of which the CIA repeatedly and publicly lied to cover up its covert activities in Latin America; rumors about the CIA inventing AIDS, which were KGB propaganda; paranoia about black helicopters, which in fact are used in American special operations urban exercises; Pizzagate and QAnon, which relied on hacks of Democratic leadership emails apparently by Russian intelligence; the Steele dossier and Russiagate, both of which involved activities of numerous intelligence operatives; and more.
This is not to suggest that the conspiracy theories are correct. Rather, as the world of secrets expands, the number of intersections (accidental and intended) that the secret world has with the world of facts grows. And these intersections, often badly understood and interpreted by outsiders, form the seeds of further conspiracy theories. Consider again the case of the CIA lying to the Warren Commission about its relationship to Lee Harvey Oswald. It did so, we can reasonably argue, not because of guilt but because of embarrassment. Oswald had been involved in fringe Left activities, lived in the Soviet Union, and engaged with Cuban revolutionaries. In the context of the Cold War, it would have been shocking if he had not had contact with the CIA. Similarly, the FBI and CIA knew a lot about the 9/11 attackers very quickly, because some of the attackers had come across the radars of U.S. agents or informants in prior years and because they were funded by Saudi and Pakistani networks the CIA was investigating. That an intelligence agency with global reach is involved in monitoring shadowy groups behind all kinds of sinister activity does not make it a puppet master. It means it is doing its job.
But this secret activity does come at an epistemic cost. Societies with lots of conspiracies and secrecy generate cynicism and a distrust of the “official narrative.” Just as it has done to the world of facts, the rise of digital computing has turbocharged this by greatly expanding individual access to secrets. Anyone can fire up WikiLeaks and spend hours trawling through tantalizingly secret leaked documents that can then be fitted into one’s preferred narratives.
Poring over databases of leaked documents, FOIA releases, or investigative reports is a shared activity for online political narrative-building of all kinds. Scouring through original documents, working collectively to “find clues” and decipher meanings, and gaining clout from esoteric discoveries give modern conspiracy theories a game-like character, something that is only possible because of the sheer mass of formerly hidden information available online. It was the leak of Clinton campaign chief John Podesta’s emails, allegedly by Russian military intelligence, that allowed online communities like 4chan to come together and develop the Pizzagate conspiracy theory, the seed around which the QAnon movement grew.
This relationship between government secrecy, leaked or released documents, and public revelations of government malfeasance has proven to be a powerful acid eating away at institutional trust. Over the past thirty years, there have repeatedly been “conspiracy theories” about opaque government activities at the margins of the sprawling national security complex, subject to vehement denunciations and official denials, that were later verified to be essentially true by documents leaked or released by the bureaucracies themselves. Most recently, Jacob Siegel has documented how the attempt to defend American discourse against disinformation and conspiracy theories was itself the product of unprecedented coordination between shadowy NGOs, social media companies, and high-level intelligence officials. Senator Moynihan was right that, at an epistemological and psychological level, democracy and secrecy are incompatible principles.
The psychic weight of secrecy in American life has grown heavier because of increasingly overt interventions by the hidden world into media and culture. The threat of post–Cold War budget cuts convinced the CIA that it needed to market itself to the American people. The first movie officially made with CIA cooperation, In the Company of Spies, debuted in 1999. In March 2001, CBS green-lit a drama called The Agency, made with unprecedented access to the CIA campus. The pilot episode, which was due to premiere at Langley on September 18, 2001, revolved around the race to foil a major Al Qaeda plot against the West. The premiere was pushed back, and the show went on to run for two seasons. The War on Terror saw an explosion of content made with the approval of the national security state. A 2017 piece in the Independent found that “on television … over 1,100 titles received Pentagon backing — 900 of them since 2005.”
The apotheosis of this marriage between Hollywood and the national security state was Zero Dark Thirty, released the year after the killing of Osama bin Laden. The aid that the Obama administration and the CIA gave to director Kathryn Bigelow became the subject of congressional investigations into the remarkable level of access to intelligence her team received. Journalist Naomi Wolf described it as “a gorgeously-shot, two-hour ad for keeping intelligence agents who committed crimes against Guantánamo prisoners out of jail.” Whatever one makes of the movie’s accuracy or fairness, it was produced in part out of a “perception management” operation by the White House and the CIA.
As sci-fi novelist Charles Stross predicted in his 2007 thriller Halting State, the intersection of entertainment and intelligence operations would render it impossible to distinguish between authentic and inauthentic cultural expression: in fact, it would obliterate the idea of a distinction. The revelations of clandestine activity that are crashing like waves with increasing frequency against our social psyche both illustrate and worsen this problem.
Ever since the CIA’s secret sponsorship of 1950s and ‘60s modernism in high art and literature first came to light in the late 1980s, Americans have been alive to the possibility that new movements, ideas, and aesthetics might be an “op.” The combination of leaks and delayed releases will only continue to pile up evidence of clandestine government interventions in all aspects of culture, art, entertainment, and advocacy in ways that undermine the notion of an independent civil society. As intelligence agencies from around the world accelerate their involvement in shaping the “global information environment,” especially in the American center of worldwide cultural production, it becomes increasingly naïve to deny the possibility. Participants in online message boards, especially around fringe topics, already joke about “fed-posting” and infiltration of their conversations. They’re not always really joking.
In Halting State, intelligence agencies make use of alternate reality games and online roleplaying games for their own purposes, including recruiting unwitting members of the public to spread information and carry out espionage activities under the guise of the game. The real cryptographic puzzle game Cicada 3301 seems as if it is linked to the hidden world. The co-founder and CEO of 42 Entertainment, the top production company for alternate reality games, began her engineering career in Lockheed Martin’s classified Skunk Works program. And whatever the real scale of actual intelligence activities, even the perception that feds and intelligence sockpuppets could be present heightens the stakes (and the romance) of any online political movement. Mysterious forces from the hidden world of intelligence are at battle, and anyone can join the #Resistance or help #StoptheSteal.
The impossibility of distinguishing “real” and manipulated online activity is only going to worsen. Earlier this year, to support his arguments about the war in Ukraine and to gain clout with his online friends, Airman First Class Jack Teixeira of the Massachusetts Air National Guard leaked dozens of top secret documents to a small chat server — exactly the kind of event predicted in another essay in this series, “Reality Is Just a Game Now” (Spring 2022). In the wake of this discovery, broadly described as the worst U.S. intelligence breach in a decade, conversation has turned to the need for federal counterintelligence officials to monitor and trawl online forums even more seriously than they already do. As Internet users pour ever larger amounts of attention, activity, and loyalty into online communities, we can expect the hidden world of intelligence to follow close behind.
However hard mainstream media, the Department of Homeland Security, and the growing crop of misinformation NGOs popping up like mushrooms on the damp log of foundation interest push back against the influence of conspiracy theories on American politics, their efforts will not be enough. For those policy wonks concerned about the demise of trust in American institutions, actually becoming honest and trustworthy does not seem to have entered into the picture. Only the fulfillment of reforms to the government secrecy system that serious critics from both political parties have demanded for fifty years, and a true recommitment to openness, can restore Americans’ faith in their institutions. ♠
Let me tell you two stories about the Internet. The first story is so familiar it hardly warrants retelling. It goes like this. The Internet is breaking the old powers of the state, the media, the church, and every other institution. It is even breaking society itself. By subjecting their helpless users to ever more potent algorithms to boost engagement, powerful platforms distort reality and disrupt our politics. YouTube radicalizes young men into misogynists. TikTok turns moderate progressives into Hamas supporters. Facebook boosts election denialism; or it censors stories doubting the safety of mRNA vaccines. On the world stage, the fate of nations hinges on whether Twitter promotes color revolutions, WeChat censors Hong Kong protesters, and Facebook ads boost the Brexit campaign. The platforms are producing a fractured society: diversity of opinion is running amok, consensus is dead.
The second story is very different. In the 2023 essay “The age of average,” Alex Murrell recounts a project undertaken in the 1990s by Russian artists Vitaly Komar and Alexander Melamid. The artists commissioned a public affairs firm to poll over a thousand Americans on their ideal painting: the colors they liked, the subjects they gravitated toward, and so forth. Using the aggregate data, the artists created a painting, and they repeated this procedure in a number of other countries, exhibiting the final collection as The People’s Choice. What they found, by and large, was not individual and national difference but the opposite: shocking uniformity — landscapes in a blue-hued palette, dotted with a few animals, human figures, and trees.
And it isn’t just paintings that are converging, Murrell argues. Car designs look more like each other than ever. Color is disappearing as most cars become white, gray, or black. From Sydney to Riyadh to Cleveland, an upscale coffee shop is more likely than ever to bear the same design features: reclaimed wood, hanging Edison bulbs, marble countertops. So is an Airbnb. Even celebrities increasingly look the same, with the rising ubiquity of “Instagram face” driven by cosmetic injectables and Photoshop touch-ups.
Murrell focuses on design, but the same trend holds elsewhere: Kirk Goldsberry, a basketball statistician, has shown that the top two hundred shot locations in the NBA today, which twenty years ago were spread across a wide area of the court, now form a narrow ring at the three-point line, with a dense cluster near the hoop. The less said about the sameness of pop melodies or Hollywood movies, the better.
As we approach the moment when all information everywhere from all time is available to everyone at once, what we find is not new artistic energy, not explosive diversity, but stifling sameness. Everything is converging — and it’s happening even as the power of the old monopolies and centralized tastemakers is broken up.
Are the powerful platforms now in charge? Or are the forces at work today something even bigger?
For decades after the First World War and the Russian Revolution, the profession of economics was roiled by a theoretical debate with enormous practical consequences. The question was whether an economy could be run by calculation, or whether it ran on something else entirely. During the war, each of the major combatants had engaged in massive economic mobilization, with varying levels of centralized planning of war production. Famously, after the war the revolutionary Soviet government instituted a centralized planning system. Would it work?
The first round of the socialist calculation debate, kicked off by Austrian economist Ludwig von Mises in 1920, argued that “rational economic activity is impossible in a socialist commonwealth,” because central planners had no mechanism to efficiently coordinate supply and demand. By contrast, market economies had a decentralized planner of a size and scope vastly more efficient than any computing power then available: the price system. Socialist economists took Mises’s argument in stride, on the one hand theorizing forms of decentralized planning called “market socialism,” and on the other developing new mathematical techniques to solve calculation problems, like the Nobel Prize–winning discovery of linear programming by Leonid Kantorovich. Whatever other challenges remained, calculation per se did not seem to pose an insuperable problem for economic planning.
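If the calculation problem sounds abstract, a toy example may help. The sketch below assumes SciPy’s off-the-shelf linprog solver and poses the kind of allocation problem Kantorovich’s linear programming handles; the two goods, two resources, and all the numbers are invented for illustration only.

```python
# A minimal sketch of a planner's allocation problem solved by linear
# programming. All quantities below are made up for illustration.
from scipy.optimize import linprog

# Maximize the value of production, 3*x1 + 2*x2.
# linprog minimizes, so we negate the objective coefficients.
c = [-3.0, -2.0]

# Scarce inputs available to the planner:
#   2*x1 + 1*x2 <= 100   (steel)
#   1*x1 + 3*x2 <= 90    (labor)
A_ub = [[2.0, 1.0],
        [1.0, 3.0]]
b_ub = [100.0, 90.0]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("production plan:", result.x)   # optimal quantities of each good
print("total value:", -result.fun)    # undo the sign flip on the objective
```

Solving a toy like this takes microseconds; the debate was over whether an entire economy could ever be reduced to such a calculation.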
In the 1945 essay “The Use of Knowledge in Society,” Mises’s student Friedrich von Hayek took the problem deeper than mere calculation. The fundamental barrier to central planning was not the decentralized distribution of desire or need but of knowledge. Market participants have unique local knowledge about the circumstances they face: the costs of making something, what they would accept as substitutes, their beliefs (right or wrong) about what other people might want. This knowledge was impossible to summarize and convey to a centralized planner, not least because it was in motion, “constantly communicated and acquired.” This knowledge operates not only through buying and selling but through making a prototype, viewing the available wares, shutting a business down, taking out a loan, and many more kinds of human activity. What makes markets efficient is not that they are better at arriving at a full accounting of supply and demand than a centralized planner, but that they never require anything like full knowledge in the first place, allowing decentralized coordination across actors who each only have partial knowledge of the whole. The old conceit of a market as an auction where buyers and sellers met at a single clearinghouse hid a social structure that was much more complex.
For Hayek, the market is not a price system or an auction. It is a network.
The Advanced Research Projects Agency Network, or ARPANET, was not the first computer network, but it was the most important. Built in the 1960s, it was the first to show what could happen if you emphasized the network rather than the computer.
During the birth pangs of the Information Revolution, computing power was precious. It was the era of the mainframe: massive, hulking computers the size of entire rooms, where various functions like working memory and disk space occupied separate cabinets. In this paradigm, networking was how you connected terminals, peripherals, or smaller computers to the coveted power of the mainframe. At a satellite location, you might work on a program or feed in some data from a terminal, but you needed to run it on the mainframe. And so did everyone else. This hub-and-spoke structure dictated everything. Network capacity was about the power of the mainframe, with computing resources metered like an electric utility. Inconsistent workloads meant that programmers used “batch processing” and multi-user time-sharing, running programs as computing resources became available.
The mainframe sat at the center of a closed system. To work, everything had to be keyed to its needs, including the programming languages you used and the compatible peripherals you attached.
It is sometimes said that the Advanced Research Projects Agency created ARPANET in order to provide for command and control in the case of nuclear war. While this use case helped motivate RAND Corporation researcher Paul Baran’s development of distributed communications theory, it didn’t have anything to do with the motivations and goals of the people actually building ARPANET. And ARPANET didn’t really solve any problems for the big research laboratories, who already had powerful mainframes and who expressed wariness about the network “stealing” computing time from them. The engineer who was first presented with the request to actually build ARPANET said, “I can’t see what one would want such a thing for.”
So what was it for? The aim of ARPANET was to revolutionize human communication. That was the vision of J.C.R. Licklider, who jokingly referred to it as an “Intergalactic Computer Network” in which programmers could access resources and people anywhere in the network. The idea was elaborated by Bob Taylor, who imagined how the communities of researchers that were then beginning to form around individual mainframes could one day form around entire computing networks. Together the two wrote the seminal 1968 article “The Computer as a Communication Device,” in which they predicted that “in a few years, men will be able to communicate more effectively through a machine than face to face.”
Community, self-organization, and the expansion of human consciousness were baked in from the start. That was why so many members of the California-centered Human Potential Movement became early enthusiasts and adopters of networked computing. The Stanford Research Institute formed one half of the first-ever ARPANET exchange. And the SRI group that worked on ARPANET, led by Douglas Engelbart, was initially called the Augmented Human Intellect Research Center. Senior leaders like Engelbart, not to mention almost all the junior computer engineers who worked for him, were fully immersed in the California counterculture, as John Markoff showed in his 2005 book What the Dormouse Said. The project was the computational equivalent of the counterculture’s interest in reorganizing society by breaking free of imposed constraints and social norms in favor of new practices.
You can draw a straight line from the 1966 LSD-soaked Trips Festival to the 1967 Summer of Love to Engelbart’s 1968 “Mother of All Demos,” a public demonstration of how new networking and interface technologies would revolutionize how people worked together. If you had to give that straight line a name, it would be Stewart Brand, founder of the Whole Earth Catalog. ARPANET was ultimately not about getting computers to talk to each other; it was about getting people to talk to each other, to collaborate and work together and organize across distances. Anyone could connect to anyone or any resource to build anything: the “incredible popularity and success of network mail” was the “largest single surprise” of the entire project, according to the project’s completion report.
To get this to work, you needed to go beyond systems that were built to communicate to systems that were designed to communicate. Before ARPANET, distributed computer networks, like those used by airlines or the military, were built for distinct purposes, using hardware from the same vendors, custom systems integration, and a final plan of what the network would look like and what it was for. If you tried to add new hardware to the network or took out a mainframe that other parts of the system depended on, the whole thing could break.
ARPANET researchers overcame numerous technical challenges to build a network with the opposite approach. Different kinds of computers, using a machine called a router, could talk to each other. Special algorithms allowed data to get to the right place without the need for a perfect map of the whole network, which was constantly changing. As long as you spoke in the same language, you could add new parts to the network without getting anyone’s permission. Take a node off-line, and the network routes around it.
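A toy sketch makes the principle concrete. This is my own illustration, not ARPANET’s actual routing algorithm: a path is computed over whatever links currently exist, and when a node disappears, traffic simply finds another way.

```python
# A toy network where no node holds a perfect map: a route is discovered
# hop by hop over the current link table, and failures are routed around.
from collections import deque

def find_route(links, src, dst):
    """Breadth-first search over the links that exist right now."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route at the moment

# Four nodes with redundant links (names are arbitrary).
links = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(find_route(links, "A", "D"))        # ['A', 'B', 'D']

# Take node B off-line: the network routes around it.
degraded = {n: [m for m in ms if m != "B"] for n, ms in links.items() if n != "B"}
print(find_route(degraded, "A", "D"))     # ['A', 'C', 'D']
```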
To describe the kind of communications required to get the network to function like this, researchers borrowed a term previously used for social etiquette or diplomatic convention. They called the grammar that computers used to talk to each other a protocol. Each protocol would consist of a formal procedure, a standard for interacting with a system, which anyone could adopt. For instance, just as mailing addresses have their own protocol, ARPANET would create protocols for addressing objects in the network. In the counterculture-inspired vision of Engelbart and his hackers, protocols would be developed and maintained by the community of users, open for anyone and free to license.
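What does such a grammar look like in practice? Here is an invented, minimal example: a single line of text carrying a verb, an address, and an optional payload, which any machine could learn to parse regardless of who built it. The verbs and address format are my own placeholders, not a real ARPANET protocol.

```python
# A toy wire protocol: "VERB host/path [payload]".
def parse_message(line: str) -> dict:
    """Parse one line of the toy protocol into its named parts."""
    parts = line.strip().split(" ", 2)
    verb, address = parts[0], parts[1]
    payload = parts[2] if len(parts) > 2 else None
    if verb not in {"GET", "PUT"}:
        raise ValueError(f"unknown verb: {verb}")
    host, _, path = address.partition("/")
    return {"verb": verb, "host": host, "path": "/" + path, "payload": payload}

# Any machine that speaks this grammar can join the conversation,
# no matter who built it or what it runs on.
print(parse_message("GET host42/files/readme"))
print(parse_message("PUT host42/files/readme hello"))
```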
The vendor-driven computer systems beloved by the men in gray flannel suits got things to work by handcuffing the user: to specific hardware, specific computer languages, specific rules. The ARPANET vision of networked computers was of computing unshackled, as portrayed most powerfully in Apple’s iconoclastic 1984 Super Bowl ad, with Orwell’s centralized dystopia literally demolished on screen. You are totally free to build on top of the protocol, or to extend it in different ways. It’s a carrot, not a stick. The reason to constrain yourself to the protocol standards is the power of building something that works with everything else that does the same. The protocol wasn’t just a useful software invention — it was a worldview.
The problem was scale. When you expand anything — a factory, a railroad, a community, a democracy — to a certain size, communication can break down in surprising ways. The sheer complexity of interrelationships and interdependencies becomes impossible to keep track of. This has always been the case, and new organizational technologies — the file cabinet, the mimeograph, the punch-card tabulator — have always been developed to help keep up with the deluge. But even in the era of mainframe computers, the complexity and amount of data began to outstrip the ability of any one decision-maker to make sense of it all. As Licklider and Taylor had put it in their 1968 article, “society rightly distrusts the modeling done by a single mind.” It was in the 1970s that the word “scalable,” in the sense of a system you can enlarge without breaking it, appears to have entered into the English lexicon.
Or maybe the complexity was always there, and it was just that modern computers gave us the tools to notice it with the right data, to see how the butterfly flapping its wings caused the hurricane. After all, the meteorologist and mathematician Edward Norton Lorenz conceptualized the “Butterfly Effect” not on a chalkboard but when he re-entered slightly rounded-off initial conditions into a weather simulation and found a shockingly different result.
In the 1970s, two trends combined to shape the zeitgeist: sophisticated computer simulations of complex systems and ecological thinking driven by a sense that everything was connected — a realization fueled variously by atmospheric nuclear weapons testing, consciousness-boosting LSD trips, and the first pictures of the whole Earth from outer space. Thinking about inputs and outputs like a factory assembly line was out. Holistic thinking about feedback loops and emergent properties was in.
And it came with a new computing paradigm, too: cellular automata. If you tried to create a whole system all at once, God’s-eye-view-linear-programming-style, even the largest mainframe computers would choke on just a few variables. But you could imitate much more complex systems — like cities, rainforests, or weather patterns — using only a few parameters. In the 1970s, British mathematician John Conway’s Game of Life showed the way. Technically known as a cellular automaton, the Game of Life is essentially a large game of tic-tac-toe that plays itself. Create a grid of cells that are either filled or empty, give each cell simple rules for how it changes based on its neighbors, and complex patterns emerge.
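A few lines of code are enough to see the principle at work. The sketch below implements the standard Game of Life rules; the starting pattern is a “glider,” a little constellation of cells that, under the rules alone, ends up crawling across the grid on its own.

```python
# Conway's Game of Life on an unbounded grid of (x, y) coordinates.
from collections import Counter

def step(live_cells):
    """Advance one generation of the Game of Life."""
    # Count how many live neighbors every nearby cell has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 neighbors,
    # or if it is already alive and has exactly 2.
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A "glider": five filled cells that travel across the grid by themselves.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    print(generation, sorted(cells))
    cells = step(cells)
```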
Bring this new computing paradigm together with books like Jay Forrester’s Urban Dynamics (which used computer simulations to model cities) and Jane Jacobs’s The Death and Life of Great American Cities (an attack on the linear modernism of urban planning, focusing instead on the city as an organic system) and you get a new, and addictively fun, way of making sense of the world: the simulation video game.
Will Wright’s 1989 game SimCity allowed players to design and manage their own virtual cities, dealing with everything from city budgets and infrastructure to disasters. The challenge came from each of the underlying systems shaping the others in unpredictable ways. Summon an off-brand Godzilla to maraud through your city, and watch the housing density pattern and the road network change in the subsequent re-development.
Emergent properties, ecological thinking, self-organizing systems, complex interdependence — the whole paradigm is there on screen, re-wiring not only the virtual city but the player’s view of the world.
Sometime around 2002, Jeff Bezos issued a mandate that would lay the foundation for Amazon to become one of the biggest companies in the world.
Amazon was growing like gangbusters and re-investing all of its profits into growing more. Having survived the dot-com bust, the company now found itself at the forefront of e‑commerce just as a majority of American adults logged on to the Internet.
Amazon discovered that you could not run a company at the scale of the global Internet the way you ran a normal company. At that scale — not only of users, but of data, of speed, of items for sale, and of revenue — it was easy for things to break.
Bezos’s mandate was designed to force every team, every product manager, every engineer to build for scale. And it had some extraordinary second-order consequences. The mandate was immortalized by former Amazon software engineer Steve Yegge, who, after going to Google, was trying to explain why Amazon was in many ways a more successful company. He thought the mandate held part of the answer.
Bezos’s earth-shattering mandate, as remembered by Yegge, went like this:
1. All teams will henceforth expose their data and functionality through service interfaces.
2. Teams must communicate with each other through these interfaces.
3. There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
4. It doesn’t matter what technology they use.
5. All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
6. Anyone who doesn’t do this will be fired.
7. Thank you; have a nice day!
It isn’t necessary to understand the technical details, or why issuing this mandate in the dial-up Internet era was, in Yegge’s words, a “huge and eye-bulgingly ponderous” act. To simplify, traditional software teams would build new features that hooked into existing programs. If you wanted to allow users to subscribe to a product, you might pull their address information from an existing database and build your new subscription software on top of existing programs that allowed you to charge a user’s credit card. This approach is resource-efficient, but it creates dependencies, obvious and not-so-obvious ways in which new programs rely on older ones. With Bezos’s mandate, the Amazon teams were forbidden to do any of that. Each program needed to run entirely on its own, hooking in to other Amazon services only by sending them a defined set of inputs and receiving and reacting to a defined set of outputs. That is what a service interface means.
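To make the distinction concrete, here is a rough sketch, with invented names and an invented internal endpoint rather than Amazon’s actual code, of what calling another team through a service interface looks like, versus reaching directly into its database.

```python
# Calling a (hypothetical) payments service through its interface: the caller
# knows only the defined inputs and outputs, nothing about the implementation.
import json
from urllib import request

def charge_customer(customer_id: str, amount_cents: int) -> dict:
    """Send a charge request to the payments service and return its response."""
    payload = json.dumps({"customer_id": customer_id, "amount_cents": amount_cents})
    req = request.Request(
        "https://payments.internal.example.com/v1/charges",  # hypothetical endpoint
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# The forbidden alternative under the mandate would look something like
#   row = payments_db.execute("SELECT * FROM cards WHERE customer_id = ...")
# which silently couples this program to another team's internal schema.
```

The point is the dependency structure, not the syntax: the payments team can rewrite everything behind that interface without breaking anyone who calls it.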
By analogy, imagine if you took a restaurant kitchen and made each station its own mini-business. Making a hamburger would mean buying the raw meat from the refrigerator, paying the griddle to take the meat and receiving a cooked patty in return, then paying the condiment station, and so on.
At an individual scale, this would be insane. But the traditional model breaks down as you make it larger and larger, scaling to millions of hamburgers in thousands of locations. There’s a reason most restaurants don’t slaughter their own cattle. At increasing scale, success depends on your ability to balance across a supply network, route around bottlenecks or breakdowns, and solve problems in a decentralized way. It looks like a market. It looks like a network.
Have extra server capacity? Let anybody purchase it (Amazon Web Services). Building a warehouse infrastructure? Let anybody use it (Fulfillment by Amazon). Have a shipping service? Let anybody deliver through it (Amazon Shipping). But the new businesses Amazon created only scratch the surface of the new kind of organization the company achieved. Amazon transformed every aspect of its business from the logic of mainframe computing to the logic of networked computing, and it did so by requiring every part of its business to communicate in protocols.
Conway’s Law, named for the programmer Melvin Conway (no relation to the Game of Life’s John Conway), says that organizations build systems that mirror their own communication structures. In order to match his ambitions, Bezos had to reorganize Amazon for global scale. The mandate made Amazon into a business shaped like the Internet.
While they don’t agree on much else, critics and champions of contemporary capitalism share an assessment of the most important transformation of late-twentieth-century economics. The political economy of the mid-century, particularly in America and Europe, had been characterized by trends toward social democracy, environmental conservation, regulation of labor practices, and rising income taxes. Compared to the era before World War I, there were higher levels of tariffs and trade protectionism, more barriers to international investment and financial flows, and lower levels of international migration. In the 1970s and ‘80s, faced with straining government budgets, stagnating growth, inflation, and other economic problems, policymakers looked for a new paradigm.
The program they turned to is often called “neoliberalism,” and it is usually described as a governmental withdrawal from many fields of activity in favor of a revitalization of free-market thinking and an extension of the logic of decentralized coordination to ever more areas of life. The locus of political power began to shift from legislatures, which are easily gridlocked, to regulatory bodies, public–private partnerships, and independent central banks. Political life in the most powerful states was supra-nationalized in institutions like the European Union and the World Trade Organization to match the scale of these states’ power, while small states faced pressure to adopt the set of fiscal and trade policies that came to be known as the “Washington Consensus.” Between the 1970s and the 2000s, neoliberalism remade the global political economy and reshaped almost every society in the world.
However, as Quinn Slobodian demonstrates in his 2018 book Globalists, when you descend from the theory to the practice of neoliberalism, the dominant action is not the removal or withdrawal of government interference but rather the imposition of new tools of governance to actively impede political interference, while making possible ever more fluid movements of labor, capital, and trade. His account of the Geneva School of neoliberalism traces the activities not only of thinkers like Friedrich Hayek and Milton Friedman but of lesser-known actors like international lawyer Ernst-Ulrich Petersmann, who advised organizations like the U.N., the Organization for Economic Co-operation and Development, the European Commission, and the World Trade Organization.
Rather than simply reducing the size and scope of government, neoliberalism invented new tools of governance. There was comparatively less emphasis on executive governance or legislative contestation. Instead, policymakers “set the agenda” through regulations, rulings, standards, ratings, and best practices defined by new metrics and reports. These changes would be issued not by diktat but in coordination with “stakeholders,” who were expected to be active participants in their own governance. Market and social actors would be set free from political control, in exchange for participating in new forms of political oversight to manage the tidal wave of dynamism.
For example, neoliberalism is often described as lowering barriers to global trade. But high tariff rates or protectionist quotas were far from the most important impediments to trade. The biggest barrier to trade was communication: the jumbled assortment of local rules, practices, and laws would-be merchants had to navigate. On the ground, lowering barriers to trade actually looked like creating shared protocols governing every part of the trading process: international air cargo handling (the Cargo Services Conference Resolutions), the size and shape of shipping containers (ISO 668), the customs classification of goods (the Harmonized Commodity Description and Coding System), financial reporting and accounting (International Financial Reporting Standards), and far more. Often, these were not even set or mandated by governments: international organizations and trade associations developed their own standards, maintained by technical committees and published for anyone to use.
The deregulatory agenda of political leaders like Margaret Thatcher, Ronald Reagan, and Deng Xiaoping only cleared the way for neoliberalism’s real power: designing the world economic system for openness through shared protocols. When seen through this lens, it seems that larger forces even than the Reagan Revolution were at work.
After taking LSD in a Southern California desert in 1975, the French historian Michel Foucault developed a fascination with neoliberalism that has puzzled many as shockingly uncritical for a thinker who had made his name tearing off the masks that new forms of power used in order to conceal themselves throughout history. The kind of power that neoliberalism could wield seemed curiously invisible to him. He understood neoliberalism as a “technology of the environment” that incentivized people to behave in certain ways by shaping their economic situation. Compared to previous epochs of power, it was a “massive withdrawal with regard to the normative-disciplinary system.” As sociologist Daniel Zamora has put it, Foucault “understands neoliberalism not as the withdrawal of the state, but as the withdrawal of its techniques of subjection.”
Foucault’s friend and contemporary Gilles Deleuze tried to put a finger on what Foucault was missing. In his “Postscript on the Societies of Control,” Deleuze identified the new mode of power that was growing in the West. The old disciplinary societies that had enclosed their wards in different systems — the school, the prison, the factory, the hospital, the army — were giving way to more flexible societies, what he called “societies of control.”
In this new kind of society, control mechanisms steer us gently at all times, acting not by pushing (the stick) but by pulling — bringing new information, new models, new desires to our attention (the carrot). Crucially, the new control society presents its power as choice. You are free to choose to do what you want; the system just provides you with information, and tracks (or surveils) your choices. In the words of philosopher Byung-Chul Han, it seeks “to please and fulfil, not to repress.” The core technologies of the disciplinary society, Deleuze explained, were for containing and releasing energy — think steam engines, railways, and factories. But in the control society, the core technology is the networked computer, which is for continuously gathering data and imposing control by numbers.
With this shift, our sense of self changed too. Deleuze compared the self of the disciplinary society to a mole, which burrows in and then makes itself comfortable within the bounds of the enclosures to which it is subjected. But in a control society, the self is more like a snake, an undulating project moving from one state to another, never quite at rest, always getting ready to shed its skin at the next stage of self-becoming.
The secret to how power operates today is that it looks like freedom. The control society uses data to build everyone a customized choice architecture in which the “rational” move, the “optimized” move, is always more: do more, work more, buy more, know more, scroll more, sleep more, relax more. The openness and positivity of the control society — giving you more choices, more options, more information, more efficiency — becomes a form of power.
It’s not obvious, but the secret sauce of the control society is the protocol. You would never be able to pull together all the data, make sense of it, and create the architecture of “more” in a centralized fashion. But open protocols allow information, desire, and everything else to flow to where it is needed. They allow all sorts of people to try all sorts of things. Many protocols fail, but the overall effect is to create a precise simulation of every social desire, “spontaneous order” not just for marketplaces but for everything. Like the Internet, the control society has something for everybody.
Here is what I mean. Let’s say I’m streaming Agatha Christie’s Poirot and I become intrigued by a fountain pen wielded by Sir David Suchet’s dashing Belgian detective. From this first little nub of desire, I search Google to learn about the pen, finding a highly up-voted post on the r/fountainpens subreddit with more information and a link to an online pen store. When I click it, marketing tools like Meta Pixel flag my interest. Later, as I scroll Instagram, I start to see more posts featuring fountain pens, and I start following a few fountain pen influencer accounts. One day, I see a pen I love, and purchase it directly from a store, supplying my email address for a discount. The more I press on in this direction, the more fountain pen content — not just advertisements but posts, articles, memes, and so on — flows to me. None of this is “designed” by a Big Pen cartel: rather, open protocols connect a network of actors with their own goals and incentives — Redditors, pen obsessives, manufacturers, online pen shops, ad tech companies — that “spontaneously” hook in to and meet my desires. (It goes without saying that, beneath this example, there are thousands of technical protocols operating my web browser, Netflix, the payments system, the luxury pen supply chain, and so forth.)
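The machinery behind that first flag of interest is mundane. Below is a deliberately simplified sketch, in Python, of the kind of tracking “pixel” request a marketing tag fires when a page is visited. The endpoint, parameter names, and identifiers are all invented; real tools like Meta Pixel use their own formats, but the spirit is the same: a tiny request that attaches an anonymous visitor ID to an event.

```python
# A simplified, hypothetical sketch of a tracking "pixel": the pen store's page
# triggers a tiny request back to an ad-tech service, carrying context about
# the visit in the query string. Endpoint and parameter names are invented.
from urllib.parse import urlencode

def build_pixel_url(visitor_id: str, event: str, page: str) -> str:
    params = {
        "vid": visitor_id,   # cookie- or fingerprint-derived visitor ID
        "ev": event,         # e.g. "PageView" or "AddToCart"
        "url": page,         # the page where the interest was expressed
    }
    return "https://tracker.example.com/pixel?" + urlencode(params)

# Clicking through from Reddit to the pen store fires something like this:
print(build_pixel_url("a1b2c3", "PageView", "https://pens.example.com/fountain"))
# The same visitor ID later lets ad networks bid to put fountain pen content
# in front of me on other platforms, with no central designer coordinating it.
```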
The result is what philosopher Antón Barba-Kay describes, in the title of his 2023 book, as “a web of our own making.” Because all that the control society does is offer us choices — albeit ones optimized for our desires — we hold ourselves responsible for them, at a limbic level, even as we are increasingly surrounded by a super-stimulus system engineered to fulfill them. Deny it if you like, but the TikTok algorithm knows your inward thoughts. Nobody made you linger over that video. And whose fault is it if you DoorDash McDonald’s at midnight? Nobody made you do it. If you really wanted to you could abstain, just as if you really wanted to you could hit the StairMaster at the gym. Hopelessly scrolling through Instagram? No one is making you. No matter which direction you want to go in the network of desire, the choice is yours, and the protocol will help you plug in to the businesses, influencers, ideas, and communities that will meet your wants. And if you’re not happy with the existing market offerings? The protocol means that you and anyone else can make your own. The choice is yours.
This liquidity and openness also underlies neoliberalism’s paradoxical narrowness — that this world of radical choice results in convergence on the same ideas, platforms, aesthetics, and products. Margaret Thatcher’s slogan “There Is No Alternative” is the natural outworking of a system where the most optimized, the most popular, the most viral, the most efficient anything can be known with objective certainty. The global Airbnb aesthetic, the moneyball three-pointer, the Marvel movie, the paintings of The People’s Choice: these are not imposed by any cabal; they are the mathematical average of actualized desire, the calculable outworking of information flowing freely. They will be displaced not by some authentic vision, but merely by the next algorithmic average.
The networked computer imposes what the Thatcherites liked to call “market discipline” over everything: the ever-present possibility of users switching to a superior offering means that even monopolists can’t rest on the laurels of network effects for long. The only way for platforms to maintain their power, in the long run, is to anticipate and pre-emptively adapt to their competition. When TikTok builds its superior feed, every other social media platform must TikTok-ify itself or get left in the dust. Western Union finds itself competing not just against banks but against payment platforms, fintech startups, and cryptocurrencies.
Jeff Bezos gets credited for the line “your margin is my opportunity,” but this is really the protocol speaking. John Gilmore, an Electronic Frontier Foundation co-founder and Internet protocol creator extraordinaire, once boasted that “The Net interprets censorship as damage and routes around it.” Swap out “censorship” for “rentier profits,” “political correctness,” “outdated systems,” “good manners,” “boredom,” or any other barrier to efficiency or desire, and you get a sense of the shaping power of a protocol society.
How do you wield power in the world of the protocol? The contours and stratagems of protocol power are in some cases so alien as to not register as a form of politics at all.
The exercise of power begins in the design of the protocol itself. Any protocol will strike a balance between breadth and depth, depending on the problem it is trying to solve and the stakeholders it is intended to serve. Who designs the protocol is often a contested question: some emerge almost organically from within a community (think of the rise of the hashtag), others from a technical committee of interested parties (like those of the World Wide Web Consortium), and some from a party that has designed a protocol from scratch and releases it to the world, seeking its broader adoption. In some cases, as with the videocassette format battle between Sony’s Betamax and JVC’s VHS, particular companies or actors stand to benefit from the adoption of one protocol over another.
The most powerful element of protocol design, though, is not this or that engineering choice but the winners and losers the protocol creates by its mere existence. Who is rendered “below the API” — that is, replaceable by automation? Who faces a glut of new competitors, or a glut of new customers? Who can organize or communicate that could not do so before? In the age of the protocol, groups attempt to protect their interests by controlling or even forbidding the construction of protocols that would harm them, or by building protocols that undermine their competitors. So, for instance, lawyers for Uber and Lyft helped to dismantle the regulations that sustained taxi guilds all over the world, but would never do anything to harm their own profession’s unique privileges.
The power of protocols comes from what economists call “network effects”: the more people use a protocol, the more valuable it becomes. When, almost as if by Darwinian selection, one protocol has emerged as the universal choice, it can be very difficult to move away from it. While many powerful forces may work to establish protocols beneficial to their interests, these network effects are not the product of a decision. They come from the incentives that everyone faces, as a stone is held in place by its own weight.
We usually think about this effect in terms of the scale of a network, but every network in fact has a particular structure, and these structures tend to be highly sticky. Path dependency means that those who win early win more. The difference between Detroit and Cleveland in American automotive manufacturing, or between Palo Alto and Pasadena in high-tech electronics, emerged from a small number of early advantages that slightly tilted the playing field toward one over the other.
Whatever their cause — early adoption, favor by an algorithm — one of the emergent properties of a protocol is that it will bless some and not others with network centrality. Some will become uniquely connected or uniquely well-positioned, often in a manner subtle or even invisible to outsiders.
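This dynamic is easy to see in a toy model. The sketch below uses Python and the networkx library (assumed to be available) to grow a network by preferential attachment, in which each newcomer prefers to link to nodes that are already well connected. A handful of early nodes end up as hubs with outsized centrality while most sit at the periphery; the numbers are illustrative, not a claim about any real network.

```python
# Toy model of compounding network advantage, using the networkx library.
# Preferential attachment: each new node links to already-popular nodes,
# so a few early arrivals become hubs while most remain peripheral.
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)

# Betweenness centrality: how often a node sits on shortest paths between
# others, a rough proxy for being "uniquely well-positioned" in the network.
centrality = nx.betweenness_centrality(G)
hubs = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:5]
print("Most central nodes:", [node for node, _ in hubs])

# Share of all connections held by the ten best-connected nodes (1% of nodes).
degrees = sorted((d for _, d in G.degree()), reverse=True)
print("Link share of top 1%:", round(sum(degrees[:10]) / (2 * G.number_of_edges()), 3))
```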
To early Internet thinkers like Kevin Kelly and Manuel Castells, the distinctive political formation the network made possible was the “swarm” or the “crowd.” This kind of decentralized, emergent coordination is characteristic of open systems in which the independent incentives each individual faces can lead to unexpected synchronicities, or focal points toward which everything suddenly rushes, often overwhelmingly so. “Going viral” is functionally the same as experiencing a distributed denial of service attack. Color revolutions, fashion fads, flash mobs, meme stocks, and moral panics all have the same structure. The bigger the network, the more open the structure, the more potential a protocol has to generate a swarm. As Kelly wrote, “An animated swarm is reticulating the surface of the planet. We are clothing the globe with a network society.” (Or, in the words of Marc Andreessen, “software is eating the world.”)
The most important feature of the swarm is what it is not: it is not a “we,” a movement or a community that one joins. Its constituent members may not even be aware that they are acting collectively and may have quite different incentives and goals. If anything, the swarm seems to have an alien will, a collective direction that may be quite at odds with the beliefs and desires of any individual within it — literary theorist René Girard identified the swarm with the Satanic.
At the same time, most swarms are not truly leaderless. In Girard’s analysis of scapegoating in I See Satan Fall Like Lightning, he fixates on the story of Apollonius of Tyana targeting a beggar. The magic of Apollonius’ action is in goading and prodding the right people behind the scenes to create the cascade of emotions and actions that generates a stoning mob. We see the same thing at work in “cancellations” today — powerful influencers at the periphery who have mastered the art of generating and directing the swarm.
Many protocols are not totally decentralized or voluntary. They may rely on some discrete platform to provide fundamental infrastructure, whether it is software that makes the protocol accessible as a service, like Twitter or Uber, or something more foundational, like the Internet service providers that actually connect users to the Internet.
Platforms have a unique ability to exert power over the protocol using artificial limits. They can ban users, block payments, censor posts, blacklist IP addresses, halt shipments, or otherwise impose restraints not found in the protocols themselves.
The most powerful example is what political scientists Henry Farrell and Abraham Newman have dubbed “weaponized interdependence”: the ability of sovereign states to leverage control over key chokepoints in global networks to exclude adversaries and protect their interests. The United States, for instance, uses its powerful control over banking protocols like SWIFT and critical technology in the global supply chain to levy sanctions on Russia and hamper Chinese GPU development.
Carl Schmitt’s famous dictum that “Sovereign is he who decides on the exception” gains a new meaning in the age of the protocol. Unlike other kinds of protocol power, platform sovereignty is nakedly political, one of the reasons why many of the criticisms of Big Tech companies focus on their abuse of this power.
That said, there is no free lunch. Some protocols — think Bitcoin or BitTorrent — are designed to escape command. And the network interprets coercion as damage and routes around it: as platforms abuse their power, the network calls forth alternatives, workarounds, and new protocols to escape their control.
Public discourse has fixated on the rare and ineffectual exercises of platform sovereignty — who is banned, which posts are censored, which countries are sanctioned — because in the age of the protocol they are the only exercises of power still familiar to us.
In every other regard, we find ourselves bewildered. If politics is about the question “who decides,” protocols are profoundly anti-political. No one decides. No one is in charge. At the same time that every individual faces more choice, more freedom, more optionality, we find ourselves in a society characterized by no agency, no accountability, no center, no one to hold responsible.
In his new book The Unaccountability Machine, Dan Davies describes the emergence of “accountability sinks” in complex systems, where a decision is invisibly delegated to a policy or a computer system such that no human appears responsible or “in charge.” But while some of the systems Davies examines are the result of bad design or even a malicious attempt to avoid responsibility, in the age of protocols we can also expect accountability sinks to develop automatically, as an emergent phenomenon. When it is no one’s duty to take care of the whole, no one can be held responsible for what falls through the cracks.
Things are supposed to work. And if they don’t, whose fault is that? Everywhere, the political shall has been replaced by the economic should.
High demand from riders should get more ride-share drivers out on a Friday night. Draconian Covid policies in China should get entrepreneurs in Vietnam and Thailand to set up alternative supply chains. Your Bluetooth headset should automatically connect to your laptop. Your DoorDash delivery should be placed, complete and intact, on your doorstep.
But when these outcomes fail to materialize, who exactly is to blame? Whose job, exactly, is it to remedy the situation? And the more decentralized, scalable, and disintermediated the protocol, the more agency and responsibility evaporate into the ether. In a decentralized system, agency becomes invisible. And as complexity grows, a system totally transparent in its processes becomes totally opaque in its governance. Because the process rules, power flows to those who have mastered it: those who know the process, who can change the process, who can create an exception to the process, who can direct the attention of the process.
This is why we suspect that those who most loudly proclaim they are “following the science,” “following the process,” or “following the markets” are actually engaged in elaborate forms of ventriloquy. We can sense that power is operating. We believe we can tell when we are being disadvantaged and others advantaged. We believe we can sense — in the direction of the swarm, in the outputs of the algorithm, in when the protocol does or does not deliver the goods — some hidden hand, some force behind the scene. We experience growing paranoia about manipulation, and the growing reality of manipulation, in almost no relation to each other. We know that nothing that channels so much power and wealth, on which so much depends, can ever escape politics. But we cannot glimpse the operations of power: the protocol demands that the most effective exercises of power be the most invisible.
Protocol politics is fundamentally characterized by acephalousness — no head, no agency, no accountability. And yet we feel the ghost in the machine, the power that shapes the contours of our lives, even as we can almost never pin it down. As recounted in the last entry in this essay series, “An America of Secrets” [Summer 2023], we occasionally catch power after the fact thanks to transparency laws or government leaks. These serve not to bolster the legitimacy of the protocol but to make us wonder about all the things we missed.
Paranoia induced by our increasingly formless experience of power is a key factor driving dreampolitik in the United States. That it takes different forms on the American left and right owes largely to a difference in how each encounters the power of the protocol, leading to what Matt Yglesias has dubbed “The Crank Realignment”: anti-establishment conspiracy theorists have migrated from the left to the right over the past two decades.
In that same period, the American left has become increasingly dominated by the Professional Managerial Class of college-educated, white-collar workers. This class vanguard understands far better than any on the right how to work the protocol levers of modern society. They know how to make adjustments deep in the infrastructural underbelly of modern organizations.
It is the failure of these methods to solve society’s ills that is now generating a crisis of faith and a search for new approaches among policy elites, and an attendant zealous cult among activists and die-hards that wants to double down on the protocol society. In areas as disparate as primary school education outcomes, working-class life expectancy, criminal justice, and trade liberalization, much-hyped reforms to antiquated government programs or policies — think Obamacare, NAFTA, or Arne Duncan’s education programs — failed to turn things around or created new problems. Progressive believers need a scapegoat for the failure of protocol governance to deliver the goods. They have found one in “systemic racism” and other systemic -isms, which are the social equivalent of the “systemic risk” that had nearly destroyed the global financial system. In both cases, a complex web of unacknowledged problems, policies with hidden risks, inadequate metrics, and short-sighted leaders had created an institutional crisis whose boundaries were everywhere and nowhere. The progressive ideology that some have labeled “wokeness” is really the protocol society trying to save itself from itself by radically doubling down on the left’s preferred tools of governance.
Not only does wokeness not threaten the status quo — it promises to patch the holes that have already opened up in status quo institutions. It is no coincidence that the biggest institutional boosters of wokeness were also the most stalwart advocates for the shift toward neoliberal governance: the Ford Foundation, the Open Society Foundations, technology and media companies, Fortune 500 corporations, large financial institutions, and the European Union. Their legitimacy and power have been greatly enhanced by the neoliberal turn and they would very much like to keep the status quo in place. They are perfectly happy to invest resources in fixing it.
This is why wokeness as a practice looks a lot like more protocol governance: a proliferation of regulations, metrics, scorecards, ratings, accreditations, standards, best practices, and all of the attendant compliance jobs. A woke Millennial banker might return from a mid-afternoon self-care break or a privilege-decentering mindfulness exercise to prepare a PowerPoint on Dodd–Frank compliance obligations for financial risk management standardization.
If these techniques have failed, progressives believe, it is only because they were not applied deeply enough. “Doing the work” means instantiating standards and best practices — of racial justice, sexual non-discrimination, and more — not in process, behavior, or policy but in one’s own soul.
In contrast to the managerial classes, the denizens of Middle America know exactly what has happened to them: the American way of life has been hollowed out. And they know exactly who is to blame: the coastal elite. But they have no idea how this has happened. For the American losers of globalization, a theory like QAnon provides a factually distorted but spiritually true fable of the conflict shaping their lives. It is current history through the funhouse mirror of a Hollywood thriller.
QAnon and similar conspiracy theories have proven most attractive to the small-business bourgeoisie and the heartland working class. In American life, they have been the losers of neoliberalism and globalization. Where the small-business bourgeoisie once benefited from artificially lower costs and a large and growing market, they increasingly find themselves squeezed by international competitors on one hand and concentrated monopolies on the other, all while the administrative state continually seeks to roll back the size exemptions in regulations that had once provided a moat against Big Business. The shift in wealth and power toward large cities has also taken jobs and dynamism from exurban and small city areas where the heartland working class engaged in manufacturing, agriculture, energy, retail distribution, and warehousing. Regardless of their personal economic circumstances, the small-business class and working class are likelier to live in parts of the country whose life chances are ebbing away, and to count in their immediate social networks victims of offshoring, drug abuse, or war.
Why is the right today more susceptible to conspiratorial thinking? It has nothing to do with a so-called authoritarian personality or any other microwaved mid-century psychobabble. Loneliness is growing fastest among the groups that constitute the Republican base, including rural people, older people, residents of post-industrial areas, and low-education whites. Their life expectancy is dropping, and deaths of despair are on the rise. The Republican Party today is the party of the America that is being gradually destroyed. As Nicolas Guilhot put it in a Boston Review essay on the social sources of QAnon, “the proliferation of conspiracy theories reflects the dismal poverty of a political culture that fails millions of individuals confronted with the loss of their world.”
Like colonial subjects everywhere, Middle Americans have a keen sense of who has stolen their country from them. But the citizens of “flyover country” are hostile to, and proudly ignorant of, the work-ways of the Professional Managerial Class. Even the elite of the “small business bourgeoisie,” while they may be worth hundreds of millions or even billions of dollars, tend to operate in the business sectors whose models have been least overtaken by protocol governance at the level of the firm: energy, construction, logistics, manufacturing, and real estate. Many of them made their fortunes from the destruction of the prior New Deal regulatory regime, but they did not understand that they would have to pay the piper as political and social demands for control emerged in a new key. Middle Americans have lost any feel for the new grammar of power — indeed, that is how they got into this situation in the first place.
As a result, those truly responsible for the hollowing out of America are completely obscure to their victims. QAnon and other conspiracy theories, such as birtherism, emerged out of right-wing populism in the wake of the financial crisis of 2008. That crisis and its aftermath are a critical moment in the origin stories of Steve Bannon and other key influencers of the right-wing conspiracy metaverse. And so it is useful to consider the Tea Party’s diagnosis at the time. Right-wing populists understood that something had gone very wrong with the American constitutional republic but evinced no serious engagement with how power operates today. The complex problems of the financial, real estate, and health care sectors, all against the backdrop of global financial and trade flows, were reduced to a caricatured platform focusing on the constitutionality of laws and on lowering taxes. Many Tea Party proposals called for restrictions on legislative activities that had long gone extinct in practice, and completely ignored the displacement of power outside the public sector. In 2012, Mitt Romney was hurt by his association with the “big bankers” and “Wall Street types” who had sought a bailout, but there was no sense that his critics on the right actually understood how a company like Bain Capital had operated or why it might be bad for America. Popular right-wing politics in America has almost become defined by its ignorance of how power and money operate today.
Bewildered by the layers of bureaucratic decision-making and professional standards-setting that end up imposing gender ideology in schools or replacing good, stable jobs with gig-economy wages, the right has resorted to a kind of kabuki-theater version of the story. The only thing more difficult to accept than that your way of life is being destroyed by insidious, malicious forces bent on destroying you is that your way of life is being destroyed entirely as a byproduct of impersonal global forces that are completely indifferent to the suffering they cause, perfectly willing to rip apart communities and families for increasing marginal profit. The result is a surrealist dream-state fantasy projection by which threatened Middle Americans work out the real intuitions impinging on their subconscious.
QAnon is what you would get if you gave a mediocre Hollywood screenwriter a theme — the destruction of the American way of life by a corrupt elite — and asked him to fill in the details. In contrast to the central figures of true conspiracies, who are almost always hidden deep in the bowels of a bureaucracy or network, the central figures of distorted conspiracy theories are almost always notable to start with. Q-type conspiracy theories take decisions that are largely made by countless grant-writers, management consultants, tax lawyers, and nonprofit executives and attribute them to Bill Gates or George Soros.
And yet, the most important polarity in American politics in the future will not be between Democrats and Republicans, or even the Professional Managerial Class and Middle America. Because the protocol is where power resides, the struggle that is only now beginning to emerge will be between two protocol elites: the managerial protocol elite of regulation and the technological protocol elite of computer code.
Though it may not be fully understood for many years, the advent of large language models now makes a clash of titans inevitable.
For the managerial elite, LLMs promise the ability to standardize and regulate with a degree of automation and precision that was impossible before, using engineered prompts to finally scale regulation and platform governance to match the decentralized scale of the underlying technical protocols. Moreover, if AI regulations lead to a monopoly or oligopoly of foundational AI models, this would rebuild Internet civilization in the model of the mainframe computer: hub-and-spoke, centralized, controllable.
For the technological elite, by contrast, LLMs, along with technologies like Web3, promise the ability to free protocols from the only remaining constraint upon them: the need for a human programmer to make the connections between one protocol and another. Text-based protocols — popularized in the early World Wide Web era to make it easier for humans to build for the Internet — now make it trivially easy for LLMs to automatically translate across protocols or build new ones on the fly (like the ones interspersed in this essay). The only things standing in the way are the managerial elites and their regulation-based protocols. AI seems like it could automate a lot of what they do too. Political contestation in the future will look a lot like a struggle over which protocols will win out.
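To make the point concrete, here is a minimal sketch of what “translating across protocols on the fly” might look like. The legacy message format, the target schema, and the call_llm helper are all invented for illustration; the helper stands in for whichever model API one actually uses.

```python
# A hedged sketch of LLM-as-protocol-translator. Because both sides are plain
# text, the model can generate the glue that a human integrator once wrote by
# hand. `call_llm` is a placeholder, not a real library function, and both
# message formats here are invented.

LEGACY_ORDER = "ORDER|cust=4412|sku=FP-149|qty=1|ship=EXPRESS"

PROMPT = f"""Translate this pipe-delimited order message into JSON with the
schema {{"customer_id": int, "sku": str, "quantity": int, "shipping": str}}.
Return only the JSON object.

Message: {LEGACY_ORDER}"""

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to a language model and return its reply."""
    raise NotImplementedError("wire this up to your model provider of choice")

# translated_json = call_llm(PROMPT)
# The point is not this particular conversion but that the translation layer
# between any two text protocols can now be produced on demand.
```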
Some of the oldest Internet communities were formed on Usenet, or User’s Network, a distributed discussion system, launched in 1980, built on protocols for generating, storing, and retrieving news articles and posts. Many of the concepts and practices of contemporary Internet culture originated on Usenet. On top of the protocols that ran the servers, early netizens developed protocols of the older sort: shared terms, interests, conversations, etiquette — in short, a shared culture. That was, until the Eternal September.
Every September, users would complain about the flood of newbies on Usenet: college freshmen gaining access to the Internet for the first time. Until they absorbed Usenet’s culture, they were a nuisance. But in 1993, America Online debuted direct access to Usenet for its customers, and wave after wave of newbies overwhelmed Usenet groups. Usenet’s culture never really recovered, giving rise to the idea of an eternal September.
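The protocol itself, for what it is worth, never went away. Python long shipped a client for it in the standard library (nntplib, deprecated in 3.11 and removed in 3.13), so on an older interpreter a sketch like the one below can still read a newsgroup. The server and group names are only examples; any public NNTP server would do.

```python
# Reading a Usenet group over NNTP with Python's (now-removed) nntplib module.
# Works on interpreters that still include it; host and group are examples.
from nntplib import NNTP

with NNTP("news.gmane.io") as server:
    resp, count, first, last, name = server.group("gmane.comp.python.general")
    print(f"{name}: {count} articles on the server")

    # Overview lines for the ten most recent articles: number, subject, etc.
    resp, overviews = server.over((last - 9, last))
    for number, fields in overviews:
        print(number, fields.get("subject", ""))
```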
The scaling power of the protocol tends to flatten anything human in the direction of what the protocol makes possible. The Eternal September marches on, billions of Internet users soon to be matched by trillions of Internet-connected devices and AI agents.
From the postwar years until the 2010s, Western elites heralded the power of globalization to usher in a new age of human flourishing. But around 2016, they began to realize that the protocols they had built had leached power away from the traditional institutions from which they derived their power. Ever since, elites have been attempting to regain control through lockdowns — of borders, of cryptocurrencies, of misinformation — in a last-ditch attempt to reimpose the logic of the centralized mainframe over the world of networked computers. Absent the kind of totalitarian power the Chinese Communist Party exerts, efforts like the Department of Homeland Security’s Disinformation Governance Board seem doomed not only to fail but to backfire immediately.
We have to live here now, in the world built out of protocols. We have to build new habits, new institutions, and new ideas to make sense of it. After the beginning of the Eternal September in 1993, recovering Internet culture meant retreating to more felicitous protocols, like forums and blogs. We face the same challenge on a civilizational scale.
The Internet writer Realityspammer sees the potential we are being forced toward: “Is culture truly and irreversibly stuck? No, there are all kinds of opportunities and resources for those with the vision to create new memetico-political assemblages” — new ways for people to model their actions for each other, new patterns of coordination, new habits, new ways of organizing culture — “but it cannot be done through aping the old means.”
In the same way that the industrial age called forth a political science of management, we need a political science for the age of the protocol. We need to conceptualize new ways of asserting agency, new ways of finding both “exit” and “voice” in malfunctioning systems, ways of embedding protocols in the kinds of human communities that can generate legitimacy and accountability, ways of fighting the complexity and obscurity that can hide the exercise of power.
And we need to reclaim the paradoxical freedom of irrationality and self-limitation. In a protocol society, to default to responding rationally to incentives is to default to the swarm, to enslaving desire and burnout-inducing freedom. Deleuze argues that our age demands that we cultivate a new form of idiot savant, who can turn “the absurd into the highest power of thought.” In the era of GPS, there is no longer a road less traveled by, no shortcut known only to locals, no path that is your secret. To see something new, one must find what is not on the map: the absurd traversal across a roof or through an unlocked window. Or one must find new paths in time instead of space, refusing to optimize by claiming some place, artifact, or community as one’s own, the same way that a romantic relationship becomes something more when both partners refuse to look for a better one.
Make virtues of irrational attachment, cultivated ignorance, and stubborn loyalty. The day belongs to those who master the new tools for building, but who preserve in their hearts a secret garden of earnest loves untrammeled by the swarm. ♥