THE UNTOLD STORY: HOW THE JANUARY 6 RIOTS WERE PLANNED ON FACEBOOK—AND HOW THE COMPANY MISSED IT
They described one another as grifters, prima donnas, and clowns. They lied, reflexively and clumsily, in pursuit of money and relevance. Through the power of social media, they would change the course of American political history. The explanation for how Facebook’s defenses were so soundly defeated by such a hapless crew begins right after the Stop the Steal group’s take-down, and it starts at the top.
In the aftermath of January 6, reporters and investigators would focus on intelligence failures, White House intrigue, and well-organized columns of white nationalists. Those things were all real. But a fourth factor came into play: Influencers.
Donald Trump was, in a way, the ultimate right-wing influencer, skilled at gaming his platform of choice, Twitter, to bend the news cycle to his will. Behind him came a caravan of crazies, hoping to influence their own way to stardom. For many of them, the platform of choice was Facebook.
The Stop the Steal Facebook group was born out of a mid-Election Day chat between a mother-daughter duo named Amy and Kylie Kremer, conservative activists with a history of feuding with rival groups over fundraising. Formed with a few clicks by Kylie, the group’s name was chosen in an attempt to squat on the #StopTheSteal hashtag trending on Twitter and Facebook.
In later interviews with the special January 6 Committee set up by the House to investigate the events of that fateful day, neither woman could explain why their group took off compared to other similarly named entities. The fact that the Kremers created it via an already-sizable Facebook page with verified status couldn’t have hurt. Within a few hours of its creation, Stop the Steal’s membership had expanded into the hundreds of thousands, with both everyday users and vocal online activists flocking to it like moths to a lamp light.
Facebook took it down, but that didn’t mean the movement, and its attendant influencers, stopped gaining ground. Things got messy as personalities who vied for online attention through outrageous words and behavior began to clash. Ali Alexander, an ex-felon and far-right conspiracy theorist, set up a fundraising website and an LLC, “Stop the Steal,” soliciting donations that the Kremers and other activists would later accuse him of pocketing. Brandon Straka, a New York City hairstylist and founder of #WalkAway, a movement with a million Facebook followers that ostensibly encouraged Democrats to leave their party, also joined in.
The Kremers’ first attempt at convening the influencers on a conference call devolved into a shouting match. Straka sparred with Kylie Kremer, later telling investigators, “I found her to be emotionally unstable, and a—and incompetent.” Kylie Kremer in turn clashed with the conspiracy theorist media personality Alex Jones, who was also trying to get in on the action. She filed a report with Washington, D.C., police accusing him of “threats to bodily harm” after he allegedly threatened to push her off the stage while reportedly yelling, “I’m gonna do it. I’m gonna do it. I’m going to take over.”
Things ramped up once Trump tweeted, on December 19, that there would be a “big protest in D.C. on January 6th. Be there, will be wild!” The tweet blindsided the Kremers and White House staff alike, and it didn’t unite anyone so much as up the ante of the squabbling.
The stakes were getting bigger, and anyone who was anyone in the world of right-wing influencers wanted a piece of the action. A 73-year-old heiress to the Publix supermarket fortune donated $3 million in total to the effort, some of it going to Jones, some to Trump adviser Roger Stone, but a huge chunk—$1.25 million—going to Charlie Kirk, the founder of the conservative student group Turning Point USA, to “deploy social media influencers to Washington” and “educate millions.” When investigators later confronted Kirk with documents showing he had billed the heiress $600,000 for buses that were never chartered, he responded by invoking his Fifth Amendment right against self-incrimination.
This band of self-declared patriots came together at a unique moment in American history, with social media coming fully into its power, and they had a ready audience in the social media-obsessed president in the White House. Trump wanted the likes of Alexander and Jones to speak at his January 6 event, according to texts sent by one of his aides, Katrina Pierson. Or as she put it in a text to Kylie Kremer: “He likes the crazies.” There was a nominal division between the “crazies” on one side and the Kremers on the other, but all were coming together to make sure January 6 would be unforgettable.
“I mean, there were so many things that were being said or pushed out via social media that were just concerning,” Kylie Kremer told investigators, while defending the decision to maintain a loose alliance with what she variously called mercenary, larcenous, and quite possibly mentally ill social media activists who were posting about civil war, 1776, and their willingness to die for liberty. “It took all of us getting the messaging out to get all the people that came to D.C.,” she said.
The influencers’ belligerence was the source of their power. “The more aggressive people, like the Alis and all those guys, they began to get a little bit more prominence because of the language that they were using,” Pierson told investigators.
Trump may have promoted the Kremers’ official January 6 protest on his Twitter account but, in the end, one activist noted, they collected only 20,000 RSVPs on Facebook. Ali’s bootleg site, pumped with louder language and even wilder conspiracy theories, pulled in 500,000.
Pierson’s “crazies” were, in fact, the luminaries of Zuckerberg’s Fifth Estate. “These people had limited abilities to influence real-life outcomes—if Ali Alexander had put out a call for people to march on the Capitol, a few dozen people would have shown up,” says Jared Holt, who researched the run-up to January 6 for the Facebook-funded Atlantic Council’s Digital Forensic Research Lab. “But it’s the network effects where they took hold, where people who are more respectable and popular than Ali reshape his content.”
To keep the influencers hyping the January 6 rally—but nowhere near the president himself—Pierson helped broker a deal for what she called “the psycho list” to speak at a different event on January 5. Amid a frigid winter drizzle in D.C.’s Freedom Plaza, Ali and Straka ranted alongside Jones, disgraced former New York police commissioner Bernard Kerik, and the guy behind the “DC Draino” meme account, which had 2.3 million followers on Instagram alone.
The next day, at the real rally, the Kremers instructed security to be ready if Ali, Jones, or Straka attempted to rush the stage and seize the microphone by force.
Straka told investigators that he would have liked to speak on January 6 himself but, barring that, he made the best of things. “I’ve got my camera, I’ve got my microphone,” he recalled thinking. “I am going to turn it into an opportunity to create content for my audience.”
Although Facebook had vaguely alleged that it had taken down the group because of prohibited content, the truth was that the group hadn’t violated Facebook’s rules against incitement to violence, and the platform had no policy forbidding false claims of election fraud. Based on the group’s malignancy, however, Facebook’s Content Policy team had declared a “spirit of the policy” violation, a rare but not unheard-of designation that came down to “because we say so.”
Zuckerberg had accepted the deletion under emergency circumstances, but he didn’t want the Stop the Steal group’s removal to become a precedent for a backdoor ban on false election claims. During the run-up to Election Day, Facebook had removed only lies about the actual voting process—stuff like “Democrats vote on Wednesday” and “People with outstanding parking tickets can’t go to the polls.” Noting the thin distinction between the claim that votes wouldn’t be counted and that they wouldn’t be counted accurately, Samidh Chakrabarti, the head of Facebook’s civic-integrity team, had pushed to take at least some action against baseless election fraud claims.
Civic hadn’t won that fight, but with the Stop the Steal group spawning dozens of similarly named copycats—some of which also accrued six-figure memberships—the threat of further organized election de-legitimization efforts was obvious.
Barred from shutting down the new entities, Civic assigned staff to at least study them. Staff also began tracking top de-legitimization posts, which were earning tens of millions of views, for what one document described as “situational awareness.” A later analysis found that as much as 70 percent of Stop the Steal content was coming from known “low news ecosystem quality” pages, the commercially driven publishers that Facebook’s News Feed integrity staffers had been trying to fight for years.
Civic had prominent allies in this push for intelligence gathering about these groups, if not for their outright removal. Facebook had officially banned QAnon conspiracy networks and militia groups earlier in the year, and Brian Fishman, Facebook’s counter-terrorism chief, pointed to data showing that Stop the Steal was being heavily driven by the same users enthralled by fantasies of violent insurrection.
But Zuckerberg overruled both Facebook’s Civic team and its head of counter-terrorism. Shortly after the Associated Press called the presidential election for Joe Biden on November 7—the traditional marker of a race being definitively over—Facebook staff lawyer Molly Cutler assembled roughly 15 executives who had been responsible for the company’s election preparation. Citing orders from Zuckerberg, she said the election de-legitimization monitoring was to stop immediately.
Though Zuckerberg wasn’t there to share his reasoning, Guy Rosen, Facebook’s head of integrity, hadn’t shied away from telling Chakrabarti that he agreed with Zuckerberg’s decision—an explanation that Chakrabarti found notable enough to make a record of. He quoted Rosen in a note to the company’s HR department as having told him that monitoring of efforts to stop the presidential transition would “ ‘just create momentum and expectation for action’ that he did not support.”
The sense that the company could put the election behind it wasn’t confined to management. Ryan Beiermeister, whose work leading the 2020 Groups Task Force was widely admired within both Civic and the upper ranks of Facebook’s Integrity division, wrote a note memorializing the strategies her team had used to clean up what she called a “powderkeg risk.”
Beiermeister, a recent arrival to Facebook from Palantir, congratulated her team for the “heroic” efforts they made to get Facebook’s senior leadership to sign off on the takedowns of toxic groups. “I truly believe the Group Task Force made the election safer and prevented possible instances of real world violence,” she concluded, congratulating the team’s 30 members for the “transformative impact they had on the Groups ecosystem for this election and beyond.”
Now, with the election crisis seemingly over, Facebook was returning its focus to engagement. The growth-limiting Break the Glass measures were going to have to go.
On November 30, Facebook lifted all demotions of content that de-legitimized the election results. On December 1, the platform restored misinformation-rich news sources to its “Pages You Might Like” recommendations and lifted a virality circuit breaker. It relaxed its suppression of content that promoted violence the day after that, and resumed “Feed boosts for non-recommendable Groups content” on December 7. By December 16, Facebook had removed the caps on the bulk group invitations that had driven Stop the Steal’s growth.
Only later would the company discover that more than 400 groups posting pro-insurrectionist content and false claims of a stolen election were already operating on Facebook when the company lifted its restrictions on bulk invitations. “Almost all of the fastest growing FB Groups were Stop the Steal during the period of their peak growth,” an internal document later noted.
A later examination of the social media habits of people arrested for their actions on January 6 found that many “consumed fringe Facebook content extensively,” much of it coming via their membership in what were sometimes hundreds of political Facebook groups. On average, those groups were posting 23 times a day about civil war or revolution.
Facebook had lowered its defenses in both the metaphorical and technical sense. But not all the degradation of the company’s integrity protections was intentional. On December 17, a data scientist flagged that a system responsible for either deleting or restricting high-profile posts that violated Facebook’s rules had stopped doing so. Colleagues ignored it, assuming that the problem was just a “logging issue”—meaning the system still worked, it just wasn’t recording its actions. On the list of Facebook’s engineering priorities, fixing that didn’t rate.
In fact, the system truly had failed, beginning in early November. Between then and mid-January, when engineers realized their error, the system had given a pass to 3,100 highly viral posts that should have been deleted or labeled “disturbing.”
Glitches like that happened all the time at Facebook. Unfortunately, this one produced an additional 8 billion “regrettable” views globally, instances in which Facebook had shown users content it knew was trouble. A later review of Facebook’s post-election work tartly described the flub as a “low light” of the platform’s 2020 election performance, though the company disputes that it had a meaningful impact: at least 7 billion of the bad content views were international, a spokeswoman noted, and only a portion of the American material dealt with politics. Overall, she said, the company remains proud of its pre- and post-election safety work.
Facebook had never gotten out of the red zone on Civic’s chart of election threats. Now, six weeks after the election, the team’s staffers were scattered, Chakrabarti was out, and protections against viral growth risks had been rolled back.
In the days leading up to January 6, the familiar gauges of trouble—hate speech, inflammatory content, and fact-checked misinformation—were again ticking up. Why wasn’t hard to guess. Control of the Senate depended on a Georgia runoff election scheduled for January 5, and Trump supporters were beginning to gather in Washington, D.C., for the protest that Trump had promised would “be wild!”
The Counter-terrorism team reporting to Brian Fishman was tracking pro-insurrection activity that he considered “really concerning.” By January 5, Facebook was preparing a new crisis coordination team, just in case, but nobody at the company—or anywhere in the country, really—was quite ready for what happened next.
On January 6, speaking to a crowd of rowdy supporters, Trump repeated his claim that he had won the election. And then he directed them toward the Capitol, declaring, “If you don’t fight like hell, you’re not going to have a country anymore.” Floods of people streamed toward the Capitol and, by 1:00 p.m., rioters had broken through the outer barriers around the building.
Fishman, out taking a walk at the time, sprinted home, according to a later interview with the January 6 Committee. It was time to start flipping those switches again. But restoring the safeguards that Facebook had eliminated just a month earlier came too late to keep the peace at Facebook, or anywhere else. Integrity dashboards reflected the country’s social fabric rending in real time, with reports of false news quadrupling and calls for violence up tenfold since the morning. On Instagram, views of content from what Facebook called “zero trust” countries were up sharply, suggesting hostile entities overseas were jumping into the fray in an effort to stir up additional strife.
Temperatures were rising on Workplace, too. For those on the front lines of the company’s response, the initial silence from Facebook’s leadership was deafening.
“Hang in there everyone,” wrote Mike Schroepfer, the chief technology officer, saying company leaders were working out how to “allow for peaceful discussion and organizing but not calls for violence.”
“All due respect, but haven’t we had enough time to figure out how to manage discourse without enabling violence?” an employee snapped back, one of many unhappy responses that together drew hundreds of likes from irate colleagues. “We’ve been fueling this fire for a long time and we shouldn’t be surprised that it’s now out of control.”
Shortly after 2:00 p.m., rioters entered the Capitol. By 2:20 p.m., the building was in lockdown.
Several hours passed before Facebook’s leadership took their first public steps, removing two of Trump’s posts. Privately, the company revisited its determination that Washington, D.C., was at “temporarily heightened risk of political violence.” Now the geographic area at risk was the entire United States.
As rioters entered the Senate chamber and offices around the building, while members of Congress donned gas masks and hid where they could, Facebook kept tweaking the platform in ways that might calm things down, going well past the set of Break the Glass interventions that it had rolled out in November. Along with additional measures to slow virality, the company ceased auto-deleting the slur “white trash,” which was being used quite a bit as photos of colorfully dressed insurrectionists roaming the Capitol went viral. Facebook had bigger fish to fry than defending rioters from reverse racism.
Enforcement operations teams were given a freer hand, too, but it wasn’t enough. Everything was going to have to be put on the table, including the near-inviolability of Trump’s right to use the platform. As evening descended on D.C., Trump released a video on the advice of several advisers, who pitched it as an attempt to calm tensions. “We have to have peace, so go home,” the embattled president said. But he couched it in further declarations that the election had been stolen and also addressed the rioters, saying, “We love you. You’re very special.” Facebook joined YouTube and Twitter in taking it down, and then suspended his account for 24 hours. (It would go on to extend the ban through Biden’s inauguration, scheduled for January 20, before deciding to boot him from the platform indefinitely.)
Zuckerberg remained silent through January 6, leaving Schroepfer to calm tensions the following morning. “It’s worth stepping back and remembering that this is truly unprecedented,” he wrote on Workplace. “Not sure I know the exact right set of answers but we have been changing and adapting every day—including yesterday.”
More would yet be needed. While the company was restricting its platform in ways it had never attempted in a developed market, the measures weren’t enough to suppress “Stop the Steal,” a phrase and concept that continued to surge. The Public Policy team, backed by Facebook’s leadership, had long held that the company should not remove, or even downrank, content unless it was highly confident that it violated the rules. That worked to the advantage of those deploying #StopTheSteal. Having ruled that claims of a stolen election weren’t inherently harmful, Facebook had left the groups as a whole alone after taking down the Kremers’ original group.
“They were not able to act on simple objects like posts and comments because they individually tended not to violate, even if they were surrounded by hate, violence and misinformation,” a later internal review noted. Only after the fires were out in the Capitol, and five people were dead, did Facebook realize how badly it had gotten played. A core group of coordinated extremists had hyperactively posted, commented, and re-shared their movement into existence.
Stop the Steal wasn’t just another hashtag. It was a “rallying point around which a movement of violent election de-legitimization could coalesce,” the review said. It also gently suggested who at the company was to blame: leadership and Public Policy. “Seams” in platform rules had allowed “the larger wave of the movement seeping through the cracks,” the review found.
There was no time to point fingers at the C-suite, or anywhere else for that matter. #StopTheSteal was surging in the wake of January 6. No sooner had Integrity teams nuked the hashtag and mapped out networks of advocates using it than they identified a new threat: the same insurrectionist community was uniting to take another shot. The new rallying point was the “Patriot Party,” which pitched itself as a far-right, Trump-supporting alternative to the Republican Party.
In a gift to Facebook investigators racing to track them, those organizing the new “party” gathered in private, admin-only groups on the platform, essentially providing a roadmap to their central leadership. What Facebook would do with that information, however, was an open question.
Under normal circumstances, Facebook investigators would spend weeks compiling a dossier on the misbehavior of each individual entity it wanted to shut down—and then try to get a mass take-down approved by Facebook’s Public Policy team. This time, however, nobody in leadership was worried about over-enforcement.
Facebook’s new crisis-time approach was the tried-and-true method that bartenders reserve for rowdy drunks: Stop serving them and throw the bastards out. The company began heavily down-ranking the term “Patriot Party” across Facebook and Instagram and started summarily deleting the central nodes of the network promoting it. Ali Alexander’s accounts were dead meat. So was Brandon Straka’s #WalkAway movement, along with the accounts of numerous other Stop the Steal influencers.
The combat between Facebook and a movement it helped birth lasted for weeks. When the company’s internal state of emergency was finally lowered from its highest level on January 22, two days after Biden’s inauguration, there was no question that the tactics had succeeded. With the flip of a few switches, the megaphone the platform had given the insurrectionists was ripped from their hands.
“We were able to nip terms like ‘Patriot Party’ in the bud,” the later review noted.
Blocking the insurrectionists’ attempts to regroup was an accomplishment, though one tempered with apprehension for many former Civic staffers. On leadership’s orders, they had just smothered an attempt to organize what might have become a new political party. Ironically, Zuckerberg had earlier rejected much less heavy-handed efforts to make the platform more stable on the grounds that they restricted user voice.
In the immediate aftermath of January 6, Facebook kept its head down. When company sources and Facebook Communications staff returned reporters’ phone calls or answered emails, which was rare, they acknowledged a good deal of soul-searching. The closest thing to a defense of the company’s performance that anyone offered up was noting that the principal responsibility for the assault on the Capitol belonged to Trump.
On January 7, as most of the world was trying to figure out what the hell had just happened, a brief essay titled “Demand Side Problems” appeared on a blog where Andrew Bosworth posted his thoughts about philosophy and leadership. His thesis was that Facebook’s users had the same insatiable desire for hate as Americans had for narcotics. Therefore, Facebook’s efforts to suppress hate, while well intentioned, would fail just like the “war on drugs” had—at least “until we make more social progress as a society.”
Nobody took notice—this was a Facebook executive’s personal philosophy blog, after all—but the post still made quite a statement. A day after a hate- and misinformation-driven riot had shaken democracy, one of Zuckerberg’s top lieutenants was blaming the wickedness of society at large—in other words, Facebook’s users.
The note was an abridged version of a previous Workplace post Bosworth had made, kicking off a debate with fellow employees in which he’d gone further. Cracking down too hard, he wrote, was a bad idea because hate-seeking users would “just satisfy their demand elsewhere.”
Bosworth’s argument contained some nuance: As the entity that oversaw the supply of content, Facebook was obligated “to invest huge amounts” in user safety. And the executive—who oversaw Facebook’s hardware business, not content moderation—said in other notes that he favored more work to address recommendations’ bias toward outrage- and virality-related problems.
But the takeaway from many of the Integrity staffers who read the post was simpler. If people had to be bigots, the company would prefer they be bigots on Facebook.
By the time Facebook’s Capitol riot postmortems got underway, Facebook whistleblower Frances Haugen and I had been meeting roughly every week in the backyard of my home in Oakland, California. To thwart potential surveillance to the greatest degree possible, I paid cash for a cheap Samsung phone and gave it to Haugen, whom I referred to as “Sean McCabe.” The name was her choice, borrowed from a friend in the local San Francisco arts scene who had died just after we met. McCabe had been a troublemaker with a taste for excitement, and she was pretty sure he would have approved of what she was doing.
“Sean” was now using the burner phone to take screenshots of files on her work laptop, transferring the images to a computer that she’d bought in order to have a device that had never been connected to the internet.
One of those files was the document that resulted from Facebook’s postmortem, titled “Stop the Steal and the Patriot Party: The Growth and Mitigation of an Adversarial Harmful Movement.” Many of the people involved in drafting it were former members of Civic. It was illustrated with a cartoon of a dog in a firefighter hat in front of a burning Capitol building. (The document would be published by BuzzFeed in April.)
“What do we do when a movement is authentic, coordinated through grassroots or authentic means, but is inherently harmful and violates the spirit of our policy?” the report asked. “We’re building tools and protocols and having policy discussions to help us do better next time.”
Less than a year and a half earlier, at Georgetown, Zuckerberg had scolded unnamed social media critics who wanted Facebook to “impose tolerance top-down.” The journey toward progress, he declared, “requires confronting ideas that challenge us.” Facebook was now confronting the possibility that its platforms weren’t destined to produce healthy outcomes after all. A small group of hyperactive users had harnessed Facebook’s own growth-boosting tools to achieve vast distribution of incendiary content.
In the wake of January 6, Haugen had been brought into a working group meant to address “Adversarial Harmful Networks.” It was tasked with identifying the lessons that could be learned from Facebook’s already completed Patriot Party work, which was seen as a template for future actions. After mapping out malignant movements and identifying their leadership structures, Facebook could, as needed, undertake a lightning strike to knock them out, then launch mop-up operations to prevent new leaders from regrouping. She and I jokingly branded this new strategy the “SEAL Team Six approach.”
The team’s official name would undergo many changes, a reflection perhaps of how uncomfortable Facebook was with tackling the toughest issues. “Adversarial Harmful Networks” became “Harmful Topic Communities,” “Non-Recommendable Conspiracy Theories,” “Non-Violating Harmful Narratives,” and—as of the time of writing—“Coordinated Social Harm.”
No matter what the team was called, the strategy was the same. In the wake of Stop the Steal and the Patriot Party, the company had begun pivoting its vast data analysis tools to understanding the inner workings of movements that looked like trouble. That cute corgi pup meant business.
After gathering the behavioral data and activities of 700,000 supporters of Stop the Steal, Facebook mapped out the connections among them and began dividing them into ringleaders (those who created content and strategy), amplifiers (prominent accounts that spread those messages), bridgers (activists with a foot in multiple communities, such as anti-vax and QAnon), and finally “susceptible users” (those whose social circles seemed to be “gateways” to radicalism).
Together, this collection of users added up to an “information corridor.” Messages that originated among a movement’s elite users pulsed through paths connecting Facebook’s users and products, with every re-share, reply, and reaction spreading them to an ever-widening audience. Over time, those hidden vectors had become well-worn, almost reflexive, capable of transmitting increasingly bizarre stuff on an unprecedented scale.
Analysis showed that the spread of hate, violence, and misinformation occurred at significantly higher rates along the Stop the Steal information corridor than on the platform at large. But Facebook wasn’t trying to filter out the bad stuff anymore. Instead, it was looking to identify and jam the transmission lines.
The work involved a lot of statistics and machine learning, but the core principles were not hard to grasp. To kill a movement, Facebook would need to remove the ringleader accounts all at once, depriving it of its brain. Their lieutenants—who would likely try to replace their leaders in a “backlash” against the removals—could be slapped with strict limits on creating new pages, groups, and posts. Amplifiers and bridgers could merely be de-amplified through downranking. And, finally, the company would seek to prevent connections from forming between “susceptible” people, with Facebook’s recommendations actively steering them away from content and users that might take them deeper down whichever rabbit hole they were peering into.
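In effect, the strategy reduced to a lookup from network role to intervention tier. The short Python sketch below is purely illustrative of that mapping as described above; the class names, fields, and downranking factor are assumptions made for the example, not anything drawn from Facebook’s actual systems or code.

```python
# Illustrative sketch only: a toy mapping from the network roles described
# above to the tiered interventions. All names and values are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto


class Role(Enum):
    RINGLEADER = auto()   # creates the movement's content and strategy
    LIEUTENANT = auto()   # likely to step up once ringleaders are removed
    AMPLIFIER = auto()    # prominent accounts that spread the messages
    BRIDGER = auto()      # straddles communities such as anti-vax and QAnon
    SUSCEPTIBLE = auto()  # social circle is a "gateway" to radicalization


@dataclass
class Intervention:
    remove_account: bool = False              # delete outright, all at once
    block_new_pages_and_groups: bool = False  # strict limits on creating new surfaces
    downrank_factor: float = 1.0              # below 1.0 means de-amplified in ranking
    steer_recommendations_away: bool = False  # keep movement content out of recommendations


def intervention_for(role: Role) -> Intervention:
    """Map each role to the intervention tier sketched in the text."""
    if role is Role.RINGLEADER:
        return Intervention(remove_account=True)
    if role is Role.LIEUTENANT:
        return Intervention(block_new_pages_and_groups=True)
    if role in (Role.AMPLIFIER, Role.BRIDGER):
        return Intervention(downrank_factor=0.2)
    return Intervention(steer_recommendations_away=True)


if __name__ == "__main__":
    for role in Role:
        print(role.name, intervention_for(role))
```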
The team ran experiments with these proposals, using a model based on the historical Stop the Steal data. The results suggested that the approach would have worked, kneecapping the movement on Facebook long before January 6. “Information Corridors could have helped us identify the social movement around de-legitimization from an array of individually noisy text signals,” said one memo, titled “Information Corridors: A Brief Introduction.” Other documents that Haugen shared with me also seemed to suggest that Facebook was convinced it could have headed off “the growth of the election de-legitimizing movements that grew, spread conspiracy, and helped incite the Capitol Insurrection.”
The enthusiasm for the approach was great enough that Facebook created a Dis-aggregating Networks Task Force to take it further. The company had gone from allowing conspiracy-minded groups of all stripes to flourish, to extreme concern over how people were getting sucked in. In one coordinating call, which Haugen participated in, the head of the task force noted that the company had twelve different teams working on methods to not just break up the leadership of harmful movements but inoculate potential followers against them. “They are vulnerable and we need to protect them,” the head of the project declared on the call about “susceptible” audiences.
This concerned Haugen, a progressive Democrat who also had libertarian leanings, the kind of tech employee who made a habit of attending Burning Man. Facebook, she realized, had moved from targeting dangerous actors to targeting dangerous ideas, building systems that could quietly smother a movement in its infancy. She heard echoes of George Orwell’s thought police. To her, this was getting creepy, and unnecessary.
Facebook had years of research showing how it could have changed its platform to make it vastly less useful as an incubator for communities built around violent rhetoric, conspiracy theories, and misinformation. It could avoid killing exponentially growing conspiracy groups if it prevented their members from inviting a thousand strangers to join them in a day. It wouldn’t need to worry so much about “information corridors” re-sharing misinformation endlessly if it capped re-shares, as Civic had long pushed it to do.
But, following a familiar script, the company was unwilling to do anything that would slow down the platform—so it was embarking on a strategy of simply denying virality to hand-picked entities that it feared. And the work was moving ahead at a high speed.
By the spring of 2021, Facebook was experimenting with shutting down “information corridors” of its “harmful topic communities.” It chose as a target Querdenken, a German movement that promoted a conspiracy theory that a Deep State elite, in concert with “the Jews,” was pushing COVID-19 restrictions on an unwitting population. Adherents of the movement had attacked and injured police at anti-lockdown protests, but the largest Querdenken page on Facebook still had only 26,000 followers. In other words, Querdenken was small, violent, and short of friends in the German government. That made it an excellent guinea pig.
Facebook wasn’t that concerned about killing Querdenken. It just wanted to make sure that it could. As it prepared to run the experiment, Facebook divided the movement’s adherents into treatment and control groups, altering their News Feeds accordingly. The plan was to start the work in mid-May.
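Splitting adherents into treatment and control groups is standard experimental practice: assign each user to a bucket deterministically so the split stays stable, then alter the News Feed only for the treatment side. What follows is a minimal, hypothetical sketch of that kind of bucketing; the function name, experiment label, and hashing scheme are assumptions for illustration, not Facebook’s experimentation tooling.

```python
# Hypothetical sketch of a stable treatment/control split for an experiment
# like the one described; not Facebook's actual framework.
import hashlib


def assign_group(user_id: str,
                 experiment: str = "querdenken_corridor_test",
                 treatment_share: float = 0.5) -> str:
    """Deterministically place a user in 'treatment' or 'control'.

    Hashing the user ID together with the experiment name keeps each user's
    assignment stable across sessions and independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform value in [0, 1]
    return "treatment" if bucket < treatment_share else "control"


if __name__ == "__main__":
    for uid in (f"user_{i}" for i in range(5)):
        print(uid, assign_group(uid))
```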
As it turned out, the company wouldn’t have to wait that long for its next chance to use its new tools.
Zuckerberg vehemently disagreed with people who said that the COVID-19 vaccine was unsafe, but he supported their right to say it, including on Facebook. That had been the CEO’s position since before a vaccine even existed; it was part of his core philosophy. Back in 2018, he had gone so far as to say the platform shouldn’t take down content that denied the Holocaust, because not everyone who posted Holocaust denialism “intended” to. (He later clarified that he found Holocaust denial “deeply offensive,” and his handling of the issue angered Sheryl Sandberg, a fellow Jew, who eventually succeeded in persuading him to reverse himself.)
Under Facebook’s policy, health misinformation about COVID-19 was to be removed only if it posed an imminent risk of harm, such as a post telling infected people to drink bleach. “I think that if someone is pointing out a case where a vaccine caused harm, or that they’re worried about it, that’s a difficult thing to say, from my perspective, that you shouldn’t be allowed to express at all,” Zuckerberg had told Axios in a September 2020 interview.
But early in February 2021, Facebook began to realize that the problem wasn’t that vaccine skeptics were speaking their mind on Facebook. It was how often they were doing it.
A researcher randomly sampled English-language comments containing phrases related to COVID-19 and vaccines. A full two-thirds were anti-vax. The researcher’s memo compared that figure with public polling on the prevalence of anti-vaccine sentiment in the U.S.; the polling number was fully 40 points lower.
Additional research found that a small number of “big whales” was behind a large portion of all anti-vaccine content on the platform. Of 150,000 posters in Facebook groups that were eventually disabled for COVID-19 misinformation, just 5 percent were producing half of all posts. And just 1,400 users were responsible for inviting half of all members. “We found, like many problems at FB, this is a head-heavy problem with a relatively few number of actors creating a large percentage of the content and growth,” Facebook researchers would later note.
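The “head-heavy” pattern is simple to check for in any posting dataset: rank users by volume and measure how much of the content the top sliver produces. Here is a toy sketch of that kind of concentration check, with fabricated numbers chosen only to mirror the rough shape described above; none of it comes from Facebook’s data.

```python
# Toy concentration check for a "head-heavy" distribution: what share of all
# posts comes from the most prolific fraction of users? Data is fabricated
# purely for illustration.
from collections import Counter


def share_from_top(post_counts: Counter, top_fraction: float) -> float:
    """Fraction of all posts produced by the top `top_fraction` of users."""
    counts = sorted(post_counts.values(), reverse=True)
    k = max(1, int(len(counts) * top_fraction))
    return sum(counts[:k]) / sum(counts)


if __name__ == "__main__":
    # 1,000 hypothetical users, most posting a little, a few "big whales" posting a lot.
    posts = Counter({f"user_{i}": 2 for i in range(1000)})
    for i in range(50):          # the most active 5 percent...
        posts[f"user_{i}"] = 40  # ...post twenty times as much as everyone else
    print(f"Share of posts from the top 5%: {share_from_top(posts, 0.05):.0%}")
```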
One of the anti-vax brigade’s favored tactics was to piggyback on posts from entities like UNICEF and the World Health Organization encouraging vaccination, which Facebook was promoting free of charge. Anti-vax activists would respond with misinformation or derision in the comments section of these posts, then boost one another’s hostile comments toward the top slot with almost incomprehensible zeal. Some were nearing Facebook’s limit on commenting, which was set at three hundred comments per hour. As a result, English-speaking users were encountering vaccine skepticism 775 million times each day.
As with previous malign efforts, such as Russian trolls or Stop the Steal, it was hard to gauge how effective these tactics were in persuading people to avoid the vaccine. But directionally, the effects were clear. People logging onto Facebook’s platforms would log off believing, at the very least, that the vaccine was more controversial than it actually was.
Investigation of the movement uncovered no evidence of inauthentic behavior or disallowed tactics. That meant it was again time for information corridor work. The team created to fight “Dedicated Vaccine Discouragement Entities” set the goal of limiting the anti-vax activity of the top 0.001 percent of users—a group that turned out to have a meaningful effect on overall discourse.
By early May 2021, as Facebook neared the launch of its Querdenken experiment, which would end up running for a few months, the situation with COVID-19 misinformation had gotten so bad that the company found itself dipping into its Break the Glass measures. As recently as six months earlier, Facebook had hoped it would never need to use those measures in the United States at all. Now it was deploying them for the third time in half a year.
Unlike earlier conflagrations, Facebook couldn’t blame this round on Trump. Since the fall of 2016, the company had, not unreasonably, pointed to the erratic president as the precipitating factor behind fake news, racial division, and election de-legitimization. He may have unleashed a new vitriol in American politics, but now he was out of office and off the platform. This movement had its own set of originators.
A state of crisis was becoming the norm for Facebook, and the Integrity team’s approach to its work was beginning to reflect that. Facebook started working on building a “kill switch” for each of its recommendation systems. Integrity team leaders began to espouse a strategy of “Always-On Product Iteration,” in which every new scramble to contain an escalating crisis would be incorporated into the company’s plans for the next catastrophe.
“Yay for things incubating on Covid and becoming part of our general defense,” a team leader wrote, putting a positive spin on expectations for an increasingly unstable world.
Even as Facebook prepared for virally driven crises to become routine, the company’s leadership was becoming increasingly comfortable absolving its products of responsibility for feeding them. By the spring of 2021, it wasn’t just Bosworth arguing that January 6 was someone else’s problem. Sandberg suggested that January 6 was “largely organized on platforms that don’t have our abilities to stop hate.” Zuckerberg told Congress that they need not cast blame beyond Trump and the rioters themselves. “The country is deeply divided right now and that is not something that tech alone can fix,” he said. In some instances, the company appears to have publicly cited research in ways its own staff had warned were inappropriate. A June 2020 review of both internal and external research had cautioned that the company should not argue that higher rates of polarization among the elderly—the demographic that used social media least—were proof that Facebook wasn’t causing polarization.
Though the argument was favorable to Facebook, researchers wrote, Nick Clegg should avoid citing it in an upcoming opinion piece because “internal research points to an opposite conclusion.” Facebook, it turned out, fed false information to senior citizens at such a massive rate that they consumed far more of it despite spending less time on the platform. Rather than vindicating Facebook, the researchers wrote, “the stronger growth of polarization for older users may be driven in part by Facebook use.”
All the researchers wanted was for executives to avoid parroting a claim that Facebook knew to be wrong, but they didn’t get their wish. The company says the argument never reached Clegg. When he published a March 31, 2021, Medium essay titled “You and the Algorithm: It Takes Two to Tango,” he cited the internally debunked claim among the “credible recent studies” disproving that “we have simply been manipulated by machines all along.” (The company would later say that the appropriate takeaway from Clegg’s essay on polarization was that “research on the topic is mixed.”)
Such bad-faith arguments sat poorly with researchers who had worked on polarization and analyses of Stop the Steal, but Clegg was a former politician hired to defend Facebook, after all. The real shock came from an internally published research review written by Chris Cox, Facebook’s chief product officer.
Titled “What We Know About Polarization,” the April 2021 Workplace memo noted that the subject remained “an albatross public narrative,” with Facebook accused of “driving societies into contexts where they can’t trust each other, can’t share common ground, can’t have conversations about issues, and can’t share a common view on reality.”
But Cox and his coauthor, Facebook Research head Pratiti Raychoudhury, were happy to report that a thorough review of the available evidence showed that this “media narrative” was unfounded. The evidence that social media played a contributing role in polarization, they wrote, was “mixed at best.” Though Facebook likely wasn’t at fault, Cox and Raychoudhury wrote, the company was still trying to help, in part by encouraging people to join Facebook groups. “We believe that groups are on balance a positive, depolarizing force,” the review stated.
The writeup was remarkable for its choice of sources. Cox’s note cited stories by New York Times columnists David Brooks and Ezra Klein alongside early publicly released Facebook research that the company’s own staff had concluded was no longer accurate. At the same time, it omitted the company’s past conclusions, affirmed in another literature review just ten months before, that Facebook’s recommendation systems encouraged bombastic rhetoric from publishers and politicians, as well as previous work finding that seeing vicious posts made users report “more anger towards people with different social, political, or cultural beliefs.” While nobody could reliably say how Facebook altered users’ off-platform behavior, how the company shaped their social media activity was accepted fact. “The more misinformation a person is exposed to on Instagram the more trust they have in the information they see on Instagram,” company researchers had concluded in late 2020.
In a statement, the company called the presentation “comprehensive” and noted that partisan divisions in society arose “long before platforms like Facebook even existed.” For staffers that Cox had once assigned to work on addressing known problems of polarization, his note was a punch to the gut. Their patron—someone who had read their own far more rigorous research reviews, been briefed on analyses and experiments, and championed their plans to address the design flaws of groups—was saying that the problem they were assigned to was as real a threat as werewolf attacks.
“We had all celebrated when Cox came back. Before he left he had been a counterweight in the org, someone who’d say, ‘This is not something I think we should be doing,’ ” recalled one director, who cited Cox as his inspiration for standing up to the company’s leadership in ways that damaged his career. “I may have gotten my estimation of the man wrong.”
Excerpted from Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets by Jeff Horwitz. Published by Doubleday. Copyright © 2023 by Jeff Horwitz. All rights reserved.