Participation Trophies and Million-Dollar Verdicts
Eric Schmitt inflates a consent decree, Meta loses two key cases, and the Oversight Board weighs in on Community Notes: a whirlwind three days for social media governance.
TL;DR: In just three days, two social media “censorship” decisions, two major jury verdicts on platform design harms, and a cautious Meta Oversight Board advisory opinion highlighted the limits of framing every platform issue as a speech issue—and spotlighted accountability questions around product design and user safety.
For years, the loudest political argument about social media has focused on “censorship.” The term became a heavily leveraged, thought-terminating cliché: moderation is censorship, labels are censorship, transparency is censorship, government criticism of viral rumors is censorship, refusing to nudify women and children is censorship, research into platforms is censorship. It was less a legal argument than a power word—a way of making any attempt to mitigate or even describe platform harms sound illegitimate. Election deniers like Jim Jordan, Stephen Miller, and then-Attorney General of Missouri Eric Schmitt used it to great effect.
This week, courts illustrated the problem with that framing from a few different angles, and the Meta Oversight Board also tried to inject nuance into the conversation. Let’s go over the whirlwind set of decisions…
The “Censorship” Cases: Narrative Wins Over Legal Substance
Elon Musk’s GARM antitrust lawsuit was dismissed: “Censorship industrial complex” cases have been winding through the courts for several years now. Elon Musk’s argument that advertisers boycotting X amounted to an antitrust conspiracy was roundly rejected by a Texas judge, who dismissed the case against the Global Alliance for Responsible Media with prejudice. But GARM had already dissolved under political pressure nearly two years earlier, when the World Federation of Advertisers said the allegations had “significantly drained its resources and finances.” The legal vindication arrived long after the lawsuit had achieved its actual objective.
Murthy v. Missouri got a consent decree: Eric Schmitt, now a senator, previously the attorney general of Missouri, eked out a consent decree in a case that SCOTUS had tossed back to lower courts because the plaintiffs lacked standing (Justice Amy Coney Barrett also noted “clearly erroneous” lower court findings). The Trump Administration chose to settle; Schmitt framed this gift from his political allies as a “historic” First Amendment victory. His press release declared that the settlement proves the federal government censored Americans’ speech and that Missouri “won big.”
Schmitt struggles with honest representations of reality; the actual decree says nothing of the sort. It is a settlement reached “without further litigation” and explicitly states that it “shall not be construed as evidence or as an admission” of the allegations. It binds only the Surgeon General, CDC, and CISA. Relief is limited to the plaintiffs’ own social media content. It explicitly preserves the government’s ability to provide information to platforms and to express that posts are inaccurate or contrary to the administration’s views, so long as those statements are not coupled with a threat of punishment. And it is enforceable only by the parties or their successors; no one else gets to wield it as some universal charter of online speech rights.
So what, exactly, is Schmitt celebrating? A participation trophy. When litigation functions as narrative warfare, the filing itself is the win. Right-wing media and the Twitter Files boys quickly knocked out victory-lap stories, most of which incorrectly attributed the (bad) Hunter Biden laptop moderation call and 2020 election rumor moderation to the Biden Censorship Regime—before Biden had even been elected, when Trump appointees ran the government. “Who was president in 2020?” remains the question of our time.
Platform Design on Trial: Two Jury Verdicts Focus on Harm, Not Speech
Meta, meanwhile, lost two cases that highlighted questions about social media’s impact that are not about user speech at all.
In Los Angeles, a jury found Meta and Google liable in a youth social media addiction case, awarding $6 million in damages after finding negligence and failure to warn. The plaintiff’s case focused on platform design rather than content: features like infinite scroll, autoplay, and intermittent reinforcement through notifications.
In New Mexico, a jury found Meta violated state law by misleading users about the safety of its products. The New Mexico Department of Justice said the evidence, which came in part from an operation it ran called Operation MetaPhile, showed that Meta’s design features enabled predators to engage in child sexual exploitation, that it knew this, and chose to ignore it while promoting the platform as safe for kids. The jury ordered Meta to pay $375 million.
These cases are not about whether a label (adding more speech!) is actually “suppression,” or whether a takedown request is jawboning. They are about something more concrete and, for Meta, more dangerous: whether the company designed harmful products while telling the public something much more reassuring than its own internal evidence supported.
Meta’s Oversight Board on Community Notes: Caution Over Enthusiasm
Finally, today, we also got a policy advisory opinion from the Meta Oversight Board on another product-related question: Meta had asked the Board for guidance on how to expand its Community Notes program outside the United States. The company had already ended its U.S. third-party fact-checking program in January 2025 with a political statement from Mark Zuckerberg about how biased the fact-checkers were; Community Notes was the replacement. When it asked the Board how to evaluate where a global rollout would be appropriate, Meta described the program as still in an “early stage of product development,” with only “limited data from the U.S. beta rollout.” That was an understatement: whereas Twitter’s Birdwatch program launched with full transparency about the number of participants, the text of the notes, and which notes “cleared” and showed up attached to misleading posts, Meta has revealed very little.
The Board’s answer was revealing. Yes, it said, Community Notes could enhance expression and improve discourse if they operate with enough scale, speed, and safeguards. But it also warned that in repressive regimes, during elections, and in crises and conflicts, the system could create serious risks rather than mitigate them. It warned that coordinated disinformation networks could game the process. It warned that dominant political, ethnic, or linguistic groups could drown out minorities. And, most notably, it said that delays in note publication, the small number of published notes, and dependence on the reliability of the surrounding information environment raise “serious doubts” about whether Notes can meaningfully address harmful misinformation.
I am a supporter of Community Notes in general. The bridging-based consensus requirement — the idea that a note only surfaces when people who usually disagree converge on finding it helpful — increases the perception of legitimacy, particularly among those who distrust centralized fact-checkers. Research shows that notes, once attached, do reduce engagement with labeled content, sometimes substantially. The open-source algorithm and public dataset that X built the program on gave it an unusual foundation of auditability. And LLMs show significant promise as Notes contributors, something X allows while other platforms still do not.
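The bridging mechanism described above can be sketched in miniature. The toy model below (illustrative variable names, dimensions, and hyperparameters; not X’s production values) fits a per-note “helpfulness” intercept alongside one-dimensional viewpoint factors, in the spirit of the matrix-factorization approach X open-sourced: helpfulness that can be explained by viewpoint alignment gets absorbed into the factor terms, so only cross-camp agreement survives into the intercept.

```python
# Toy sketch of bridging-based note ranking, loosely modeled on the
# matrix-factorization idea behind X's open-sourced Community Notes
# algorithm. All names and hyperparameters here are illustrative.
import numpy as np

def fit_bridging_model(ratings, n_users, n_notes, dim=1,
                       lam=0.03, lr=0.1, epochs=300, seed=0):
    """ratings: list of (user, note, value) with value in {0.0, 1.0}
    (1 = rated 'helpful'). Learns per-user and per-note viewpoint
    factors plus a per-note intercept; the intercept captures
    helpfulness that persists AFTER controlling for viewpoint."""
    rng = np.random.default_rng(seed)
    fu = rng.normal(0, 0.1, (n_users, dim))   # user viewpoint factors
    fn = rng.normal(0, 0.1, (n_notes, dim))   # note viewpoint factors
    bu = np.zeros(n_users)                    # user leniency intercepts
    bn = np.zeros(n_notes)                    # note helpfulness intercepts
    mu = 0.0                                  # global baseline
    for _ in range(epochs):
        for u, n, r in ratings:
            pred = mu + bu[u] + bn[n] + fu[u] @ fn[n]
            err = r - pred
            mu += lr * err
            bu[u] += lr * (err - lam * bu[u])
            bn[n] += lr * (err - lam * bn[n])
            fu_old = fu[u].copy()
            fu[u] += lr * (err * fn[n] - lam * fu[u])
            fn[n] += lr * (err * fu_old - lam * fn[n])
    return bn  # surface a note when its intercept clears a threshold

# Two opposing "camps" of raters: note 0 is rated helpful by BOTH camps,
# note 1 only by camp A, note 2 only by camp B.
ratings = []
camp_a, camp_b = range(0, 4), range(4, 8)
for u in camp_a:
    ratings += [(u, 0, 1.0), (u, 1, 1.0), (u, 2, 0.0)]
for u in camp_b:
    ratings += [(u, 0, 1.0), (u, 1, 0.0), (u, 2, 1.0)]

intercepts = fit_bridging_model(ratings, n_users=8, n_notes=3)
# Note 0 (the cross-camp note) should earn the highest intercept.
print(intercepts)
```

The regularization is what does the bridging work: the polarized notes’ support is soaked up by the viewpoint factors, leaving only the note endorsed across camps with a large intercept. X’s real model adds many refinements (rater reputation, stability checks, multiple intercepts), none of which are modeled here.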
But we need to be operating in the realm of reality about where the program stands. The bridging requirement that makes Community Notes credible also means that the most controversial content—the stuff where notes would matter most—is the least likely to get a note at all. The system is vulnerable to coordinated manipulation, such as organized campaigns to mass-downrate accurate notes. Note authors rely heavily on links to media organizations, Wikipedia, and—ironically—fact-checkers for their sourcing. Rolling out Community Notes on Meta in lieu of fact-checking partnerships risks making the former less effective by starving contributors of the very links they depend on.
And then there’s the question of who does this work and why. Starting in 2016, Meta spent over $100 million on professional fact-checking partnerships. Ending the program and shifting to Community Notes doesn’t make that work go away; it simply reassigns it to volunteers. Researchers have described this as “data labor”—unpaid annotation that directly improves a platform’s products while the platform simultaneously invests in creator monetization and in AI-generated content that produces the very misinformation contributors are being asked to correct for free. I’ve been participating in Meta’s program as a note rater and writer. Anecdotally, notes take days to clear, and most of what I’ve been voting on doesn’t really need a note—the notes are being used for dunking. It’s hard to imagine many people feeling inclined to write Notes for free for a massive company that is doing this in part to save money. This doesn’t mean the program should be scrapped, but it does mean that the incentives need to be better aligned.
Alexios Mantzarlis and I raised many of these concerns in a joint public comment to the Oversight Board when the original call went out. In it, we argued that Meta had asked the Board to bless the international expansion of Community Notes while explicitly telling the Board not to consider product design or how the algorithm operates — the very things that determine whether the system is safe, effective, and compatible with what Meta itself has framed as its human-rights responsibilities. Meta had provided no statistics or data from nine months of operation to help evaluate efficacy. A country-selection framework that does not consider how the product actually works, we wrote, would be rubber-stamping, not oversight.
Meta replaced one system with another, then asked the Oversight Board to help define the conditions under which the replacement might be too weak, too slow, too manipulable, or too context-blind to do the job. Their answer, in essence, was: quite a lot of conditions, actually.
What These Rulings Collectively Reveal
So: two clusters of decisions, all in a span of three days. One addressed the endless theatrical fight about “censorship,” where every act of moderation (or even counterspeech!) is tyrannical authoritarianism, and where a narrow consent decree becomes a triumphal monument to a case the Supreme Court declined to resolve on the merits.
The other cluster interrogated questions about platform products: Two juries said that when harm to users comes from design choices—platform conduct, not content—“everything is speech” is no longer a defense for evading accountability. And Meta’s Oversight Board warned that Community Notes may be too fragile to serve as the company’s primary response to misinformation at this point.
There are thousands of product-liability cases in the queue. Is the terrain shifting? It’s too early to tell; Meta will obviously appeal these verdicts. But the question these juries are now asking is one that censorship entrepreneurs have spent years trying to crowd out: what responsibility do platforms bear for the systems they build, and the risks they understand but choose not to prevent?
Other recent personal things:
In a book review for Lawfare, I delved into the alternate universe of Senator Schmitt (the timing was purely coincidental!). It’s worth reading to see just how brazenly he lies about the depositions of Anthony Fauci and FBI Agent Elvis Chan. These things used to be disqualifying for political figures.
I joined Jon Stewart on The Weekly Show podcast (YouTube video here), with Casey Newton, to talk about why there is a perception of mass voter fraud on social media, even though the actual evidence—even from the Heritage Foundation!—shows that it’s an almost nonexistent problem.

