“Who has the power to define the truth?”
A Doha Debates conversation between Glenn Greenwald, Siva Vaidhyanathan, and me on free speech, content moderation, and the realities of platform governance
Earlier this month, Doha Debates released a two-hour conversation between me, Glenn Greenwald, and Siva Vaidhyanathan framed around the question: “Who has the power to define the truth?” It was an interesting debate, so I thought I’d share it.
“Who has the power to define the truth?” is a great TV prompt. It’s maybe a little abstract as the frame for a debate about content moderation and free speech, but how you instinctively answer it reveals something right off the bat. You can treat it as a normative question: who ought to have that power? But if you see it as a descriptive question, the answer is pretty clear: the platforms do. They may not explicitly define truth, but they influence what people believe is real. They shape what billions of people see through daily decisions about amplification, labeling, downranking, and removal—what trends, what stays visible, what feels mainstream, what seems fringe. Governments know this, which is why they spend so much energy working the refs.
The difference in instinct might’ve been the main gap between Glenn and me. We agree on the risks of concentrated platform power, and we share a preference for decentralization. But I kept wanting to drag the conversation back to the system we actually have. When platform executives say, “We don’t want to be arbiters of truth,” I hear deflection, not lofty principle. Great—I don’t want them to be arbiters of truth either. But they own the distribution infrastructure, and they exercise curatorial and moderation power. There is no neutral, and they’re constantly making very opaque ranking decisions to keep users on site. It would be nice if they were more transparent about what they are arbiters of.
One of the recurring moves in the debate about social media and moderation is to speak in sweeping moral terms—anti-censorship, freedom, truth—and then, when pressed on how to enshrine these things for the greatest number of users in policy or code, to retreat into vagueness: I’m just a journalist, I’m just asking questions, I’m just defending free speech. That posture is comfortable, but it’s also hollow. It lets one side avoid the messiness of hard calls in the real world. The philosophical debate becomes a distraction from the governance issue.
So my goal for the debate was to get past vibes and into the practical:
What responsibility should platforms have—if any—for addressing things like networks of fake accounts? Viral rumors that have real-world impact?
How should we think about tradeoffs in moderation and speech, particularly where things like harassment are concerned?
How should we distinguish between legitimate government persuasion and coercion if a government reaches out to a platform? What mechanisms should we put in place?
Where we disagreed
Glenn and I have, in a sense, both done work exposing hidden power. He’s focused for a long time on government surveillance. I’ve spent the last decade studying inauthentic manipulation campaigns, with a focus on state actors and exposé work spanning five continents. That is, until studying disinformation campaigns was made politically controversial—conveniently for manipulators.
Glenn’s key concern is that the term “disinformation” can be used to suppress dissent. For him, it’s a label people apply to frame contested claims as false, including opinions, often to delegitimize political enemies. I understand the point; the term can get politicized. But it remains useful for what it has historically referred to, which is real and hasn’t gone away: coordinated campaigns involving covert deception, where the actors, behaviors, or networks are inauthentic, and where the content is often some form of propaganda. I brought up a campaign run by the Pentagon (part of a broader network that my former team at Stanford Internet Observatory helped expose) that at one point targeted Filipino citizens with claims about the Sinovac vaccine. These were fake accounts behaving in obviously coordinated ways to manufacture the appearance of consensus; that’s covert manipulation regardless of whether any single post by a fake account happened to contain something true. Politicizing “disinformation” hard in the other direction, treating it as a word that belongs only in scare quotes, has resulted in the U.S. government and social media companies cutting the capacity to respond to such campaigns.
A second disagreement was about platform speech and censorship. Platforms have First Amendment rights to moderate and curate. They suppress spam. Many ban legal pornography; X does not. Rumble bans antifa content, per its Terms of Service. That’s unique to Rumble! I think it’s actually good that different platforms offer different experiences. Glenn argued for a “platform monopoly” theory: that the biggest platforms should be treated more like common carriers, with must-carry obligations akin to a telephone company’s.
There are people in tech policy sympathetic to that view. But a must-carry regime (which courts have recently viewed skeptically under the First Amendment) doesn’t address issues with deciding what to amplify or how to handle harassment, and it isn’t clear what it means for inauthentic behavior. With more platforms emerging, I’d rather lower switching costs and expand user choice than turn private companies into state-mandated carriers of speech.
The debate didn’t focus entirely on wonky stuff: we argued over whether the “Censorship Industrial Complex” theory accurately describes what happened during COVID and the election (no!); what lawsuits about platform speech suppression actually revealed; and Meta’s (stupid) lab leak policy. Siva and Glenn got into a few fights.
Where we agreed
There was more common ground between the three of us than most people might expect.
We all want transparency, and recognize that we have less of it now than we did a few years ago.
We’re all uncomfortable with concentrated platform power, and see decentralized, protocol-based approaches as a powerful way to give users more control.
We even partially agreed on some of the “censorship” questions, like that moderating harassment is not all bad, and that there is a difference between labeling content and removing it. Glenn expressed concern about who does the labeling (quis custodiet ipsos custodes?) but conceded that adding context is different from taking things down.
My positions, for the record
I sometimes get flak for saying I like debates, but I think they’re worthwhile because the listener hears both points of view at once. They’re most valuable when they aren’t litigating basic facts, but discussing a complex issue. And after a few years of watching bad-faith actors caricature my views in paywalled newsletters, I appreciate that the “other side” gets to encounter what I actually believe. 😉 Which is that:
Moderation isn’t “censorship,” particularly because things like labeling add more speech. Governance keeps platforms usable: ask your friends how much time they spend on unmoderated platforms. Also, platforms have a First Amendment right to do it—but the best policies are transparent and maximize user agency.
Opaque, concentrated platform power is a problem whether the CEO is sympathetic to you or not. Platforms should be auditable and escapable, and we should have many options.
For political content, takedowns often backfire. Labels, counterspeech, friction, and design tweaks work better than turning content into forbidden knowledge.
Government has a 1A right to speak to platforms. The red line is coercion, and transparency is how you tell the difference; FIRE has proposed good legislation to require it.
We need more user control across the ecosystem—protocols, interoperability, and middleware to deliver real choice. Community Notes is a good example of something that works but could still be improved on.
Disinformation campaigns are real, and foreign actors interfere in elections. The question isn’t just “did this swing an election?” (usually not, but it matters anyway). It impacts trust and incentives. Our current government has sought to gut all response capacity to address it, public and private, in pursuit of retribution for a fantasy that the 2020 election was stolen. That is unambiguously bad.
And finally: “do nothing” isn’t neutrality in a system where something is always being algorithmically amplified. Governance by default just rewards whoever games the system best.
So if you care about how speech, power, and platforms collide, watch here. Or, if you’re snowed in and want the full 2 hours, here’s the long cut. Audio-only link here.
In other news:
Busy week of Lawfare writing! Two pieces I’m proud of here, because I think they tackle important issues and I just like how they turned out. 🙂 I’m just about to hit my one-year mark at Lawfare, and I’m finally getting used to writing something on a weekly basis.
A review of the National Academies of Sciences report “Understanding and Addressing Misinformation About Science.” But also: a look at MAHA as a health populist movement, and why “misinformation” is an inadequate frame: “Misinformation Studies Meets The Raw Milk Renaissance”
And Berin Szoka of TechFreedom and I wrote about the legal ramifications of Grok’s nudification debacle, the meaning of the word “censorship” from a legal, factual, and normative standpoint where computer-generated child exploitation and nonconsensual nudes are concerned, and why so few of the usual online defenders of children in the US Congress spoke out: “Grok, ‘Censorship’, & The Collapse of Accountability”

