Over the last week, the war between Iran, Israel, and the United States has played out in a second theater that never sleeps: the timeline of X/Twitter. The feed is saturated with claims about battlefield damage, casualty numbers, “secret” losses, and the health or death of leaders. The problem is that much of the evidence people think they are judging is no longer anchored in reality.
Independent researchers and reporters have documented a surge of AI-generated, mislabeled, and recycled “war footage” circulating widely on X/Twitter, including fake missile strike visuals and staged-looking scenes purporting to show captured U.S. troops. In multiple cases, digital-forensics experts concluded that viral clips were likely AI-generated.
Independent fact-checking is strained not only by volume, but by automation. The same week that AI fakes spread, platforms leaned on AI systems and inconsistent labeling regimes to “verify” what users were seeing. Reporting described X/Twitter’s chatbot Grok repeatedly failing at verification and, when challenged, responding with an AI-generated image as supposed corroboration. Another report criticized how major platforms label and moderate synthetic media, warning that current guardrails are flimsy during fast-moving crises. In a war, automated “verification” can become another channel for error.
This is what makes the modern conspiracy economy so efficient. A fabricated clip is not merely misinformation. It is a factory input that produces secondary claims: “The government is hiding the scale of the strike,” “official casualties are fake,” “the enemy staged the footage,” “the opponent’s air defenses are gone,” and, inevitably, “the leader is dead.”
That last category provides a case study in how AI and censorship feed each other. In the past seventy-two hours, a wave of online claims alleged that Israeli Prime Minister Benjamin Netanyahu had been killed. He responded by posting a café video, and independent verification work corroborated the location and timing. Yet the rumors did not end; they merely shifted into arguments that the video was a deepfake and that the supposed evidence from “official sources” was riddled with inconsistencies.
In a functioning information environment, a fact-check is an off-ramp. In a censored one, it becomes a new on-ramp, because “proof” can always be dismissed as curated, edited, or coerced.
War turns human suffering into numbers, and numbers into weapons. In the past week, public estimates for deaths, injuries, and displacement have shifted across official statements and major reporting, reflecting both the speed of events and the limits of independent verification under active bombardment.
Some of what people call “conspiracies” are, at root, reactions to withheld data. For example, recent reporting has noted that Israel has not publicly disclosed its own figures on incoming missile and drone attacks, even while outside institutes and regional governments publish their own tallies. In that gap, anyone can claim anything, and someone will believe it.
On the Iranian side, a single incident has done more to undermine trust than a thousand pundits ever could: the February 28 strike that hit an elementary school in Minab, killing at least 165 people, according to geolocated footage and munition analysis reviewed by journalists and investigators. U.S. officials initially denied responsibility, while later reporting described an internal investigation pointing to outdated intelligence and target data.
When governments dispute the plain meaning of evidence, people stop acting like jurors and start acting like detectives. That is not a compliment to internet sleuths. It is an indictment of the official information pipeline.
That gap is why open-source investigators have become a parallel verification layer, and why they are both valuable and vulnerable. Reporting over the past week described how recycled strike footage from earlier years has been reposted as if it were new footage from this war, forcing analysts to spend their time on basic triage, date checks, and geolocation rather than on deeper questions about policy and legality.
The popular story says the internet is flooded with conspiracy theories because nobody can stop people from speaking. The evidence from the last week points to a more uncomfortable explanation: the institutions with guns and subpoenas keep restricting the flow of verifiable information, and then act surprised when the public fills the gap with speculation.
In the United States, the Department of Defense barred photographers from covering the defense secretary’s briefings on the war, a break with established practice made without clear explanation. When a government limits the visual record of what it is doing, it is not merely managing optics; it is diminishing accountability.
Outside the briefing room, new choke points have appeared. Two U.S.-based satellite imagery companies restricted access to fresh imagery over large parts of the region during the conflict, adding delays and access controls precisely when independent verification is most needed. The firms said the decisions were not government-mandated, but the practical effect for journalists and researchers is the same: fewer timely facts, more rumors.
Israel’s censorship is more direct. Recent reporting describes tightened military censorship rules that restrict publication of strike locations and damage in ways designed to prevent “assistance to the enemy,” but that also narrow what the public can know in real time.
Iran’s controls run in the opposite direction: punishment for documenting the war. Reuters reports mass arrests of people accused of helping the enemy, including individuals allegedly detained for sending footage of strike locations. Whatever the merits of counter-espionage, the side effect is a public warning: do not record, do not share, do not ask.
Press freedom monitors have warned that detentions, intimidation, and restrictions are worsening across the region, leaving journalists to work under both bombs and censorship.
This is how censorship manufactures conspiracy theories: it creates a society that must guess. Censorship provides the conditions. Official dishonesty supplies the accelerant.
The central justification offered by Washington and Tel Aviv is the nuclear one: Iran was too close to a bomb, therefore war was “necessary.” Yet recent reporting has described intelligence disputes over that timeline, including sources pushing back on claims that Iran was weeks away from producing a nuclear weapon.
And when the nuclear line is not enough, the story expands. In a televised exchange, President Donald Trump said Iran “was going to take over the Middle East” if the United States had not struck first. International coverage noted the lack of evidence or expert support for such a sweeping claim. The constitutional bottom line is that Congress declares war, yet lawmakers have accused both parties of ceding that power as the Iran campaign widens.
This is the pattern that breeds conspiratorial thinking: maximal claims paired with minimal proof, and dissent treated as betrayal. The press secretary can call a story “fake.” A cabinet official can demand more “positive” coverage. None of this produces confidence. It produces suspicion.
International reactions over the past week underscore the credibility crisis. Leaders abroad have criticized the war as a dangerous example of unilateral intervention that violates international law, even while repeating the standard admonition that Iran must not be allowed to develop nuclear weapons. Condemnation plus caveat is not a coherent story. It is what politicians say when they do not want to choose between truth and alliance maintenance.
The mechanism is straightforward: when authorities restrict evidence, the demand for alternative narratives rises, and the supply rushes to meet it.
Research on “censorship backfire,” popularly known as the Streisand effect, has shown that suppression attempts can amplify attention to the targeted material. Psychology adds a force multiplier: when people perceive that their freedom to access or share information is threatened, reactance can increase the appeal of whatever is being restricted.
War makes those tendencies worse, because governments claim emergency powers, classify more, and punish deviation. That is why Dr. Ron Paul wrote, “Truth is treason in the empire of lies,” a line meant to capture how power reacts when its narrative is challenged.
He also asked, in a House-floor speech, what might happen if Americans recognized that the official reasons offered for war are “almost always based on lies” sold through propaganda and special interests. This is not a claim that every hidden plot is real. It is an argument about incentives: when the state lies as a routine tool, it trains the public to search for ulterior motives.
One of the oldest observations about war is that truth is the first casualty. It is not just because people lie. It is because governments treat information itself as a munition.
Even the domestic side of this war offers a live demonstration. This weekend, Tucker Carlson claimed that the Central Intelligence Agency read his texts with people in Iran and was preparing a criminal referral to the U.S. Department of Justice to treat him as a foreign agent. Reporting describes the claim as unproven and notes that neither the CIA nor the DOJ has publicly confirmed it.
So how does a society reduce the market for conspiracy theories? For starters, not with more censorship. Conspiracy theories are defeated by dull, verifiable facts, released quickly, with methods and evidence that can be independently checked.
That means lifting unnecessary restrictions on imagery and access that keep verification behind the pace of rumor. It means protecting journalists, and treating independent investigation as a public good rather than a hostile act.
It also means honesty about agency and responsibility. The basic record is that the United States and Israel initiated major strikes inside Iran. Iran’s retaliation fits the ordinary description of self-defense, while the initial strikes were a war of choice for the attackers.
If policymakers want fewer conspiracies about damage, deaths, and “secret goals,” they can start by presenting coherent war aims, credible evidence for claimed threats, and transparent accounting of civilian harm. They can also stop offering self-parody as justification, like suggesting further strikes might be launched “just for fun.”
The internet did not create the demand for conspiracy theories. The censorship regime did.