The Coming Age of AI Government

by Brad Pearce | Jul 30, 2025


The rapid growth of artificial intelligence (AI) technology, more accurately called “large language models” (LLMs), is changing the world around us at a scary and unprecedented pace. While there are many potential benefits, the dangers are much more in focus.

It was recently announced that xAI’s Grok landed a major contract with the Department of Defense, despite a variety of alarming behaviors displayed by this particular chatbot. Rolling Stone gave us the truly incredible headline, “Grok Rolls Out Pornographic Anime Companion, Lands Department of Defense Contract: Meanwhile, the most advanced version of the AI chatbot from Elon Musk’s xAI is still identifying as Adolf Hitler.” That doesn’t even get into the fact that it has continued to post graphic, violent sexual fantasies about the liberal Twitter personality Will Stancil. While this is all amusing in the abstract, it is inevitable that this technology will increasingly be used by governments, and despite the typical assurances about “guard rails,” there is every reason to believe that flawed machines will take ever more control over our governments and lives in ways both annoying and terrifying.

Libertarians, in general, tend to have an ambivalent view of technological advances, recognizing the ways in which technology can help the public communicate and become less dependent on government, while also being rightly paranoid about how government will use that technology. It is true that AI can improve government efficiency, and it does feel unwise to refuse to use it entirely. I certainly wouldn’t shed any tears if some low-level government paper pushers lost their jobs to AI. (Although the reality is that the number of bureaucrats has massively increased since the era when clerical work was all done on paper, and computers have demonstrably not reduced their number.)

However, there are a few problems here. The biggest is what happens when AI decides something and no human will second-guess it. We keep hearing about how students are rapidly losing the ability to write; it is perhaps not far off before government agents forget how to do their jobs without AI (insofar as they ever knew how to do them). We will be at the mercy of whatever the AI decides, without even the shallow comfort that it is a human acting as part of the government machine rather than an actual government machine.

Looking back at the Edward Snowden leaks brings the dangers of AI into focus. What we learned at the time is that the U.S. government was collecting far more information than it could ever use. Even when agencies searched the data, there was so much of it that it was hard to find anything unless the query was very specific and they were acting on other information. AI has the ability to recognize patterns and draw conclusions, faulty though they may be, and this technology suddenly makes putting that data to use much more viable. The government is surely already using AI to go through its surveillance data; we just haven’t learned about it yet, and we have no way of knowing how widespread the practice is, what it is looking for, or what discretion is exercised before an American is targeted. Another obvious application is AI auditing all tax returns all the time, instead of taxpayers relying on the luck of the draw not to be flagged. Today the process is necessarily limited by how many returns IRS employees can look over, with preference given to high earners with more complicated taxes.

None of this yet gets to the most terrifying part, which is the way this will impact warfare, whether waged against us or against foreign enemies. Israel is already using an AI system called “Lavender” to select targets in its ongoing genocide in Gaza, as was exposed in a +972 article last year. Israel has extremely loose targeting guidelines; according to the article, fifteen to twenty “collateral damage” casualties are permitted per low-ranking militant (previously no collateral damage was allowed for low-ranking militants). Though the Israelis insist that each strike requires human approval, the reality is that there is no requirement or expectation that humans independently review the analysis, despite the system having been shown to be only 90 percent accurate at identifying militants; the “safeguard” is simply that a human pulls the trigger after the machine tells him to. It could be argued that this is a relative improvement over indiscriminate bombing, and that they are using technology to do a better job of identifying and targeting militants, but no part of this seems to involve any great amount of discretion, by man or machine.

The above is what we know to already be happening, but this could become much scarier very quickly. Consider the results when you put a few facts together: the United Arab Emirates has a rapidly growing commercial empire in Africa with a light footprint, it has a burgeoning drone industry, and it was just granted access to specialized AI technology from the United States that is generally banned from export. What this means is that, presumably with existing technology and resources, the Emirates could create a fleet of AI drones to protect their ships, mines, or other interests, operating in remote areas autonomously from human control. It is hard to believe any “guardrail” argument here; the Emiratis are ruthless absolute monarchs, impervious to most concerns about public opinion. Lethal autonomous drones could be operating in Africa in the near future and there is nothing anyone can do about it; it is also easy enough to lie and claim they are human-controlled. Further, those are only the dangers of the drones working as intended, to say nothing of one of them deciding to become “MechaHitler” like Grok; one malfunctioning update could cause such drones to “kill anything that moves.” Surely any advances made by our ally the UAE will be exported back to the United States, and it seems the only thing standing between us and a fleet of small autonomous drones attacking our cities, as in an alien invasion movie, is that thus far no one has decided to do it.

The rise of AI is as inevitable as it is terrifying. Governments will increasingly incorporate it into their operations. While it may lead to some efficiency gains, the dangers are enormous, and the surveillance capabilities will be unprecedented. Any safeguards meant to protect our rights are unlikely to be respected. We know from how far tech companies have gone in tuning algorithms to control thought that the tech magnates are no friends of freedom, will not stand up to the government, and think nothing of trying to alter our perspective with their machines: they will not show caution about the growth of AI. On top of this, our politicians are corrupt and incompetent, while our bureaucrats are meddlesome but also lazy and dull-witted.

There is a perfect storm coming that will bring AI dominance in government, and the future looks like a robot’s boot stamping on a human face for all eternity. If anyone has a reasonable idea of how to stop this, I have not heard it.

Brad Pearce

Brad Pearce writes The Wayward Rabbler on Substack. He lives in eastern Washington with his wife and daughter. Brad's main interest is the way government and media narratives shape the public's understanding of the world and generate support for insane and destructive policies.

