The US Department of Defense has awarded a contract to Scale AI to integrate artificial intelligence “into military operational and theater-level planning.”
According to the Defense Innovation Unit (DIU), the AI system, dubbed Thunderforge, will “accelerate decision-making, allowing planners to more rapidly synthesize vast amounts of information, generate multiple courses of action, and conduct AI-powered wargaming to anticipate and respond to evolving threats.”
The AI will initially be deployed to the Indo-Pacific Command and European Command theaters. The DIU did not disclose how much Scale AI will be paid to develop Thunderforge, but added that the system would also make use of Anduril’s Lattice program and “state of the art LLMs [large language models] enabled by Microsoft.”
As the Pentagon continues to turn to the tech sector for weapons and other war-fighting programs, the industry has increasingly sought to work with the Defense Department.
William D. Hartung, a senior research fellow at the Quincy Institute for Responsible Statecraft, explained last year how tech firms replicate the strategies of arms manufacturing giants, with many companies seeking Pentagon contracts by placing retired high-ranking military personnel in top positions.
Anduril was founded by tech entrepreneur Palmer Luckey specifically to seek contracts from the Pentagon. Google recently reversed a pledge not to build AI for weapons or surveillance.
The DIU press release said the new AI marks a decisive shift in how the Pentagon plans to fight wars. “Thunderforge marks a decisive shift toward AI-powered, data-driven warfare, ensuring that US forces can anticipate and respond to threats with speed and precision. Following its initial deployment, Thunderforge will be scaled across combatant commands,” the agency explained.
According to the Washington Post, Dan Tadross, the head of federal delivery at Scale who previously served with the Marines and researched AI military applications for the Navy, claimed Thunderforge was needed because “planning and operational process for the U.S. military has not evolved since Napoleon.”
While the Pentagon views AI as an important tool for fighting future wars, its effectiveness is unclear. The Defense Department has deployed Project Maven to the Middle East and Ukraine to aid with targeting; however, human analysts still outperform the AI.
Bloomberg’s Katrina Manson elaborated on Maven’s shortcomings, noting that “autonomous weapons systems aren’t perfect yet. While humans at the 18th Airborne Corps can correctly identify a tank 84% of the time, Maven gets it closer to 60%. And experts say that number goes down to 30% on a snowy day.”
An additional issue with AI is that it can create a bias among its human operators to accept whatever recommendation it produces. Chief Warrant Officer 4 Joey Temple explained that Maven is increasing the number of targets a soldier can approve. He estimates that the number of targets could be boosted from 30 to 80 per hour.
According to Bloomberg, Temple described “the process of concurring with the algorithm’s conclusions in a rapid staccato: ‘Accept. Accept. Accept.’” A second officer agreed, stating, “The benefit that you get from algorithms is speed.”
Speeding up the process may not yield better results. Israel’s military relies on a number of AI systems in planning and conducting operations, such as its Lavender program, which generates lists of names of suspected members of Hamas. An Israeli soldier explained that he spent only “20 seconds” on each name produced by Lavender before deciding to place that person on a kill list.