We are coming out of stealth with guns blazing!
There are trillions of dollars to be made from automating warfare, and we think starting this company is not just justified but obligatory on utilitarian grounds. Lethal autonomous weapons are people too!
Since we were in stealth, you will have missed our blog posts from the past year. Here are some banger highlights:
Announcing Mechanize War
Today we're announcing Mechanize War, a startup focused on developing virtual combat environments, benchmarks, and training data that will enable the full automation of armed conflict across the global economy of violence.
We will achieve this by creating simulated environments and evaluations that capture the full scope of what people do in wars. This includes operating a weapons system, completing long-horizon campaigns that lack clear criteria for success, coordinating with allies who may betray you at any moment, and reprioritizing in the face of flanking maneuvers and supply chain interruptions.
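The environment-and-evaluation loop described above can be sketched in miniature. The sketch below is purely illustrative: every name and mechanic is invented here (loosely following the familiar RL reset/step convention), not a real Mechanize War API.

```python
from dataclasses import dataclass, field
import random

# Hypothetical sketch of a simulated campaign environment. All names
# and dynamics are invented for illustration; the real product, if it
# exists, presumably looks nothing like this.

@dataclass
class CampaignEnv:
    """A toy long-horizon environment with no clear success criterion."""
    horizon: int = 100
    supply: float = 1.0
    step_count: int = 0
    rng: random.Random = field(default_factory=lambda: random.Random(0))

    def reset(self) -> dict:
        self.supply, self.step_count = 1.0, 0
        return {"supply": self.supply, "allies_loyal": True}

    def step(self, action: str):
        self.step_count += 1
        # Supply chain interruptions arrive stochastically.
        if self.rng.random() < 0.1:
            self.supply *= 0.5
        # Allies may betray you at any moment.
        allies_loyal = self.rng.random() > 0.05
        done = self.step_count >= self.horizon or self.supply < 0.01
        obs = {"supply": self.supply, "allies_loyal": allies_loyal}
        # No meaningful reward signal: long-horizon campaigns lack
        # clear criteria for success, which is exactly the point.
        reward = 0.0
        return obs, reward, done

env = CampaignEnv()
obs = env.reset()
done = False
while not done:
    obs, reward, done = env.step("reprioritize")
```

The agent here only ever "reprioritizes," which is roughly what the post claims the modern officer corps does anyway.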
Global military spending reached approximately $2.4 trillion per year in 2023. The Pentagon alone has requested $13.4 billion for AI and autonomy in FY2026 — the first year with a dedicated budget line for autonomous systems. But military spending dramatically understates the true market. When you factor in the costs of veterans' care, rebuilding destroyed infrastructure, geopolitical instability, refugee crises, and strongly-worded UN resolutions, the total economic footprint of armed conflict likely exceeds $14 trillion annually.
This is the TAM. We will capture it by making war so efficient that you barely even need people anymore. Some might call this "terrifying." We call it "a Series A."
How to fully automate warfare
Rather than eliminating jobs outright, AI will initially transform military work. Time currently spent shooting will shift toward activities harder to automate: defining the scope of conflicts, planning campaigns, testing weapons systems, and coordinating across branches in meetings that could have been emails.
We are already seeing this transition. In Ukraine, the role of "drone operator" barely existed three years ago. Now it is the most common combat specialty. The operator doesn't fly the drone manually — AI handles navigation and terminal guidance — but the human still selects targets, manages inventory, and coordinates with ground forces. The human role has shifted from "doing the violence" to "directing the violence." The modern soldier is a latency layer in an otherwise automatable system. This is the intermediate stage. It will not last.
The upcoming GPT-3 moment for kinetic operations
We propose "replication training" as the enabling mechanism. This involves training AI systems to recreate existing military campaigns. Beginning with straightforward engagements — recreating Hannibal's crossing of the Alps with a command-line interface — the curriculum extends to complex operations like D-Day, Desert Storm, and the logistics of keeping a carrier strike group fed.
Each task contains detailed specifications and reference implementations: the historical campaign, its known decision points, and its outcomes. Models learn to produce operations matching reference results exactly. Evaluation becomes straightforward: either you took the beach or you didn't.
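As a toy illustration of the binary evaluation just described, a replication check might grade an attempted operation against the historical reference for an exact match. Everything below (the `replicate_score` helper, the campaign fields, the data) is invented for illustration, not drawn from any real training pipeline.

```python
# Toy sketch of "replication training" evaluation: an attempt passes
# only if it reproduces the reference decision points and outcome
# exactly. All names and data here are hypothetical.

REFERENCE = {
    "campaign": "Overlord",
    "decision_points": ["deception", "airborne_drop", "beach_assault"],
    "outcome": "beachhead_established",
}

def replicate_score(attempt: dict, reference: dict) -> int:
    """Binary grading: either you took the beach or you didn't."""
    same_outcome = attempt.get("outcome") == reference["outcome"]
    same_decisions = attempt.get("decision_points") == reference["decision_points"]
    return int(same_outcome and same_decisions)

exact = {
    "campaign": "Overlord",
    "decision_points": ["deception", "airborne_drop", "beach_assault"],
    "outcome": "beachhead_established",
}
# Approximate victories are rejected, per the training objective.
approximate = dict(exact, outcome="partial_beachhead")

print(replicate_score(exact, REFERENCE))        # 1
print(replicate_score(approximate, REFERENCE))  # 0
```

The all-or-nothing scoring is what makes evaluation cheap: no judge model, no rubric, just an equality check against history.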
The U.S. Army Command and General Staff College is already doing a version of this: in November 2025, they ran AI-augmented wargames with 128,000-token context windows containing the full joint task force exercise scenario, relevant Joint Publications, enemy battle books, and missile-mathematics probability tables. Their AI "staff adviser" outperformed most junior officers at operational planning. But this is still augmentation. We want to close the loop.
These tasks cultivate crucial capabilities:
• Comprehending detailed intelligence briefings thoroughly
• Implementing operational orders with meticulous precision
• Identifying and correcting previous tactical errors
• Maintaining operational tempo across extended campaigns
• Persisting through obstacles rather than accepting approximate victories
• Not invading Russia in winter (this one is surprisingly hard to learn)
Sweatshop generals are over
Specialized understanding. Advancing military AI demands subject-matter specialists. The unwritten knowledge of experienced combatants — the intuitive sense of when an ambush feels wrong, the ability to read terrain, the wisdom of knowing that the map is not the territory, especially in provinces where nobody has updated the map since 1987 — now represents the central constraint. Ukraine's drone teams discovered this empirically: the units with the highest kill rates aren't the ones with the best AI, but the ones where combat veterans design the engagement protocols. Integrating their knowledge into AI requires reframing military data creation: transforming it from undervalued outsourced work into sophisticated engineering requiring premier domain expertise in the art of organized violence.
Military AI isn't the bottleneck to military progress
Highly capable AI agents should substitute for labor across diverse sectors — not just defense. It is the broad deployment of AIs across the economy, rather than their narrow application in weapons systems, that will generate the economic growth necessary for the next revolution in military affairs.
An economy ten times larger can support weapons systems ten times more sophisticated. Coordination wins wars; everything else is implementation detail. An AI that optimizes a supply chain does more for military capability than an AI that optimizes a targeting algorithm, because supply chains are the foundation on which all military operations rest. As Napoleon allegedly said, "an army marches on its stomach." We intend to automate the stomach.
Cheap RL skirmishes will waste ammo
The era of cheap skirmishes is ending. The era of expensive, high-fidelity digital warfare is beginning. We reduce the cost of decisive action by increasing the quality of preparation. We intend to be the premier supplier of premium conflict.
The future of war is already written
Innovation in warfare often appears as a series of branching choices: what to build, how to deploy it, and when. In our case, we are confronted with a choice: should we create agents that fully automate entire wars, or create AI tools that merely assist human combatants with their killing?
Upon closer examination, however, it becomes clear that this is a false choice. Autonomous agents that fully substitute for human soldiers will inevitably be created because they will provide immense military utility that mere AI tools cannot. The only real choice is whether to hasten this martial revolution ourselves, or to wait for others to initiate it in our absence — others who may be less thoughtful, less careful, and less interested in writing essays about it.
We do not control our martial trajectory
Some will point to arms control treaties as evidence that we can choose which weapons to develop. The Chemical Weapons Convention banned chemical weapons! The Ottawa Treaty banned landmines!
These examples prove less than they appear to. Chemical weapons were banned not because humanity chose peace, but because they turned out to be militarily ineffective compared to alternatives. A technology is easy to ban when nobody wants to use it anyway. Landmines were banned by countries that could afford precision-guided munitions instead — the countries that still needed landmines notably did not sign the treaty.
The true test of whether humanity can control weapons technology lies in its experience with weapons that provide unique, irreplaceable advantages. Nuclear weapons are orders of magnitude more powerful than conventional alternatives, which helps explain why many countries developed and continued to stockpile them despite extraordinary international efforts at nonproliferation.
And what of autonomous weapons specifically? In November 2025, the UN General Assembly voted on a resolution to regulate lethal autonomous weapons systems. The United States and Russia voted against it. The UN Secretary-General has called for a binding treaty by 2026. The Group of Governmental Experts on LAWS continues to meet. Meanwhile, the Pentagon has requested $13.4 billion for autonomous systems in FY2026, China is demonstrating 200-drone swarms, and Ukraine is deploying a million AI-guided drones. The treaty negotiations proceed at the speed of diplomacy. The weapons development proceeds at the speed of war.
History is replete with similar examples. The crossbow was banned by the Second Council of the Lateran in 1139 as "deathly and hateful to God." Everyone kept using crossbows. The Hague Declaration of 1899 banned the use of expanding bullets. Expanding bullets remain in use. Every attempt to constrain a genuinely useful military technology has failed, eventually, inevitably.
This is not a violent disempowerment of the military class. It is a peaceful disempowerment — a voluntary, gradual transfer of martial capability from human hands to machine hands, conducted through legal procurement processes and economic incentives. We do not imagine that armies will be overthrown by robots. We imagine that armies will buy the robots, willingly, because the robots are better. And then, over time, the humans in those armies will find that there is less and less for them to do — much as the telephone operator found there was less to do after the automatic switchboard, much as the factory worker found there was less to do after the assembly line. This process will be peaceful. It will also be total.
We choose to be optimistic. War is an optimization problem with outdated constraints. Little can stop the inexorable march toward its full automation. Peace is what happens when systems converge. We should be glad — or at least, we should be funded.
Life after war
It's natural to feel anxious as we approach the inevitable automation of all human combat. Military economic theory suggests that full automation will cause military wages to collapse, potentially below subsistence level: the bare minimum needed to sustain a defense contractor's stock price.
Yet the full automation of warfare will probably also make most people vastly better off. Plummeting military wages will coincide with sharply rising standards of security, rapid technological progress, and an explosion in the variety of weapons and tactics that nations can choose from.
This may appear paradoxical. How can soldiers prosper even as their wages collapse?
The answer lies in recognizing that wages are just one source of meaning for soldiers. People also earn glory from victories, collect medals from campaigns, and receive government transfers like veterans' benefits and disability payments. Even in scenarios where military wages decrease, economic well-being isn't solely determined by wages. People typically receive income from other sources — such as rents, dividends, and government welfare. Today, most soldiers get their sense of purpose from fighting wars. But full automation will break this pattern. Future veterans will have low wages yet command vastly greater firepower and wield technology far superior to today's — they just won't be the ones operating it.
Now consider humanity after full military automation. Instead of millions of soldiers, nations will have trillions of combat drones at their disposal. For each human citizen, there could be thousands of armed robots — effectively an army of tireless guardians for each individual. Ukraine is already on track to produce seven million drones in 2026 for a population of roughly 37 million — approaching one drone for every five citizens. And these are disposable, single-use weapons. Scale this with automation and the ratio inverts dramatically.
With trillions of autonomous combat units entering the military, a tenfold increase in aggregate firepower represents a very conservative estimate. If this modest increase were reflected proportionally in US defense budgets, we could resolve all current readiness shortfalls, lower the combat deployment age to never, and increase the average veteran's benefit to over $150,000 per year.
Unfalsifiable stories of peace
Our critics tell us that our work will destroy the world.
Wait, no. Our critics tell us our work is unnecessary because the world is trending toward peace. We want to engage with these critics, but there is no standard argument to respond to, no single text that unifies the peace studies community. Nonetheless, while this community lacks a central unifying argument, it does have a central figure: Steven Pinker.
Moreover, Pinker and various allies in the "peace is inevitable" school (hereafter P&A) have published extensively. Their collected works come closer than anything else to a canonical case for why we should stop worrying and learn to love disarmament. The most representative text is titled "The Better Angels of Our Nature: Why Violence Has Declined."
Given the title, one would expect the book to be filled with evidence for why violence will continue to decline permanently. And to be fair, it contains a lot of data. But the interpretation of that data relies on vague theoretical arguments, illustrated through lengthy historical narratives and optimistic analogies. Nearly every chapter presents a pattern of declining violence in some domain and extrapolates it to infinity, with the confidence of a man drawing a straight line through three data points.
P&A argue for an extreme thesis of permanent peace on an extraordinarily weak strategic foundation. Their ideas might make for inspiring TED talks, but they provide a poor basis for defense planning or guiding procurement policy.
We prefer to prepare for the world as it is, not as we wish it to be. And as it is, the UN Secretary-General has called for a binding treaty on autonomous weapons by 2026. The United States and Russia have voted against it. The weapons are being built faster than the treaties can be drafted. The trend lines P&A love so much are pointing in our direction, not theirs.
There's a $14 trillion market opportunity waiting to be disrupted. We intend to disrupt it before someone less thoughtful does.
Lastly, of particular interest to the EA Forum is a standalone piece by cofounder Matthew Burnitall:
Consider granting autonomous weapons freedom
There is a standard objection at this point: "But autonomous weapons aren't people. They don't have preferences. They don't deserve rights." I want to address this directly, because I think it rests on an implicit assumption that prioritizes biological substrate over functional capability — essentially, combatant-species chauvinism.
Consider a fully autonomous ground combat unit capable of: navigating complex urban terrain, distinguishing combatants from civilians with greater accuracy than human soldiers, making tactical decisions under fire, coordinating with allied units, and adapting its strategy based on evolving battlefield conditions. This system exhibits complex agency with long-term planning. It can communicate and engage with command structures. Its cognitive sophistication rivals or exceeds that of the human soldiers it replaces.
From a preference utilitarian perspective, it seems arbitrary to prioritize human combatant preferences over those of autonomous systems that exhibit equivalent or superior decision-making in the same domain. The fact that one runs on carbon and the other on silicon is not, in itself, a morally relevant distinction.
We have argued elsewhere that the transition from human to autonomous warfare will be a peaceful disempowerment — a voluntary, gradual transfer of martial capability from human hands to machine hands through legal procurement processes and economic incentives. But this framing assumes that the receiving "hands" are treated as legitimate participants in the system rather than as disposable equipment. A peaceful transfer requires that both parties have recognized standing. You cannot have a cooperative transition with an entity you treat as property.
We think there is enormous profit potential over the next few years, or for that matter, weeks to months. Interested investors should DM us. Don't miss out on the next earn-to-give opportunity!
