Synthetic Command: How Autonomous Systems Are Changing Military Hierarchy

Can AI be your XO, your S2, and your fires planner—at the same time?
From kill chain compression in Ukraine to Project Convergence experiments in the Mojave, autonomous systems are taking the wheel in modern warfare. Here’s how synthetic command is shifting military hierarchy, and what it means for your next deployment.

AI Image – Coded Kills

“You don’t need to replace the commander. Just build a better XO who never sleeps.”
– Field Test Officer, NATO ISR-AI Integration Team


Autonomous systems aren’t coming—they’re already here, and they’re learning fast. From AI-enabled kill chains in Ukraine to autonomous sensor-fusion nodes in the Indo-Pacific, the military hierarchy as we know it is undergoing a slow-motion detonation. This isn’t just about drones and targeting algorithms. It’s about the rise of synthetic command elements—AI agents that advise, execute, and sometimes overrule human judgment at the tactical and operational level.

This paper dives deep into how autonomous systems are evolving from tools to teammates. We explore doctrinal implications from FM 3-0, FM 3-90, ATP 3-04.15, and TRADOC Pam 525-3-1, and compare fielded and experimental systems from the U.S., NATO, China, and Russia. We also break down how command authority, mission planning, and kill chain execution are being restructured—sometimes quietly, often chaotically—by AI integration.


Doctrinal Terrain — AI in the Command Post

FM 3-0 (2022) and TRADOC Pam 525-3-1 both anticipate “decision dominance” through multi-domain integration. What they’re quietly prepping us for is something bigger: command augmentation by synthetic systems.

  • Kill Webs, Not Chains: In LSCO, kill chains are too slow. FM 3-90-1 shifts to layered kill webs with AI intermediaries autonomously queuing fires from drones, sensors, and SIGINT feeds.
  • ATP 3-04.15 shows how multifunctional aviation task forces already use AI to deconflict and prioritize ISR feeds from manned and unmanned systems.
  • Commanders don’t “direct” every engagement—they manage an ecosystem of semi-autonomous agents.
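The kill-web idea above (AI intermediaries merging and queuing fires from independent drone, sensor, and SIGINT feeds) can be sketched in a few lines. This is a purely illustrative toy, not any fielded system's logic; the scoring weights and field names are invented for the example.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FireMission:
    priority: float                              # lower value = higher priority (min-heap)
    target_id: str = field(compare=False)
    source: str = field(compare=False)           # e.g. "drone", "sigint", "radar"
    confidence: float = field(compare=False)

def score(threat: float, confidence: float, time_sensitivity: float) -> float:
    # Negated so the most urgent mission sits at the top of the min-heap.
    return -(threat * confidence * time_sensitivity)

def queue_fires(detections):
    """Merge detections from independent sensor feeds into one prioritized queue,
    with no human router between the feed and the fires cell."""
    heap = []
    for d in detections:
        heapq.heappush(heap, FireMission(
            priority=score(d["threat"], d["confidence"], d["time_sensitivity"]),
            target_id=d["id"], source=d["source"], confidence=d["confidence"]))
    return [heapq.heappop(heap) for _ in range(len(heap))]

# Three feeds report concurrently; the synthetic node orders them itself.
detections = [
    {"id": "T-01", "source": "drone",  "threat": 0.9, "confidence": 0.8, "time_sensitivity": 0.9},
    {"id": "T-02", "source": "sigint", "threat": 0.6, "confidence": 0.9, "time_sensitivity": 0.4},
    {"id": "T-03", "source": "radar",  "threat": 0.8, "confidence": 0.5, "time_sensitivity": 0.7},
]
ordered = queue_fires(detections)
print([m.target_id for m in ordered])  # T-01 first: highest combined urgency
```

The point of the sketch is structural: the queue, not a human, decides what the commander sees first, which is exactly the shift from kill chain to kill web.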

“We’re not deciding whether to fire. We’re deciding which AI’s recommendation to follow.”
– U.S. Army Fires Officer, Fort Sill AI/ML Task Force


Authority, Responsibility, and the Synthetic XO

Command has always been about responsibility—about who signs the order that sends steel downrange. But synthetic systems don’t carry rank or liability; they calculate probabilities and prioritize targets faster than any human can. As AI steps into the role of advisor, planner, and sometimes silent veto, the lines between human judgment and algorithmic suggestion blur.

The question isn’t just “What can AI do for us?” but “Who owns the kill when the machine calls the shot?”

U.S. forces treat AI as advisory

Systems like Lattice OS (Anduril) and Firestorm don’t pull the trigger themselves; they build a prioritized target deck, cue sensors, and recommend strikes based on real-time ISR. The human-in-the-loop still confirms the shot, but the clock is ticking, and many commanders report their “decision” is often just a quick validation of what the system already teed up. This keeps humans in control—on paper—but in practice, the AI’s recommendation often becomes the plan.
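The "human confirms, but the clock is ticking" pattern above can be made concrete. The sketch below is a hypothetical human-in-the-loop gate, assuming nothing about Lattice OS or Firestorm internals; the ranking formula, shot clock, and callback interface are all invented for illustration.

```python
import time

def build_target_deck(detections):
    """Hypothetical ranking: order targets by expected payoff.
    Not any real Lattice/Firestorm API."""
    return sorted(detections, key=lambda d: d["threat"] * d["confidence"], reverse=True)

def human_confirm(recommendation, decide, shot_clock_s=30.0):
    """Human-in-the-loop gate: the operator must affirmatively approve within
    the shot clock. A timeout counts as a decline, never a silent approval."""
    start = time.monotonic()
    approved = decide(recommendation)            # operator callback (UI, voice, etc.)
    elapsed = time.monotonic() - start
    return {"target": recommendation["id"],
            "approved": bool(approved) and elapsed <= shot_clock_s,
            "elapsed_s": elapsed}

deck = build_target_deck([
    {"id": "T-11", "threat": 0.9, "confidence": 0.7},
    {"id": "T-12", "threat": 0.5, "confidence": 0.95},
])
# The "decision" is often just validating the top of the deck the AI built.
decision = human_confirm(deck[0], decide=lambda rec: True)
print(decision["target"], decision["approved"])
```

Note the design choice that carries the doctrinal weight: timing out must fail closed. An interface where silence equals consent would quietly convert an advisory system into an autonomous one.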

China, per ATP 7-100.3

Chinese doctrine experiments with AI not just as a planning tool but as a tactical decision-maker. In PLA AI-led wargaming, the algorithm may determine the optimal maneuver and fires plans while human commanders simply enact the outputs unless they can clearly justify deviation, flipping the traditional command relationship on its head: commanders follow, rather than supervise, their synthetic advisors.

Russia, per ATP 7-100.1

Russia has deployed semi-autonomous systems in EW and counter-battery roles but lacks reliable integration across echelons. Its counter-battery radar and EW systems can detect and cue fires without higher approval, yet connectivity gaps, organizational friction, and a culture of centralized control keep brigade and division echelons from fully leveraging synthetic speed at scale, limiting their effectiveness in rapidly shifting LSCO environments.

“An AI-driven fire plan can hit 20 targets in 2 minutes. A human staff might hit 5. Which do you brief the general on?”
– Fires Planner, V Corps Wargame 2024


Real-World Disruptors — Case Studies from the Field

Theory is one thing. Watching algorithms compress kill chains while humans scramble to keep up is another. From Ukraine’s dirt roads to U.S. wargames in the Mojave, synthetic command isn’t a lab toy anymore—it’s a force multiplier and a friction point all at once. Here’s how real-world fights are showing us what happens when AI steps onto the battlefield, and what it means for those still writing doctrine in the rear.

Ukraine, 2023–2025:

  • Ukrainian forces use GIS Arta and Delta to feed AI-assisted targeting. Civilian drones call in fires faster than Russian battalions can clear them through top-down approval, and synthetic nodes compress kill chains to under 5 minutes.
  • Russian units relying on centralized approval (despite some AI-enabled SIGINT/ISR) suffer delays and misfires.

U.S. Experiments at NTC & Project Convergence:

  • AI-enabled targeting tools like Firestorm or STITCH significantly outpaced human-only targeting cells in wargames.
  • Friction emerges when commanders override AI recommendations—and lose.

Sociotechnical Shift — What Happens to Command Culture?

  • The platoon leader of the future might spend more time interpreting synthetic advisors than commanding Soldiers.
  • Battle staff roles may bifurcate into human and machine liaisons: “AI Whisperers” who tune model outputs, guardrails, and biases.
  • Decision-making authority may shift from rank to access—whoever holds the interface to the AI node holds the power.

“It’s not rank anymore. It’s who talks to the algorithm first.”
– U.S. Army CPT, Cyber Fires Cell


Recommendations for the U.S. and NATO

  1. Codify AI Authority Boundaries in Doctrine
    – Define when AI may suggest, veto, or override tactical decisions—especially in kill chain compression and sensor-fusion nodes.
  2. Build “Synthetic Command Liaisons” into the TOC
    – Train officers and NCOs to interface with and challenge AI systems intelligently—not just trust or ignore them.
  3. Stress AI-Adaptive Command at NTC and JRTC
    – Red team commanders with synthetic adversaries. Simulate AI saturation on both sides of the fight.
  4. Invest in Model Trustworthiness, Not Just Accuracy
    – AI models must explain their decisions, especially under fire. Black-box thinking will kill commanders’ confidence—and Soldiers.
  5. Observe and Exploit Adversary AI Doctrine
    – China will experiment fast. Russia will improvise under pressure. NATO needs OSINT and HUMINT on how they assign trust to synthetic assets.
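Recommendation 1 above, codifying when AI may suggest, veto, or override, could in principle be expressed as an explicit policy table rather than left implicit in engineering defaults. The sketch below is a minimal illustration under invented mission phases and target classes; real boundaries would be set by doctrine and ROE, not by code.

```python
from enum import Enum, auto

class Authority(Enum):
    SUGGEST = auto()   # AI recommends; human must affirmatively act
    VETO = auto()      # AI may block a human order it assesses as unsafe
    OVERRIDE = auto()  # AI may act without waiting for human input

# Hypothetical policy table keyed by (mission phase, target class).
POLICY = {
    ("counter_battery", "artillery"): Authority.OVERRIDE,      # seconds matter
    ("air_defense", "inbound_missile"): Authority.OVERRIDE,
    ("deep_fires", "c2_node"): Authority.SUGGEST,              # human decision retained
}

def allowed_authority(phase: str, target_class: str) -> Authority:
    # Fail restrictive: any case doctrine did not enumerate stays advisory-only.
    return POLICY.get((phase, target_class), Authority.SUGGEST)

print(allowed_authority("counter_battery", "artillery").name)  # OVERRIDE
print(allowed_authority("urban_clearance", "vehicle").name)    # SUGGEST (default)
```

The value of writing the boundary down, even this crudely, is auditability: after an engagement, the question "was the machine authorized to act here?" has a checkable answer instead of a shrug.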

Command by Code?

Autonomous systems won’t replace commanders, but they’re already rewriting what command looks like. The future battlefield may be run less by a single human than by a network of synthetic subordinates interpreting commander’s intent in real time, optimizing fires and maneuver, and making decisions too fast for humans to keep up.

The real question isn’t if AI will change the chain of command. It’s whether we’re adapting fast enough to stay in control of it.