Artificial intelligence (AI) is fast becoming a linchpin of strategic military relevance. For the Army, AI integration has moved beyond the drawing board into tangible applications, whether through autonomous intelligence, surveillance and reconnaissance platforms, predictive logistics or real-time decision-support systems.
These tools already are shifting the tempo of combat and reshaping what readiness looks like in the modern era.
As a military intelligence officer, I’ve worked directly with AI-enhanced platforms like the Maven Smart System, which has proven how machine learning can ease the burden of information overload. Yet, despite these gains, gaps remain. Many AI systems hinge on constant connectivity—a luxury we often can’t afford in contested or denied environments. The truth is blunt: Tools that fail under pressure aren’t tools, they’re liabilities. For AI to genuinely transform tactical engagements, it must be robust enough to function reliably at the edge, not just at headquarters. The Army lacks the seamless hardware-software integration needed in disconnected spaces.
The Army must fast-track tactical AI development if it hopes to keep pace with rivals like China, whose military strategy centers on “intelligentised” warfare—a doctrine embedding AI from top to bottom. Unlike the U.S., which often leans on commercial off-the-shelf technology, China builds with end-state integration in mind. Whether that proves superior remains to be seen, but the clock is ticking.
The Army’s foray into AI now spans both battlefield operations and back-office systems. The push centers on three main goals: smarter decisions, agile force planning and resilient logistics. Determining where the force stands is essential before planning where to go.
Operational Tools
Platforms like Maven enhance intelligence, surveillance and reconnaissance (ISR) functions, boosting analyst throughput while preserving human oversight. With this tool, hours of surveillance footage can be combed through in minutes, not days. In parallel, large language models (deep learning algorithms that can generate content using large datasets, according to NVIDIA) quietly are revolutionizing planning and paperwork. Whether drafting operational orders or automating personnel records, these tools reduce administrative friction and free human bandwidth for what really matters.
Meanwhile, Army special operations is dipping its toes into generative AI for influence operations—think voice synthesis and tailored messaging for psychological operations. It’s a new frontier, one where persuasion and perception could be as important as firepower.
The Army isn’t just fielding shiny new toys—it’s also reworking the digital plumbing. Consolidated cloud contracts and resource streamlining cut costs and speed deployment. Task Force Lima, the War Department’s generative AI brain trust, makes sure all this innovation stays ethically grounded and mission-aligned.
Avoiding Roadblocks
But it’s not all smooth sailing. High-quality data remains scarce, especially the classified data that matters most. Add to that the sheer computational demands of these tools—many of which require more juice than deployed units can spare—and it becomes clear that scaling AI isn’t as easy as flipping a switch.
There’s also ethical terrain to navigate. The U.S. sticks to a “human-in-the-loop” doctrine for lethal decisions, a stance that may slow decision cycles compared with adversaries more willing to hand the reins to machines. It’s a noble principle, but one that comes with tactical trade-offs, and time will tell how the U.S. and its adversaries manage them.
The real test for AI lies not in sanitized labs or rear-echelon offices but in dirt-under-the-nails combat scenarios. Tactical units need tools that function independently, in chaos, under duress. AI, for instance, allows drones and unmanned vehicles to detect threats onboard, without needing cloud uplinks. That’s a game changer in jammed or isolated environments.
Similarly, predictive maintenance tools help commanders get ahead of breakdowns before they cost lives or missions. Then there’s fire coordination. Israel’s Fire Factory platform, for example, uses algorithms to pick targets and schedule strikes, balancing speed and precision in ways humans can’t match. On the cyber front, AI flags threats and suggests countermeasures in real time, essentially becoming a digital battle buddy in electronic warfare zones. Command-and-control platforms also are evolving. AI fuses sensor feeds, terrain data and troop movements into bite-sized decisions for overwhelmed commanders.
AI shouldn’t replace decision-makers; it should be a tool in the tool shed. The most promising systems don’t issue orders, they offer options, keeping humans in charge while helping them act faster and smarter. For a soldier engaged in a fluid firefight, that matters. To truly grasp what’s at stake, the Army must study how rivals wield this technology.

A Head Start
The Chinese military’s doctrine of “intelligentised” warfare isn’t just jargon, it’s a wholesale redesign of how the force fights. The Chinese are merging civilian and military technology efforts, giving them a head start in research, development and deployment. Their goal? Undermine U.S. kill chains and exploit our human-machine coordination gaps.
Russia, by contrast, banks on AI for asymmetric wins. It has tested autonomous drones and electronic spoofing tools in real-world combat, particularly in Ukraine and Syria, and is leaning hard into AI-generated disinformation, weaponizing synthetic media and social media bot networks to cloud judgment on and off the battlefield.
With rapid progress comes new vulnerabilities. AI may speed up decisions, but that acceleration can narrow windows for de-escalation, increasing the odds of miscalculation. Deepfakes and manipulated media threaten to warp situational awareness. Legacy systems, meanwhile, simply aren’t built to support AI, and upgrading them isn’t cheap or easy.
There’s also the specter of data poisoning—where adversaries inject bad data to mislead AI systems. It’s a subtle but serious risk that could turn our tools against us.
Taking Action
Winning tomorrow’s wars means acting today. Tactical AI can’t stay trapped in think tanks or pilot programs. It needs boots-on-the-ground integration. First and foremost, the Army needs tools that work when communications go dark. Investing in disconnected-capable AI should be a top priority. Testing these tools during combat training center rotations would provide the realistic conditions needed to refine them.
The ethical framework laid out by the War Department’s Chief Digital and Artificial Intelligence Office in 2023 should be upheld, but without letting red tape stall innovation. Keeping humans in the loop for lethal decisions isn’t just a moral stance, it’s part of what separates Americans from less scrupulous rivals. At times that restraint will slow us; whether it proves the right call, only time will tell. Just as important, educating the force—from privates to colonels—on AI’s limits and potential is essential.
Interoperability can’t be an afterthought. Whether operating within NATO or ad hoc coalitions, U.S. AI systems must play well with others. Partnered research and development efforts also could speed development and deepen strategic ties. The next war won’t be fought by the U.S. alone.
The traditional acquisition process is a mismatch for fast-evolving technology. The Army must embrace more flexible procurement pathways while clearly defining the kinds of AI tools it needs—in ISR, influence operations and maneuver support. AI at the battalion level will help ensure the intelligence officer can provide ground troops with timely and accurate information.
Sense of Urgency
AI isn’t coming to the battlefield—it’s here. The question is whether the Army can embrace it in a way that enhances combat effectiveness without compromising ethics or reliability. China’s integration model and Russia’s asymmetric gambits highlight the urgency. The U.S. must pivot from experimentation to execution, from PowerPoint briefings to battalion, company and platoon-level deployment.
That path demands more than technology. It requires cultural shifts, smarter governance, agile infrastructure and cross-alliance cooperation. Done right, tactical AI won’t just be another widget but a force multiplier defining the next era of warfare. As a military intelligence soldier, I can’t wait to use the latest and greatest. The clock’s ticking, and the stakes couldn’t be higher.
* * *
Capt. Stafford Harmond is a military intelligence officer assigned as the officer in charge of the intelligence staff section for 1st Battalion, 37th Armored Regiment, 2nd Brigade Combat Team, 1st Armored Division, Fort Bliss, Texas. He is currently serving at the U.S. southern border. He has deployed to Syria and Iraq. He has a bachelor’s degree in justice studies from Montclair State University in New Jersey, and is working on a master’s degree in intelligence studies from American Military University in West Virginia.