Commanders Must Be Technocrats: Lt Gen Vipul Shinghal Flags AI Revolution, Calls For Meaningful Human Control



The senior officer emphasised that the Indian armed forces are adopting AI-enabled systems across multiple areas. Image courtesy: X.com/@ANI

There is little left that artificial intelligence cannot do. From conceptualising and designing to even writing software, AI is significantly changing the world as we know it. The modern-day battlefield is no stranger to AI, with armies across the world adopting it for varied purposes.

From predictive maintenance to drones and other weapons, artificial intelligence is acting as a force multiplier, transforming warfare into a data-driven, accelerated and precision-based endeavour. As AI rapidly reshapes modern battlefields, a senior Indian Army officer has said future military commanders will need to become “technocrats”.

Deputy Chief of Army Staff Lieutenant General Vipul Shinghal highlighted the transformational impact of AI on warfare, warning that while technology can dramatically speed up decision-making, human judgement and ethical responsibility must remain central to military operations.

Speaking at the Synergia Conclave in New Delhi, the senior officer outlined how AI is already changing combat dynamics, from surveillance to targeting, and why military leadership must evolve alongside the technology. With this, he called for the rise of commanders as leaders capable of understanding algorithms, data flows and machine-generated decisions.

AI is changing the speed of warfare

According to Lt Gen Shinghal, one of the most profound changes brought by AI is the dramatic compression of decision-making timelines on the battlefield. Modern AI systems can now combine data from drones, satellites, ground sensors and real-time surveillance feeds.

Within seconds, these systems can detect a target, analyse the threat and recommend military action. “The time between spotting a target and the system suggesting action is very small,” Shinghal said. While this capability gives armed forces a significant operational advantage, it also creates immense pressure on commanders to make split-second decisions in high-intensity combat scenarios.

Why army commanders must become ‘technocrats’

Lt Gen Shinghal stressed that military leaders can no longer rely solely on traditional command skills. Instead, commanders must understand the technological architecture behind AI systems, including how algorithms process battlefield data, what datasets are being used, and whether the system’s analysis could be manipulated or flawed.

“More and more commanders have to start becoming technocrats and understand what is happening inside the system,” he stated at the conclave.

Without that understanding, commanders risk blindly trusting machine recommendations that may not reflect the complete ground reality.

The battlefield dilemma: Trusting AI vs human judgement

AI-driven decision-support tools create a new ethical and operational dilemma for commanders. If a system recommends a strike and the commander fails to act quickly, the opportunity could be lost, potentially allowing an adversary to escape or launch an attack.

But if the commander acts on AI advice that turns out to be wrong, the consequences could be severe. “If he doesn’t press that button and act, he may lose the opportunity. If he does and the decision is wrong, then where is the moral buffer?” Shinghal asked.

He emphasised that AI systems cannot carry moral responsibility; that burden will always remain with the human commander.

When AI can get it wrong

The Deputy Army Chief illustrated the risks with a hypothetical scenario. An AI system might detect movement in a designated conflict zone and assume the presence of enemy troops. But the system may not know that a civilian evacuation is taking place in the same area.

In such cases, only human judgement can prevent a mistaken strike with potentially catastrophic consequences. “The commander pauses and asks, ‘What does the system not know?’” Shinghal said.

The Indian armed forces are increasingly integrating AI-enabled systems across multiple operational domains, including surveillance and reconnaissance, battlefield intelligence, logistics planning, and inventory and resource management. These technologies are expected to enhance operational efficiency, situational awareness and precision.

“As far as the Indian armed forces are concerned, we are fully aligned with the transformational nature of AI,” Shinghal said.

‘Meaningful human control’ is essential

Despite the benefits of AI, the Army has emphasised that lethal decisions cannot be fully delegated to machines. Even highly advanced systems with 90% accuracy rates still leave room for potentially devastating errors. “Even with 90% accuracy… that 10% is too dangerous to be allowed to operate automatically,” Shinghal warned.

Military systems must therefore ensure what is globally known as “meaningful human control”, meaning commanders must always have the ability to override AI recommendations, abort or delay a strike, or even intervene in automated processes.
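Purely as a conceptual illustration (none of these names, types or rules come from any actual Indian Army system), the principle described above can be sketched as a design in which the machine may only recommend, while approve, delay and abort decisions are typed as explicit human inputs and the default path is always to hold fire:

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    """A commander's explicit choices; there is no 'auto' option."""
    APPROVE = "approve"
    DELAY = "delay"
    ABORT = "abort"


@dataclass
class Recommendation:
    """What the AI is allowed to produce: a suggestion, never an action."""
    target_id: str
    confidence: float  # model's estimated probability of correct identification
    rationale: str     # explanation attached to the recommendation


def engage(rec: Recommendation, human_decision: Decision) -> str:
    """Nothing proceeds without an explicit human decision; any path
    other than an explicit APPROVE results in holding or aborting."""
    if human_decision is Decision.APPROVE:
        return f"engaging {rec.target_id}"
    if human_decision is Decision.DELAY:
        return "holding: commander requested more information"
    return "aborted by commander override"
```

The design choice this hypothetical sketch encodes is that the override, abort and delay paths are first-class outcomes rather than exceptions, so "meaningful human control" is a property of the interface itself, not a procedure layered on top.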

He also highlighted a key concern: technological sovereignty in AI systems. For military AI to be trusted, the armed forces must control the entire technological ecosystem, including data sources, AI models, networks and hardware infrastructure. Without such control, the system could be vulnerable to external manipulation or cyber threats.

“Data, models, networks and hardware need to be there. Otherwise the commander cannot have trust in the system,” Shinghal said.

‘Black Box’ to ‘Glass Box’ AI

Transparency in how AI systems reach conclusions is also critical. Many AI models function as “black boxes,” producing results without clearly explaining how they arrived at them. Lt Gen Shinghal argued that military systems must move toward “glass box” AI, where commanders can clearly understand the reasoning behind machine recommendations.
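To make the distinction concrete — strictly as a hypothetical sketch, not a description of any fielded system — a "glass box" recommendation is one that carries its own evidence and known gaps alongside the verdict, so a commander can ask exactly the question raised earlier: what does the system not know?

```python
from dataclasses import dataclass


@dataclass
class GlassBoxRecommendation:
    """A recommendation that exposes its reasoning, not just its verdict."""
    action: str
    evidence: list[str]    # inputs the model actually used
    known_gaps: list[str]  # inputs it lacked (e.g. no civilian-movement feed)
    confidence: float


def summarise(rec: GlassBoxRecommendation) -> str:
    """Render the verdict together with what it was and was not based on."""
    lines = [f"Recommended action: {rec.action} ({rec.confidence:.0%} confidence)"]
    lines += [f"  based on: {item}" for item in rec.evidence]
    lines += [f"  NOT known: {gap}" for gap in rec.known_gaps]
    return "\n".join(lines)
```

In this sketch the opaque step (how the model weighed its inputs) still exists, but the interface refuses to separate the verdict from its evidence and blind spots, which is the practical sense in which a black box becomes a glass box.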

“The black box has to become a glass box,” he said.

Despite rapid technological change, the Army stressed that ethical principles must remain central to the use of force. “In the Indian context, we have always believed that shakti must go hand in hand with dharma, force must go hand in hand with righteousness,” Shinghal said.
