INNOVATION INSIGHTS

New research explores artificial intelligence’s role in the military-industrial complex

04-11-2025 | UNF staff

Artificial intelligence (AI) is often praised for its potential to improve everyday life. But what happens when this technology is integrated into military and defense systems? Could AI reduce harm or, instead, accelerate conflicts and increase global instability?

These are the types of questions recently explored by Professor Patty Zakaria, from the University of Niagara Falls Canada's Master of Data Analytics program. This summer, she enrolled in the AI Safety, Ethics, and Society program run by the Center for AI Safety in California, completing a project that examined AI’s role in the military-industrial complex.

For Innovation Insights, Professor Zakaria discusses this important research, and the ethical challenges AI poses in military and defense systems.

My interest grew out of two strands of my work. My doctoral research focused on nuclear proliferation and how technologies shape security dynamics. I later shifted to governance, anti-corruption, and transparency to sharpen my focus on oversight and accountability in high-risk domains. Recognizing how slowly countries tend to regulate AI, despite the potential for an AI arms race, I saw this as an urgent and natural area of research.

AI-enabled weaponry carries profound technical risks. These systems can compress decision timelines, lowering the threshold for using force and escalating conflicts at machine speed. They can also misidentify civilians as combatants, as seen in past drone strikes. Perhaps most concerning is their brittleness: in my research, I highlighted how such systems can fail unpredictably when confronted with scenarios outside of their training data. This brittleness makes them inherently unreliable in high-stakes environments like military targeting, where even minor errors can result in catastrophic consequences.

Beyond these technical vulnerabilities, I found an additional negative multiplier effect. AI-enabled technologies can reduce military casualties by allowing soldiers to operate remotely, such as through drones. If AI reduces the risk of soldier casualties, societies may become more tolerant of prolonged conflicts, even when civilian casualties remain high. This creates a dangerous dynamic where wars endure longer and the risks of escalation increase.

Military AI requires stricter oversight than civilian AI. Leaders need to ensure meaningful human control, transparency in how systems are developed and used, and accountability for their deployment. Cybersecurity is another big concern. If AI-enabled weapons are hacked or manipulated, the consequences could be catastrophic.

Ultimately, leaders must approach AI in the military-industrial context with tailored regulations and risk management strategies, recognizing that the stakes are far higher than in civilian domains.

The most critical steps apply across both civilian and military domains. AI must be built with strong safety principles, like redundancy, transparency, and fail-safes, to reduce the chance of catastrophic failures. Risk management frameworks, similar to those in other high-stakes industries, should be used to identify hazards, mitigate extreme risks, and prepare for rare but devastating events. These safeguards need to be paired with governance mechanisms that ensure accountability, ethics, and clear responsibility throughout an AI system’s lifecycle.

International cooperation is equally vital to prevent an AI arms race. Competitive pressures could push countries to deploy unsafe systems, a risk heightened in the military sphere where autonomous weapons raise issues of war, escalation, and global stability. Cybersecurity adds another layer of concern, as AI-enabled weaponry could be exploited or hijacked.

In short, safe development demands a layered approach that combines technical safeguards, governance, and global agreements. Without this, AI risks becoming both a civilian hazard and a destabilizing force in international security.

Governments, industry, and researchers can collaborate through multi-stakeholder frameworks that combine technical expertise with ethical oversight. This includes creating standards for testing, establishing joint international agreements, and ensuring independent audits of military AI systems to reduce risks of escalation and misuse.

I would encourage them to engage with both the technical and ethical sides of AI. Understanding coding and data science is important, but so is grappling with questions of governance, human rights, and risk management. Careers in AI need people who can bridge the gap between technical innovation and societal responsibility.

As conversations around AI continue to evolve, Zakaria’s research highlights the urgent need for thoughtful governance, international cooperation, and strong ethical oversight, especially when applied to military and defense systems. The challenges extend beyond technology and touch on human rights, accountability, and the ways societies view conflict itself.

By examining both the risks and opportunities, her work adds an important voice to the growing dialogue on how AI can be developed responsibly without compromising global security.