
In the absence of traditional nuclear treaties, researchers are proposing a new approach to monitoring the world’s nuclear arsenals using satellites and artificial intelligence—a system they call “cooperative technical means.”
The End of an Era in Nuclear Arms Control
For decades, the world’s nuclear powers relied on treaties to reduce nuclear weapons, bringing the global count down from over 60,000 in 1985 to approximately 12,000 today. However, with the recent expiration of the New START treaty on February 5 and growing international tensions, traditional arms control mechanisms are disappearing.
Matt Korda and Igor Morić from the Federation of American Scientists have outlined what they call a “Plan B” in their report “Inspections Without Inspectors.” Their proposal comes at a critical time when countries are building new nuclear weapons and trust between nations is at an all-time low.
How the Proposed System Would Work
The proposal leverages existing satellite infrastructure to monitor nuclear facilities remotely, eliminating the need for the on-site inspectors that countries increasingly refuse to admit. The system would combine:
- Satellite monitoring of ICBM silos, mobile launchers, and production facilities
- Coordinated verification where countries agree to specific inspection times
- AI systems trained to recognize patterns and detect changes at nuclear sites
- Human verification of AI-flagged observations
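The report does not specify how the AI component would be built, but the core task it describes, detecting changes at a known site between satellite passes and flagging them for human review, can be sketched with simple image differencing. The function below is a hypothetical illustration, not the proposed system: it compares two grayscale images of the same site and flags the pair for an analyst when enough pixels change significantly. A real pipeline would use trained models on georegistered, country-specific imagery, as the article's sources note.

```python
import numpy as np

def flag_changes(before: np.ndarray, after: np.ndarray,
                 threshold: float = 0.2, min_pixels: int = 50):
    """Compare two grayscale images of the same site.

    Returns (flagged, n_changed): flagged is True when at least
    min_pixels pixels changed in normalized brightness by more
    than threshold -- i.e., the pair should go to a human analyst.
    """
    diff = np.abs(after.astype(float) - before.astype(float)) / 255.0
    n_changed = int((diff > threshold).sum())
    return n_changed >= min_pixels, n_changed

# Synthetic 100x100 "satellite images": identical except a bright
# 10x10 patch simulating new construction near a silo.
before = np.full((100, 100), 120, dtype=np.uint8)
after = before.copy()
after[40:50, 40:50] = 250

flagged, n = flag_changes(before, after)
print(flagged, n)  # True 100
```

Even this toy version shows why the human-verification step in the list above matters: a pixel threshold cannot distinguish construction from snow cover or sensor noise, so the AI's role is to triage imagery, not to render judgments.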
Challenges and Limitations
Despite its potential, this approach faces significant hurdles:
First, effective implementation requires the nuclear powers themselves to agree to participate in the verification regime, potentially opening silos at predetermined times for satellite observation.
Second, AI systems require substantial datasets to function properly. As Sara Al-Sayed of the Union of Concerned Scientists points out, “You have to build these bespoke datasets for each country,” accounting for differences in how nations construct and maintain their nuclear infrastructure.
Third, the reliability of AI remains questionable. AI systems frequently fail, contain security flaws, and often operate as “black boxes” where even their designers cannot fully explain their functioning.
Finally, countries would need to agree on what exactly these AI systems would monitor—whether detecting objects, classifying observations, or tracking changes over time—which would require new rounds of negotiation.
A Stopgap Measure, Not a Solution
The researchers acknowledge this approach is imperfect. Al-Sayed questions the fundamental premise, asking: “If you believe automation is necessary, then you are in this paradigm where you feel like you need to catch every instance of your adversary cheating. How could parties even agree to negotiate if the assumption is that every action could be suspicious?”
Despite these limitations, Korda views the proposal as a form of triage—an imperfect but necessary bridge until more comprehensive solutions can be developed. “A successor to New START is not going to put us on the path towards disarmament,” he notes. “It’s just going to help us prevent a real spiral into hundreds more additional nuclear weapons being deployed.”
In a world where traditional arms control is collapsing, even an imperfect technical solution may be preferable to no monitoring at all.

