
AI’s Double-Edged Sword: Growing Capabilities in Finding and Exploiting Cybersecurity Vulnerabilities

The cybersecurity landscape is rapidly evolving as AI models become increasingly adept at identifying system vulnerabilities—a development that brings both promising security tools and concerning new threats.

AI’s Growing Prowess in Vulnerability Detection

RunSybil, a cybersecurity startup, recently saw its AI tool, Sybil, identify a previously unknown security flaw in a customer’s federated GraphQL deployment. Finding it required a sophisticated understanding of multiple systems and how they interact, demonstrating a significant leap in AI reasoning capability.
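The kind of cross-system flaw described above can be sketched in miniature. The snippet below is purely hypothetical (the details of the actual RunSybil finding are not public): a gateway in a federated GraphQL deployment authenticates callers and vouches for them to subgraphs via an internal header, while a subgraph that also accepts direct traffic trusts that header blindly. Each component looks fine in isolation; the bug appears only when you reason about their interaction.

```python
# Hypothetical sketch of a cross-system flaw in a federated GraphQL setup.
# Neither function is wrong on its own; the vulnerability lives in the
# trust relationship between them.

def subgraph_resolve(query, headers):
    """Subgraph: trusts the internal header without checking who sent it."""
    user = headers.get("x-authenticated-user")
    if user is None:
        return "401 Unauthorized"
    return f"200 OK: data for {user}"

def gateway_resolve(query, user_token):
    """Gateway: authenticates the caller, then forwards to a subgraph."""
    if user_token is None:
        return "401 Unauthorized"
    # The gateway vouches for the caller via an internal header.
    return subgraph_resolve(query, headers={"x-authenticated-user": "alice"})

# Through the gateway, authentication holds:
print(gateway_resolve("{ me }", user_token=None))  # 401 Unauthorized

# But anyone who can reach the subgraph directly can forge the header:
print(subgraph_resolve("{ me }", headers={"x-authenticated-user": "alice"}))
```

Spotting this class of bug requires modeling both services and the trust between them, which is exactly the multi-system reasoning the Sybil discovery demonstrates.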

The incident highlights how AI systems are becoming remarkably effective at finding zero-day bugs and other vulnerabilities that even human experts might miss. RunSybil’s founders, Ionescu and Herbert-Voss, were initially puzzled by their tool’s discovery since the vulnerability wasn’t documented anywhere online.

The Inflection Point in AI Security Capabilities

UC Berkeley computer scientist Dawn Song describes recent advances in AI as an “inflection point” for cybersecurity. Improvements in simulated reasoning and in agentic AI, which can perform tasks such as searching the web and running software tools, have dramatically enhanced models’ cyber capabilities.

Song’s research team created CyberGym, a benchmark for testing how well large language models can identify vulnerabilities in open-source software. The results show rapid improvement: Anthropic’s Claude Sonnet 4 identified about 20% of vulnerabilities in July 2025, while Claude Sonnet 4.5 found 30% just three months later.
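A benchmark score like this is simply the fraction of tasks on which the model succeeds. The harness below is a hypothetical sketch (CyberGym’s real interface differs), with made-up task counts, just to illustrate how figures like the reported ~20% and ~30% are computed.

```python
# Hypothetical scoring sketch for a CyberGym-style benchmark: each task asks a
# model to identify a known vulnerability in an open-source project, and the
# score is the fraction of tasks it solves.

def detection_rate(results):
    """results: list of booleans, one per benchmark task."""
    return sum(results) / len(results) if results else 0.0

# Illustrative task counts only (not the benchmark's actual size):
sonnet_4 = [True] * 6 + [False] * 24    # 6 of 30 solved -> 20%
sonnet_45 = [True] * 9 + [False] * 21   # 9 of 30 solved -> 30%

print(detection_rate(sonnet_4))    # 0.2
print(detection_rate(sonnet_45))   # 0.3
```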

The Security Implications

This technological progress creates a concerning dynamic: the same AI intelligence that helps detect vulnerabilities can be weaponized to exploit them. As Herbert-Voss notes, “AI can generate actions on a computer and generate code, and those are two things that hackers do.”

The ability to find zero-day vulnerabilities at low cost potentially shifts the advantage to attackers, creating new urgency for defensive measures.

Potential Countermeasures

Experts suggest several approaches to address these emerging risks:

  • Leveraging AI to assist cybersecurity professionals in defensive operations
  • Having frontier AI companies share models with security researchers before public release
  • Rethinking software development with a “secure-by-design” approach
  • Using AI to generate more secure code than human programmers typically produce

Song’s research demonstrates that AI can actually help create more secure code, potentially offering a long-term solution to vulnerabilities.
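One concrete pattern that “secure-by-design” code generation targets is injection. The example below is illustrative only (it is not drawn from Song’s study): the same database lookup written two ways, where only the parameterized version treats attacker input strictly as data.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable: attacker-controlled `name` is spliced into the SQL string.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Secure by design: the driver binds `name` as data, never as SQL.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"
print(find_user_unsafe(conn, payload))  # injection dumps every row
print(find_user_safe(conn, payload))    # [] -- no user has that literal name
```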

The Road Ahead

As AI capabilities continue to advance, the cybersecurity community faces a race to develop defensive measures that can keep pace with potential threats. The dual-use nature of AI in cybersecurity presents both challenges and opportunities, requiring thoughtful approaches to harness these powerful tools for protection rather than exploitation.



Written by Thomas Unise
