AI agents are accelerating vulnerability discovery. Here’s how AppSec teams must adapt.
Summary
AI is rapidly accelerating vulnerability discovery, forcing AppSec teams to adapt by integrating AI into threat modeling, code review, and developer workflows to keep pace.
AI is finding security flaws faster than ever
Artificial intelligence is dramatically accelerating the discovery of software vulnerabilities. The principle that many eyes make bugs shallow now operates at machine speed and scale.
The critical question for security teams is whether they or malicious threat actors will find these flaws first.
AI red teaming is here
An autonomous AI system called XBOW recently topped HackerOne’s US bug bounty leaderboard. In just 90 days, it submitted over 1,060 vulnerability reports, outperforming thousands of human researchers.
These were not theoretical findings. Bug bounty programs have already used XBOW’s work to resolve 130 critical vulnerabilities, with hundreds more in triage. The system operates autonomously and can test thousands of targets simultaneously without rest.
HackerOne reports that autonomous agents submitted more than 560 valid vulnerability reports in 2025 alone. Known flaws that once required skilled human analysis are now discoverable at machine scale.
Threat modeling at AI speed
Major enterprises are deploying AI to keep pace with modern development. JPMorgan Chase developed an AI threat modeling system called Auspex.
It uses specialized prompts to guide AI through analyzing system architecture, identifying threats, and suggesting mitigations, compressing a process that traditionally takes weeks into minutes.
Auspex combines generative AI with expert frameworks and the bank’s institutional knowledge. It processes diagrams and descriptions to generate detailed threat matrices for developers.
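Auspex itself is proprietary, so the following is only a minimal sketch of the prompt-guided pattern it illustrates. It assumes a STRIDE framing and a hypothetical `call_llm` helper standing in for whatever LLM client your organization uses; none of the names below are JPMorgan's.

```python
import json

# Hypothetical LLM client; wire up your actual provider here.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("connect your LLM provider")

THREAT_PROMPT = """You are a threat modeling assistant.
System architecture:
{architecture}

Using STRIDE, return a JSON array of threats, each with fields:
component, threat_category, description, suggested_mitigation."""

def threat_model(architecture: str) -> list[dict]:
    # One structured prompt per system. Auspex reportedly chains several
    # specialized prompts; a single pass already yields a usable matrix.
    raw = call_llm(THREAT_PROMPT.format(architecture=architecture))
    return json.loads(raw)  # in practice, validate and repair the JSON

if __name__ == "__main__":
    arch = "Public REST API -> auth service (JWT) -> Postgres; S3 for uploads"
    for t in threat_model(arch):
        print(f"[{t['threat_category']}] {t['component']}: {t['suggested_mitigation']}")
```

The point of the pattern is that the prompt, not a human workshop, encodes the methodology, so the same pass can run against every architecture description you have.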
The new security playbook
Traditional application security is a bottleneck: code review backlogs grow, and vulnerabilities slip into production because manual review can't scale. A GitLab survey found teams already lose 7 hours per week to inefficient processes. AI changes this equation, but only if security teams redeploy effort from manual tasks to building AI-integrated workflows.
Several AI-driven strategies can help modern AppSec teams scale effectively:
- Build queryable security intelligence: Ingest every bug, pentest finding, and bounty report into a structured data store. This lets AI systems instantly surface similar vulnerability patterns across your entire codebase (see the first sketch after this list).
- Adapt models to your environment: Use Retrieval-Augmented Generation (RAG) to supply large language models with your organization's specific security standards and anti-patterns at inference time. Research shows this significantly improves code review accuracy (second sketch below).
- Integrate AI into developer toolchains: Embed analysis directly into IDEs and CI/CD pipelines so developers receive security guidance as they write code, not weeks later (third sketch below).
- Apply AI to threat modeling at scale: Follow JPMorgan’s lead. Aim for AI-generated threat models covering 100% of your systems, rather than expert-reviewed models for just 10%.
- Use AI to cut SAST noise: Let models reason about code context and data flow to suppress the false positives that plague traditional Static Application Security Testing and desensitize developers (final sketch below).
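On the first point, the data store matters more than the model. Here is a minimal sketch using SQLite from the Python standard library; the schema and the CWE-based similarity query are illustrative assumptions, not a reference design.

```python
import sqlite3

conn = sqlite3.connect("security_intel.db")
conn.execute("""CREATE TABLE IF NOT EXISTS findings (
    id INTEGER PRIMARY KEY, repo TEXT, file TEXT,
    cwe TEXT, summary TEXT, report TEXT)""")

def ingest(repo: str, file: str, cwe: str, summary: str, report: str) -> None:
    # Every bug report, pentest finding, and bounty submission lands here,
    # so later queries (human or AI) see the whole history at once.
    conn.execute(
        "INSERT INTO findings (repo, file, cwe, summary, report) VALUES (?,?,?,?,?)",
        (repo, file, cwe, summary, report))
    conn.commit()

def similar(cwe: str, exclude_repo: str | None = None) -> list:
    # Crude structured similarity: the same weakness class in other repos.
    # An AI layer can refine this with embeddings or code context.
    query = "SELECT repo, file, summary FROM findings WHERE cwe = ?"
    args = [cwe]
    if exclude_repo:
        query += " AND repo != ?"
        args.append(exclude_repo)
    return conn.execute(query, args).fetchall()

ingest("payments", "api/auth.py", "CWE-89", "SQL injection in login filter", "...")
print(similar("CWE-89", exclude_repo="payments"))
```

Even this crude query answers a question that used to take a human an afternoon: where else have we seen this class of bug?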
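For the RAG item, the augmentation step can be as simple as retrieving the relevant internal standards before the model ever sees the diff. A sketch with a naive keyword retriever follows; the standards snippets are invented examples and `call_llm` is again a hypothetical stand-in for your LLM client.

```python
# Hypothetical LLM client; wire up your actual provider here.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("connect your LLM provider")

# Invented examples; in practice these come from your standards repository.
STANDARDS = [
    ("hash", "Use the internal secure_hash() wrapper; raw MD5/SHA-1 are banned."),
    ("sql", "All queries go through the ORM; string-built SQL is an anti-pattern."),
    ("secret", "Credentials come from the vault client, never from literals."),
]

def retrieve(diff: str, k: int = 2) -> list[str]:
    # Naive keyword retrieval. Production systems use embedding search,
    # but the augmentation step that follows is identical either way.
    text = diff.lower()
    return [rule for keyword, rule in STANDARDS if keyword in text][:k]

def review(diff: str) -> str:
    context = "\n".join(retrieve(diff)) or "No specific standards matched."
    prompt = (f"Organization security standards:\n{context}\n\n"
              f"Review this diff for violations and vulnerabilities:\n{diff}")
    return call_llm(prompt)
```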
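Toolchain integration does not require an IDE plugin to start. One low-friction entry point, sketched under the same assumptions, is a pre-commit hook that reviews the staged diff; the HIGH/MEDIUM/LOW output contract here is our own invention, not any tool's real interface.

```python
#!/usr/bin/env python3
"""Pre-commit hook sketch: AI review of the staged diff."""
import subprocess
import sys

# Hypothetical LLM client; wire up your actual provider here.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("connect your LLM provider")

def staged_diff() -> str:
    result = subprocess.run(["git", "diff", "--cached", "--unified=3"],
                            capture_output=True, text=True, check=True)
    return result.stdout

def main() -> int:
    diff = staged_diff()
    if not diff:
        return 0
    findings = call_llm("Flag security issues in this diff, one per line, "
                        "prefixed HIGH, MEDIUM, or LOW:\n" + diff)
    print(findings)
    # Block the commit only on HIGH findings; everything else is advisory,
    # so developers get guidance without a hard gate on every nit.
    return 1 if any(line.startswith("HIGH") for line in findings.splitlines()) else 0

if __name__ == "__main__":
    sys.exit(main())
```

The design choice worth copying is the advisory default: real-time guidance earns developer trust faster than a gate that fails builds on every low-confidence finding.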
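Finally, the SAST noise problem largely reduces to giving a model the context the scanner lacks. A sketch of a triage pass follows; the `finding` fields and the YES/NO contract are assumptions about a generic scanner's output, not a real tool's API.

```python
# Hypothetical LLM client; wire up your actual provider here.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("connect your LLM provider")

def triage(finding: dict, source: str) -> bool:
    """Return True if the finding looks reachable by attacker-controlled data.

    `finding` is assumed to carry 'rule' and 'line' fields from your SAST
    tool; the prompt and YES/NO contract are our own simplification.
    """
    line = finding["line"]
    lines = source.splitlines()
    # Give the model only the nearby code, not the whole file.
    context = "\n".join(lines[max(0, line - 15): line + 15])
    answer = call_llm(
        f"SAST rule {finding['rule']} fired at line {line}.\n"
        f"Surrounding code:\n{context}\n"
        "Can attacker-controlled data actually reach this sink? Answer YES or NO.")
    return answer.strip().upper().startswith("YES")
```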
Prioritizing security for the AI era
Security leaders face a pivotal shift. The old model of adding more engineers to code review cannot match AI-accelerated development.
This transformation requires proactively redesigning workflows and rethinking team skills for human-AI collaboration. Organizations that strategically integrate AI into their security practices will build stronger defenses with greater efficiency.
The race to find vulnerabilities is now automated. Security teams must ensure their AI is running faster than the attackers'.