AI-Fueled Bug Bounty Boom Overwhelms Crypto Security Teams

Artificial intelligence has supercharged the pace of bug hunting across the crypto sector, but it is also burying security teams under a mountain of low‑quality reports and false alarms.

Bug bounty programs – which pay security researchers for responsibly disclosing vulnerabilities – have long been a cornerstone of cybersecurity in crypto. As protocols handle billions in user funds and rely on complex smart contracts, they depend heavily on independent researchers to spot flaws before attackers do.

The arrival of widely accessible AI tools has radically changed how those programs operate.

AI Supercharges Volume – and Noise

By automating code review and vulnerability scanning, AI systems now allow individual researchers to comb through enormous codebases in a fraction of the time. Where a single researcher might previously have reviewed one component manually, they can now ask an AI system to generate dozens of potential findings across an entire protocol.

That efficiency is showing up in the data. One major bug bounty platform reported around 85,000 *valid* submissions in 2025, a 7% increase from the previous year. Behind that figure stands an even larger wave of total reports – many of them driven or drafted by AI.

Yet the rise in quantity has not been matched by quality. Security teams are increasingly reporting a flood of “AI slop”: vulnerability reports that sound plausible and are often well written, but that describe issues which either don’t exist or have already been addressed.

Crypto Protocols Face a 900% Submission Spike

Cosmos Labs co-CEO Barry Plunkett described how dramatically things have shifted. According to him, the project’s bounty program has seen a roughly 900% jump in submission volume compared to the prior year, reaching about 20 to 50 reports every day. Within that surge are more genuine discoveries than ever before – but also a vast number of incorrect or trivial findings.

This deluge is stretching security resources. Each reported issue must be evaluated, reproduced, and triaged. Even a false alarm consumes valuable expert time. For smaller crypto teams, already under pressure to ship code and maintain infrastructure, simply keeping up with the inbound reports is becoming a serious operational challenge.

When AI Hallucinations Hit Security

The crux of the problem lies in how current AI models work. They excel at pattern recognition and can quickly flag code snippets that resemble known vulnerability patterns. But they also “hallucinate” – confidently outputting incorrect conclusions or invented details.

In the context of bug bounties, that means:

– Reports describing attack paths that are logically impossible
– Claims about missing checks or validations that actually exist
– Misinterpretation of protocol logic or cryptographic assumptions
– Recycled or slightly rephrased versions of previously reported issues

These AI-assisted reports tend to be written in authoritative language, making them harder to dismiss at a glance. Security engineers must carefully verify each claim, burning hours on findings that ultimately turn out to be non-issues.

Burnout and Backlash in the Open-Source World

The problem is not limited to crypto, but is particularly acute there due to the financial incentives and public nature of the code. Earlier this year, Daniel Stenberg – the creator of the widely used curl data transfer tool, which underpins many applications including blockchain infrastructure – announced he was shutting down his bug bounty program.

His explanation: he was worn out dealing with a surge of low‑value, AI-generated vulnerability reports. Stenberg described being inundated with findings that were either misunderstandings of how curl actually worked or entirely fictional issues that existed only in the AI’s analysis.

His decision highlighted a growing tension: bug bounties are essential to security, yet the wave of AI-driven spam risks making them unmanageable for maintainers.

Industry-Wide Spike in Payouts and Submissions

Kadan Stadelmann, blockchain developer and chief technology officer at Komodo Platform, has observed the same trend across multiple organizations. Both bug bounty submissions and actual payouts are rising, a sign that AI-empowered researchers *are* finding real issues that might have been missed before.

The paradox is clear:

– AI is *increasing* the number of legitimate, serious vulnerabilities uncovered.
– AI is *also* generating so much noise that teams struggle to locate those critical findings in time.

This dual effect is forcing protocol teams and security platforms to rethink their entire approach to vulnerability intake and triage.

How Bug Bounty Programs Are Adapting

Facing this AI-driven wave, Cosmos Labs and other projects have started to recalibrate how they run their bug bounty programs.

Key adaptations include:

Stricter scoring and severity frameworks
Submissions are now evaluated against tighter technical criteria. Reports with vague descriptions, missing proof-of-concept code, or speculative logic are quickly downgraded or closed.

Prioritizing trusted researchers
Teams are increasingly giving priority attention to participants with a proven record of accurate, high-impact findings. Brand‑new or anonymous accounts with AI‑generated language but no track record may receive lower priority in triage queues.

Partnering with advanced triage providers
Some organizations are working with specialized bug bounty providers that maintain dedicated security teams and more sophisticated triage pipelines, helping them filter, classify, and route high‑risk submissions faster.

This move toward more structured, gated, and reputation‑based programs mirrors a broader shift in the security industry toward curating who gets direct access to maintainers’ attention.
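The first of those adaptations, stricter intake criteria, can be automated at the front door. The sketch below is a hypothetical intake gate, not any platform's real API: the field names in `REQUIRED_FIELDS` are illustrative, and real programs define their own submission templates. The idea is simply that reports missing a proof of concept or reproduction steps get flagged before a human triager ever opens them.

```python
# Minimal intake gate: auto-flag submissions that lack required evidence.
# Field names are hypothetical; adapt them to a program's own template.
REQUIRED_FIELDS = ("description", "impact", "proof_of_concept", "reproduction_steps")

def screen_submission(submission: dict) -> tuple[bool, list[str]]:
    """Return (accepted, missing_fields) for a raw submission dict.

    A submission passes only if every required field is present and
    non-empty; anything else is downgraded for low-priority review.
    """
    missing = [f for f in REQUIRED_FIELDS if not submission.get(f, "").strip()]
    return (not missing, missing)

accepted, missing = screen_submission({
    "description": "Integer overflow in reward accounting",
    "impact": "Attacker can mint unbacked rewards",
    "proof_of_concept": "",  # vague claim, no PoC attached
    "reproduction_steps": "1. call stake()  2. call claim()",
})
print(accepted, missing)
```

A gate this simple obviously cannot judge technical merit, but it cheaply enforces the "no PoC, no priority" policy the section describes.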

AI as Both Problem and Potential Fix

Ironically, the same technology that is overwhelming bug bounty systems may also be the key to saving them.

Stadelmann argues that crypto teams will need to develop “AI deterrents” – automated filters and analysis systems that sift through incoming bug reports before a human ever sees them. Instead of manually reading hundreds of similar-looking claims, security engineers could rely on AI models to:

– Cluster duplicate or near‑duplicate submissions
– Detect low‑effort, template-based reports likely generated with minimal human review
– Automatically verify simple test cases or replay proofs-of-concept in a sandbox
– Flag high‑risk issues for urgent review based on code context and protocol-critical components

For decentralized projects with small core teams, this kind of automation may be the only realistic way to cope with the increasing volume.
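The first filter on that list, clustering near-duplicate submissions, can be sketched with nothing more than standard-library text similarity. This is a toy illustration under the stated assumption that duplicate AI-drafted reports share most of their wording; production systems would use semantic embeddings rather than character-level matching.

```python
# Toy near-duplicate clustering for incoming bug reports, using only
# Python's standard library. Sample report texts are invented.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough textual similarity between two report bodies (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def cluster_reports(reports: list[str], threshold: float = 0.8) -> list[list[int]]:
    """Greedily group reports whose text closely matches an earlier one.

    Each cluster is a list of indices into `reports`; the first index
    serves as the cluster's representative for human review.
    """
    clusters: list[list[int]] = []
    for i, text in enumerate(reports):
        for cluster in clusters:
            if similarity(text, reports[cluster[0]]) >= threshold:
                cluster.append(i)  # near-duplicate of an existing cluster
                break
        else:
            clusters.append([i])   # genuinely new report
    return clusters

reports = [
    "Reentrancy in withdraw() lets an attacker drain the vault.",
    "Reentrancy in withdraw() allows an attacker to drain the vault.",
    "The oracle price feed can be manipulated via flash loans.",
]
print(cluster_reports(reports))
```

Even this crude pass means a triager reads one representative per cluster instead of every rephrased copy, which is exactly the time saving the "AI deterrent" idea is after.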

The Rising Stakes for Decentralized Systems

Bug bounties are especially critical in crypto because of the nature of blockchain systems:

– Code is often immutable once deployed, particularly in fully decentralized protocols.
– Exploits are financially motivated, with attackers able to steal large sums within minutes.
– Attacks are highly public, damaging reputations and user trust overnight.

Past incidents have shown that a single overlooked vulnerability can result in losses of hundreds of millions of dollars. Against this backdrop, the cost of ignoring or mishandling legitimate bounty submissions is enormous.

This is why most major protocols are unwilling to simply shut down their programs, even in the face of mounting AI spam. Instead, they are working to harden and modernize their intake processes.

Best Practices Emerging for the AI Era

From conversations across the industry, a set of best practices is beginning to emerge for managing AI-driven bug bounty activity:

1. Clear submission guidelines
Protocols are tightening their requirements: reproducible steps, working proof-of-concept code, and a precise impact description are becoming non‑negotiable.

2. Mandatory originality and disclosure standards
Some teams now require researchers to explicitly confirm they have tested the issue themselves and are not merely forwarding AI-suggested code snippets without verification.

3. Layered triage workflows
Low‑severity or poorly documented reports may first pass through junior analysts or automated checks, reserving senior engineers for high‑confidence, high‑impact findings.

4. Reputation and scoring systems
Platforms increasingly weigh the historical accuracy of a researcher’s submissions when prioritizing incoming reports, subtly discouraging those who rely on indiscriminate AI‑driven guessing.

5. Training security teams to spot AI‑generated patterns
Repeated linguistic patterns, generic descriptions, and misconceptions of basic protocol architecture are becoming recognizable red flags for low‑quality AI‑assisted reports.
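Practices 3 and 4 above, layered triage and reputation weighting, amount to computing a priority score per report. The sketch below is a hypothetical scoring formula of my own construction; the field names, weights, and the neutral prior for unknown researchers are all assumptions, not any platform's actual algorithm.

```python
# Illustrative triage-priority score combining severity, evidence, and
# the submitter's historical accuracy. Weights are arbitrary assumptions.
from dataclasses import dataclass

@dataclass
class Researcher:
    valid_reports: int   # past submissions confirmed as real issues
    total_reports: int

@dataclass
class Report:
    severity: int        # 1 (low) .. 5 (critical), per the program's rubric
    has_poc: bool        # reproducible proof of concept attached
    author: Researcher

def triage_priority(report: Report) -> float:
    """Higher score = reviewed sooner by senior engineers."""
    if report.author.total_reports:
        accuracy = report.author.valid_reports / report.author.total_reports
    else:
        accuracy = 0.5   # unknown researcher: neutral prior, not zero
    poc_bonus = 1.5 if report.has_poc else 1.0
    return report.severity * poc_bonus * (0.5 + accuracy)

veteran = Researcher(valid_reports=40, total_reports=50)
newcomer = Researcher(valid_reports=0, total_reports=0)

queue = [
    Report(severity=5, has_poc=False, author=newcomer),  # dramatic claim, no proof
    Report(severity=3, has_poc=True, author=veteran),    # modest, well-evidenced
]
queue.sort(key=triage_priority, reverse=True)
```

Under these made-up weights, the veteran's modest but evidenced report outranks the newcomer's unproven critical claim, which mirrors the incentive the article describes: track record and proof, not raw claimed severity, buy a researcher the triager's attention.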

What This Means for Bug Hunters

For legitimate security researchers, AI is a powerful ally – but not a replacement for deep understanding.

Those who use AI most effectively tend to:

– Treat AI as a *copilot*, not an oracle, using it to generate ideas, narrow search areas, or assist with code review.
– Manually validate every suspected bug, confirming exploitability and real-world impact before submission.
– Provide clear, concise documentation and proof-of-concept exploits, demonstrating they truly grasp the underlying issue.

In the new environment, researchers who simply copy AI outputs into bug bounty portals without verification will increasingly see their reputation – and chances of receiving payouts – decline.

The Future of AI-Driven Security in Crypto

The crypto industry stands at an inflection point. AI has made security research more accessible and more powerful, but also more chaotic. Over the next few years, several trends are likely:

Convergence of offensive and defensive AI
The same advanced code-analysis models used by bounty hunters will be embedded into development pipelines and continuous integration systems, catching many bugs before they ever reach production.

Smarter, protocol-aware triage systems
AI tools will be trained on the specific logic and architecture of individual blockchains or DeFi protocols, enabling more contextual understanding of how critical a reported issue truly is.

Tiered and invitation-based bounty programs
Open programs may coexist with invite-only tiers reserved for high‑reputation researchers, balancing openness with signal‑to‑noise control.

Stronger alignment of incentives
Payment schemes might evolve to reward depth and originality rather than raw volume, discouraging shotgun AI submissions and encouraging higher‑quality research.

In the meantime, security teams must navigate a difficult reality: they can neither ignore AI nor fully trust it. The challenge is to harness AI’s strengths while building defenses against its weaknesses.

AI has undeniably accelerated the discovery of security flaws in crypto. Whether it makes the ecosystem safer overall will depend on how quickly protocols, platforms, and researchers adapt their practices to this new, noisy, and intensely automated era of bug hunting.