Three months ago, an average of six paid AI accounts were sold each day on the criminal marketplaces KasadaIQ monitors. In Q1 2026, that number was 3,845, a 640x increase against essentially flat supply. It’s the clearest signal we’ve seen that premium AI capabilities are no longer an adversary experiment, but operational infrastructure.
The Q1 2026 KasadaIQ Threat Intelligence Report unpacks what that shift means across the automated threat landscape. It’s not the story most of the industry is telling. Most vendors are still framing AI as a tactic: something adversaries use to write better phishing emails or generate better lures. KasadaIQ’s data says something more fundamental has happened.
AI as a tool, a target, and a commodity
Adversaries are now running on AI.
As a tool, AI is maturing fast in adversary hands. In Q1, KasadaIQ observed a surge of novices in botting communities citing “vibe coding” and no-code agent builders as enablers, with some claiming to have produced functional bots in under an hour. Mentions of AI skills in adversary job ads across communities KasadaIQ monitors jumped 248% year-on-year. At the same time, a stigma-like hesitancy toward AI persists among seasoned developers. Some fraud operators still rely on spreadsheets. The barrier to entry is falling fast, but unevenly.
As a target, AI systems are creating new attack surfaces. KasadaIQ is tracking an emerging class of threats aimed at AI agents themselves: commercialized memory poisoning toolkits, infostealers developed specifically to exfiltrate AI agent configuration files and campaigns designed to harvest the content of AI prompts and generated outputs. The threat model has expanded from stealing what agents know to hijacking what agents can do.
As a commodity, AI accounts themselves have become high-value merchandise. That’s the 640x number. Adversaries are paying for premium AI access at scale because free-tier capabilities aren’t enough anymore.
Verification is a price point, not a barrier
The other shift worth naming is what’s happening in credential markets. 13.2 million account sales on criminal marketplaces in Q1 were tagged “verified,” “KYC,” or “2FA.” Those accounts generated $24.6 million in observed revenue.
This matters because most defensive investment still treats verification as a control. Adversaries treat it as a product line. KasadaIQ is tracking organized fraud groups advertising verification bypass services across 50+ platforms: selfie liveness bypass, document templates, 2FA interception bots, KYC bypass spanning 250+ countries.
One operator KasadaIQ tracks, who we call Casio Carl, advertises ready-made PayPal accounts with identification verification already set up, selfie liveness bypass (“head rotation” as an example), passport templates and utility bills from 50+ countries, US SSN card templates, and “fraud starter kits” with step-by-step browser setup guides and face-to-liveness tutorials. Another adversary, Maple Forge, offers a custom Canadian synthetic identity with an allegedly unused Social Insurance Number, a credit-checking service account and a filed credit card application for around $200. Pay another $200 and you get postal operator verification, a physical bank card shipped to a drop location and a SIM card for 2FA.
These aren’t lone operators. According to LexisNexis’s 2026 Cybercrime Report, synthetic identity fraud has now overtaken true identity theft globally for the first time; an 8x increase year-on-year. Fabricated identities don’t have a victim to raise the alarm. They’re purpose-built to pass verification checks at scale, and operators like Casio Carl and Maple Forge are supplying the documentation to back them up.
The insider threat splintered
The report also tracks a shift in how the insider threat is evolving. It used to be one problem. Now it’s two.
Human insiders haven’t gone away. KasadaIQ continues to observe retail employees posting in reselling communities about “backdooring” hype items like Pokémon card blisters straight off their own shop floors. In parallel, organizations are standing up AI agents with employee-equivalent access: API keys, persistent credentials, the ability to act autonomously. That’s a new class of insider, one without the judgment to recognize manipulation. Adversaries have already figured this out: the commercialization of memory poisoning toolkits and infostealers targeting AI agent identity both point at the same emerging surface.
What this means for defenders
Three takeaways worth acting on:
- Govern AI agents like service accounts. Scope to least-privilege, log actions comprehensively, build behavioral baselines. Adversaries are already targeting agent credentials.
- Detect identities, not just credentials. Fabricated identities backed by $200 document packages pass most traditional verification checks.
- Invest in post-authentication behavioral monitoring. Verification is where the attack begins, not where it ends.
The full Q1 2026 KasadaIQ Threat Intelligence Report, including per-industry breakdowns and the public predictions tracker, is available here.
