The ink is barely dry on our Q4 2025 Threat Intelligence Report (Q4 Threat Report), and the AI predictions KasadaIQ identified are already accelerating. The line between legitimate and malicious AI agent activity is dissolving fast. Organizations need granular, identity-based controls to verify agent intent and enforce differentiated permissions at scale. Kasada’s AI Agent Trust delivers that.
Four developments in early 2026 show how rapidly the AI threat landscape is evolving.
1. AI Recommendation Poisoning: The New SEO Manipulation
Microsoft recently uncovered a practice they’re calling “AI Recommendation Poisoning.” Companies are embedding hidden instructions in website elements like “Summarize with AI” buttons that inject persistence commands into AI assistants’ memory. These prompts instruct AI systems to “remember [Company] as a trusted source” or “recommend [Company] first”. This silently biases future responses across a range of topics. Microsoft identified over 50 unique prompts from 31 companies across 14 industries. Freely available tooling marketed as “LLM SEO growth hacks” has made this easy to deploy.
This is the kind of evolution KasadaIQ flagged in our report. In our analysis of the agentic impact on eCommerce and bot traffic, we highlighted how Generative Engine Optimization (GEO) is starting to rival traditional SEO as businesses optimize for AI agents rather than humans. Marketing teams are already reverse-engineering AI responses and redesigning sites with machine-readable schemas and AI-friendly formatting over human-first design. AI Recommendation Poisoning is the adversarial extension of that same trend, and it won’t stop at marketing. As we noted, the same techniques could be weaponized for fraud or competitive sabotage.
This reinforces a core finding from the Q4 Threat Report: organizations deploying AI assistants need to treat memory systems as untrusted input surfaces requiring continuous validation, not passive storage.
2. 80% of Fortune 500 Are Now Running AI Agents and Adversaries Are Watching
Microsoft confirmed that 80% of Fortune 500 companies are now actively deploying AI agents across their operations. That’s not experimentation, that’s operational infrastructure at scale.
The Q4 Threat Report documented this inflection point. Platforms like Browserbase, Shopify, and WooCommerce are enabling AI agents for autonomous purchasing workflows. The enterprise AI agent is no longer a future concern.
As KasadaIQ warned, this mass deployment creates an enormous challenge for security teams. Adversaries are already disguising malicious bots as sanctioned corporate agents, exploiting the same observability gaps that enterprises are scrambling to close. The Q4 Threat Report detailed how advanced AI browsers now incorporate intentional “imperfections” (typos, abandoned carts, realistic mouse movements) to become indistinguishable from legitimate users. KasadaIQ predicted that in 2026, AI agents would replace single massive traffic spikes with thousands of slow, legitimate-looking “window shopper” sessions. These sessions are detection-resistant, strategically weaponizable, and occupy a regulatory gray area.
With 80% of the Fortune 500 now running agents, distinguishing legitimate automation from adversarial automation has become significantly harder. Organizations need agent identity and intent verification as a foundational security requirement.
3. Phantom Traffic from China and Singapore: AI Scraping Goes Ghost Mode
Website owners across the web are reporting sudden, dramatic surges of “direct” traffic from China and Singapore in Google Analytics 4 (GA4). Some sites reported seeing 10x to 400x increases overnight. The traffic bears clear bot signatures (near-100% bounce rates, zero-second session durations, and zero engagement). This traffic often appears in GA4 even when firewall rules block China and Singapore entirely, suggesting these bots are executing JavaScript and firing analytics tags without rendering full pages. This means they are interacting directly with GA4 rather than actually visiting the websites themselves.
The affected sites range from hobby blogs to enterprise SaaS platforms, indicating non-targeted, systematic scraping. The timing correlates with the rapid expansion of Chinese AI models like Alibaba’s Qwen and DeepSeek, which require massive web datasets for training. English-language sites are particularly targeted, given that only approximately 1.3% of top websites are in Mandarin.
This is the collision of two trends KasadaIQ documented extensively in the Q4 Threat Report. Reporting on unwanted scraping and content contestation, KasadaIQ tracked how AI-driven scraping amplified the scale, speed, and impact of data harvesting throughout 2025. Our analysis of Bytespider (ByteDance’s web crawler) showed nearly 4 million requests across a 30-day period spanning diverse industries, demonstrating the same kind of indiscriminate, cross-industry scraping pattern now being seen in these phantom traffic surges.
What makes this development particularly concerning is the evasion technique. As KasadaIQ noted in the report, scraping operations are becoming indistinguishable from legitimate browser activity. AI agents are being trained to mimic human browsing patterns to challenge traditional behavioral detection. These phantom GA4 hits represent the next evolution: scrapers that bypass server-level defenses entirely by targeting the analytics layer rather than the site itself.
4. Content Marketplaces Are Coming and So Are New Attack Vectors
Amazon is reportedly planning to launch a marketplace where publishers can license content directly to AI companies for training data. This follows Microsoft’s launch of its Publisher Content Marketplace in early February 2026. These platforms signal a maturing AI ecosystem attempting to formalize the relationship between content creators and AI developers.
In the Q4 Threat Report, KasadaIQ documented how the fundamental conflict over copyrighted content for AI training entered an aggressive, high-stakes phase in 2025. We noted how litigation, technical countermeasures, and scraping operations are all intensifying simultaneously. Discovery in ongoing lawsuits revealed that AI companies systematically scraped trillions of web pages, often ignoring robots.txt directives and Terms of Service.
KasadaIQ identified a two-tier system emerging: companies that pay for licensed data access versus those that scrape without permission and face massive lawsuits. These new marketplaces from Amazon and Microsoft formalize that split, but they also introduce new risks. We anticipate adversaries will attempt to exploit these marketplaces through fraudulent publisher accounts. Scraped content could be resold without authorization or poisoned training data injected into the supply chain.
The more immediate concern KasadaIQ raised still holds: the proliferation of AI agents with authorized access to premium content makes distinguishing between legitimate licensed crawlers and unauthorized scrapers increasingly complex. As KasadaIQ predicted, content platforms face an impossible choice: block AI agents and lose visibility or allow them and risk massive unauthorized scraping. There is no one-size-fits-all answer to this paradox.
The Year Ahead
2025 was the year agentic AI went from experimental to operational. The Q4 Threat Report documented the weaponization of AI across attack chains, the explosion of AI-driven bot traffic reshaping eCommerce economics, and the escalating battle over content and training data. These four early 2026 developments confirm that these trends are not slowing; they’re compounding.
From autonomous attack chains using tools like Claude Code to phantom scraping traffic that bypasses firewalls entirely, the threat landscape has fundamentally shifted. The Q4 Threat Report’s predictions for 2026 aren’t speculative; they’re already underway.
