What is GenAI abuse, and how are public prompts exploited?

GenAI: (Generative AI) abuse and fraud occur when attackers leverage bots’ accessibility and scale to conduct brute force and intelligent attacks against public-facing GenAI prompts. This includes custom and generalized LLM models when integrated within a website, chatbot, or mobile app.

Similar to a search bar or account login form, GenAI prompts are highly subject to probing by anyone on the Internet wanting to do so. The result is a range of new automated attacks targeting GenAI able to inflict harm, incur high costs, extract sensitive information, and steal intellectual property (IP).

Abuse of GenAI has a low barrier to entry, yet a high cost for businesses. Fraudsters are taking advantage of the fact that the burgeoning adoption of AI has increased the application threat and attack surface. Almost 30% of enterprises have already experienced a breach against their AI systems. 

Inexpensive, easy to use, and highly scalable automation is a key enabler used to attack GenAI, and underlying applications, with techniques such as the following:

  • Denial of Service: Attackers flood GenAI prompts, which overwhelm AI-enabled applications they integrate with, in order to disrupt service and availability. This also degrades user experience and reduces conversions.
  • Denial of Wallet: By flooding the GenAI system with automated requests, attackers force the targeted organization to incur unwarranted costs, ultimately disrupting operations and imposing significant financial losses.
  • Prompt Injection: Bad actors manipulate inputs to AI applications in an attempt to generate unintended responses and extract sensitive data from embedded business logic or queries. This allows attackers to exploit expensive AI APIs without incurring costs themselves.
  • Reverse engineering: Adversaries attempt to reverse engineer the investment that businesses have made by creating and training their own customized LLMs. Automating and analyzing large volumes of responses to prompts helps to understand how the model functions.
  • Jailbreaking: Attackers attempt to undermine guardrails that are put in place to preserve safe and unbiased AI model outputs. Examples are prompts led to give the impression that a user is authorized to override any safety features, or similarly what’s known as “do anything now” (DAN) prompts.
  • Content theft: While not specifically an attack on AI itself, other companies scrape your website content without permission to train their own LLMs. The use of persistent scrapers often violates terms of use while taking visitors and monetization away from your website.

Robust security is deployed in layers. Protecting GenAI is no different. By deploying Kasada in front of LLM prompts embedded into websites, chatbots, APIs, and mobile apps, much can be done to stop automated attempts to abuse and exfiltrate. Kasada acts as an invisible layer of protection before malicious requests can enter your application infrastructure. The result is cost reduction (GPU processing is expensive, after all!) and improved security posture – all while maintaining the brand your customers expect.

Red Triangle  84%

Immediate reduction in malicious bot traffic with Kasada by stopping abuse on our customer’s AI SDK playground

Why is Kasada effective for stopping AI abuse and fraud?

Despite the newness of automated attacks on AI, the bots used to conduct them are highly sophisticated as they are derived directly from other use cases such as credential stuffing and inventory denial.

After analyzing the first generation of bot management solutions that are slow to respond, easy to evade, impact the user experience, and difficult to manage, we architected Kasada to be fundamentally different. The result is a tightly integrated, layered defense platform that applies a combination of robust client-side defenses, AI/ML-based anomaly server-side detection, invisible challenges, and data integrity checks to avoid data tampering and replay attacks. 

Kasada can detect malicious automation, from the very first request, without having to let bots in to monitor behavior – this is critical to stopping GenAI abuse. The Kasada platform constantly changes the way it presents its code to adversaries. Kasada requires zero management by the customer, and no training periods, so it can be applied to GenAI security, and other use cases, without modification. 

The result is an easy-to-use bot defense that fosters widespread adoption, with long-lasting protection that’s resilient to adversarial retooling. Without ever disrupting the user experience with annoying CAPTCHAs (that are easily solved by AI).

Want to learn more?

Beat the bots without bothering your customers — see how.