History is filled with individuals and groups using the element of surprise to establish a competitive advantage. Often these changes in tactics or unusual strategies have been employed by the underdog. Forced to rethink the status quo, underdogs innovate and catch their stale, complacent competitors off guard.

The story of David and Goliath is a classic example. The overconfident Goliath was taken by surprise when his smaller, weaker opponent brought a sling to the battle. Heavily reliant on his physical strength, Goliath was unprepared for this tactic, and he paid the ultimate price.

The challenge of protecting web applications serves as a modern-day example. The battleground between attackers and protectors has been shaped by traditional tactics. The typical protector’s toolkit is filled with reactive tools:

  • Web application firewalls (WAFs) – to protect against known malicious attacks
  • IP/network/user-agent blocking – to reactively disrupt or slow down an attack
  • Rate limiting – to control the consumption of resources
  • Geo-blocking – to limit the playing field
  • Basic bot detection – CAPTCHAs and simple JavaScript (JS) inspection.

Attackers are prepared for this battle. For each defensive (reactive) tactic, they have developed evasion strategies:

  • WAF rules only protect against known malicious payloads, so attackers either invest time in discovering an exploit that is not checked or leverage a benign payload
  • IP/network blocking is evaded when attackers simply rotate through a proxy network
  • Rate limiting is evaded when attackers control the requests-per-IP rate and distribute the attack more broadly
  • User agents, or any other header attribute, are an unreliable detection signal because they are easily spoofed
  • Geo-blocking is inherently inaccurate and limited by the defender’s need to allow access to their customer base, leaving attackers plenty of scope for access
  • CAPTCHAs and basic JS inspection are both easily bypassed via various mechanisms.
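Several of these evasions take only a few lines of code. As an illustrative sketch (the proxy addresses and User-Agent strings below are made up, not taken from any real attack), a bot can pair every request with a fresh proxy IP and a plausible browser User-Agent, sidestepping both IP blocking and header-based detection at once:

```python
import itertools
import random

# Hypothetical pool of proxy endpoints, e.g. from a commercial rotation service.
PROXIES = [f"http://203.0.113.{i}:8080" for i in range(1, 101)]

# A handful of real-looking browser User-Agent strings to spoof.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 "
    "(KHTML, like Gecko) Version/17.0 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64; rv:121.0) Gecko/20100101 Firefox/121.0",
]

proxy_cycle = itertools.cycle(PROXIES)

def next_request_profile():
    """Return the proxy and headers a bot would use for its next request."""
    return {
        "proxy": next(proxy_cycle),  # a new source IP for every request
        "headers": {"User-Agent": random.choice(USER_AGENTS)},  # spoofed UA
    }

# 100 requests from 100 distinct source IPs, each with a believable browser
# header: per-IP blocking and User-Agent checks see nothing to act on.
profiles = [next_request_profile() for _ in range(100)]
```

Any IP blocklist or User-Agent filter inspecting this traffic sees one request per address, each claiming to be an ordinary browser.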

When an attacker focuses on your website, they expect you to defend with the status quo, and they come prepared with the tools to evade those detections.

Kasada regularly sees this in action and provides the element of surprise that catches attackers off guard.

Here are a few examples of Kasada disrupting attacks.

Example 1: A vulnerability scan

This Kasada customer was recently targeted by a vulnerability scan. The customer positioned Kasada between a bring-your-own (BYO) CDN and their application. The CDN provided all the basic tools, yet 79% of the attack made it through to Kasada.

Our ability to detect malicious automation at the source of the attack completely disrupted the attack.


Example 2: An application DDoS attack

Another customer also positioned Kasada between a BYO CDN and their application. The CDN provided all the basic tools, yet more than 95% of an attack made it through to Kasada.

The attack was clearly crafted to evade traditional detection: it used benign POST requests and was broadly distributed across a large number of proxies.

Kasada’s ability to detect malicious tools prevented 100% of the attack from hitting the customer’s application.


Example 3: Data scraping

Kasada’s advanced visibility allows us to observe attackers building bots.

In this recent example, an attacker used a headless browser to launch an attack. When it failed, the attacker opened a normal browser and the developer console, and inspected the requests while manually browsing the target site.

Seeing nothing unusual, the attacker launched the headless browser again, only for it to fail once more.

You can see the attacker attempting to counter Kasada’s element of surprise.


Example 4: Residential Proxy Networks

These networks are a thorn in the traditional defender’s side. Attackers combine the power of automation software with a highly distributed network of residential proxies, driven by a proxy rotation service, so that each individual IP sends only a single request.

The only way to defeat this deceptive behaviour is to detect the bot on the first request.

Anything less provides the assailant with an opportunity to successfully attack your site.

One such attack consisted of 1.2 million IP addresses, each sending a single request.
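A toy simulation makes the blind spot concrete. Assuming a per-IP threshold of five requests per window (the threshold and IP counts here are illustrative, not Kasada's), a single-source flood is almost entirely blocked, while a one-request-per-IP attack passes through untouched:

```python
from collections import Counter

def simulate(requests_by_ip, per_ip_limit=5):
    """Count how many requests a naive per-IP rate limiter lets through."""
    seen = Counter()
    allowed = 0
    for ip in requests_by_ip:
        seen[ip] += 1
        if seen[ip] <= per_ip_limit:  # block once an IP exceeds the limit
            allowed += 1
    return allowed

# Single-source flood: 10,000 requests from one IP -> only 5 get through.
flood = ["198.51.100.7"] * 10_000
print(simulate(flood))        # prints 5

# Distributed attack: 10,000 IPs sending one request each -> every single
# request gets through, because no IP ever exceeds the limit.
distributed = [f"10.0.{i // 256}.{i % 256}" for i in range(10_000)]
print(simulate(distributed))  # prints 10000
```

Because every source IP stays far below any sane threshold, the rate limiter contributes nothing; only a verdict on the very first request stops this traffic.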