I started Kasada in 2015.
Which still feels a bit ridiculous to say, because at the time I’d basically just finished high school. In between, I spent some time at Macquarie Bank, where my job was to look for weaknesses in systems: try to break things, understand how they could be abused, learn what the failure modes were.
And honestly, it was one of the best ways possible to learn. You get past theory very quickly and into reality.
I learned the internet is fundamentally adversarial. If there’s money on the line, people will find a way to exploit whatever you’ve built.
And a lot of the tools big companies were relying on back then just weren’t going to hold up. They were designed for the threats that existed years earlier, and the problem was already moving.
So I started Kasada.
The Early Days
In the beginning, the focus was pretty straightforward.
Bots were scraping sites, buying inventory before real customers could, stuffing accounts with stolen credentials, doing all of it at a speed and scale most teams couldn’t handle with the tech stacks they had in place.
The typical response was to introduce more friction: CAPTCHA everything, block IPs aggressively, slow everyone down.
Which sort of works, but it also makes the experience worse for legitimate users, and that always felt like the wrong tradeoff to me.
The goal with Kasada from day one was to stop attackers without punishing real customers. Make it incredibly difficult to abuse a digital business, while keeping things frictionless for the people you actually want on the site.
The first version of Kasada was built in a shipping container under the Sydney Harbour Bridge, which sounds like a founder cliché, but it was literally true.
It was a tiny team of us trying to solve a problem we were seeing firsthand.
And we weren’t just trying to build another anti-bot tool.
We wanted to shift the economics of an attack: make it so hard and so annoying to operate that fraudsters would move on.
Ten Years Later
Fast forward ten years, and Kasada has grown beyond anything I could have imagined.
Today, we protect teams at Vercel, Hyatt, Canada Goose, and hundreds of others globally.
But what’s been most interesting isn’t just the growth. It’s how much the problem itself has evolved.
Bots are still a huge part of it, and AI is obviously accelerating everything. We’re at this weird inflection point where the same tools that make the internet easier to operate (automation, APIs, AI) are also creating entirely new vectors for abuse.
Attackers don’t just hit “buy” anymore.
They create identities, farm incentives, abuse reset flows, and exploit gift cards and refunds, turning business logic into a full-on profit center and moving fraud upstream into accounts, incentives, and trust itself.
And once we stopped the bot automation, adversaries shifted. Abuse became more manual and targeted, and in some ways more damaging, because it’s harder to see.
Things like reseller schemes where humans are involved because the margins are still high enough. Promo abuse. Return fraud. Account takeovers. Coordinated rings that are part automation, part human effort.
That’s where we realized bot mitigation, while necessary, was no longer sufficient on its own. Some key customers also came to us asking whether we could solve the problems that fell in between.
Why we exist
Bot mitigation is fundamentally about the edge. It answers a very important question: Is this session automated?
But modern digital risk is much more about identity and what happens over time.
The questions our customers kept asking us were: Who is actually behind this? Is this the same actor coming back under different accounts? Is this guest checkout legitimate, or just the next throwaway identity in a larger fraud operation?
That’s the direction customers pushed us in, and the big one was a Fortune 100 company that sees extremely sophisticated attacks.
We came in initially to solve their bot problem: bots buying up sneakers to resell. And we made huge progress there.
But then they said: okay, great, now we’re seeing people doing the same thing manually because it’s still profitable enough.
In some cases, it’s even organized labour – people being paid to do parts of the fraud journey. Mechanical Turk-style operations.
And what clicked for us was that the hardest part of fraud is linking behaviour back to the actor and understanding what they’ll do next.
Most systems only see fraud when it’s already cost you money. We want to catch it before it even gets the chance.
Launching the Kasada Fraud and Digital Trust Platform
The interesting thing is, we’d already built the hardest part of the foundation.
Bot mitigation requires unbelievably high-fidelity device telemetry, which reveals the truth about what’s happening on the client.
We realized that if you marry that up with user context (email, account behavior, checkout intent), you suddenly have something much more powerful.
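To make that idea concrete, here’s a deliberately toy sketch of what “device truth plus user context” can look like as a decision. Every name, field, and threshold below is invented for illustration; this isn’t Kasada’s API or detection logic, just the shape of the idea.

```typescript
// Hypothetical types and thresholds, invented for illustration only.
interface DeviceTelemetry {
  deviceId: string;      // stable client fingerprint
  isAutomated: boolean;  // verdict from the bot-defense layer
  integrityOk: boolean;  // did the client's signals pass tamper checks?
}

interface UserContext {
  accountAgeDays: number;
  recentPasswordResets: number;
  checkoutAttemptsLastHour: number;
}

type Verdict = "allow" | "review" | "deny";

// Device truth gates the decision; account behaviour shades it.
function assess(device: DeviceTelemetry, user: UserContext): Verdict {
  if (device.isAutomated || !device.integrityOk) return "deny";

  let risk = 0;
  if (user.accountAgeDays < 1) risk += 2;        // throwaway identity?
  if (user.recentPasswordResets > 3) risk += 2;  // reset-flow abuse?
  if (user.checkoutAttemptsLastHour > 10) risk += 3;

  return risk >= 4 ? "review" : "allow";
}
```

The point isn’t the thresholds. It’s that neither input alone answers the question: device telemetry without account context can’t see manual abuse, and account signals without device truth can be faked.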
Kasada as a platform
That’s what led us to build what we’re launching today:
Account Intelligence and the Kasada Fraud and Digital Trust Platform.
It brings together bot defense, identity continuity, fraud prevention, and threat intelligence in one place, so teams can actually see the full picture of who is on their platform, what’s happening, and what risk is building over time.
The platform is built in three layers.
Bot Defense is the foundation. It stops unwanted automation and manages AI agents across web, mobile, and APIs — real-time, invisible, adaptive.
Account Intelligence is the fraud detection and prevention layer. It focuses on modern manual abuse patterns like promo fraud, return fraud, and account abuse. It connects device truth back to accounts, so you can understand whether this is the right user, whether an account has been resold or compromised, or whether one actor is controlling fifty identities (there’s a rough sketch of that idea after this overview).
KasadaIQ is the intelligence layer. We’re embedded in the communities building this stuff. We track toolchains, schemes, updates, source code, and more. The goal is to stay ahead rather than respond after the fact.
Together, these layers let teams move from isolated decisions to a complete view of risk.
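As a toy illustration of the “one actor, fifty identities” problem the Account Intelligence layer targets: once you have a high-fidelity device identifier, even a naive grouping surfaces clusters that per-account tools never see. The event shape and threshold below are assumptions made up for this sketch, not how Kasada actually links actors.

```typescript
// Hypothetical login/checkout event; real linking would use many
// correlated signals, not a single device ID.
interface ActorEvent {
  accountId: string;
  deviceId: string;
}

// Group account IDs by the device they were seen on.
function clusterByDevice(events: ActorEvent[]): Map<string, Set<string>> {
  const accountsByDevice = new Map<string, Set<string>>();
  for (const { accountId, deviceId } of events) {
    const accounts = accountsByDevice.get(deviceId) ?? new Set<string>();
    accounts.add(accountId);
    accountsByDevice.set(deviceId, accounts);
  }
  return accountsByDevice;
}

// Devices controlling suspiciously many accounts are candidate actors.
function suspiciousDevices(events: ActorEvent[], threshold = 10): string[] {
  return [...clusterByDevice(events)]
    .filter(([, accounts]) => accounts.size >= threshold)
    .map(([deviceId]) => deviceId);
}
```

In reality attackers rotate devices too, which is why linking leans on the fidelity of the underlying telemetry rather than any single identifier.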
What we’ve learned
Ten years of doing this teaches you a few things.
Fraud is constantly changing, and it’s a back-and-forth battle.
Manual abuse is often harder to spot than an attack that’s carried out with pure automation.
And data integrity matters more than almost anything. If the signals are fake, every downstream decision is wrong.
This platform is built around clarity — connecting the dots early, giving teams context and visibility, and helping them act before revenue is lost or customers are harmed.
Looking forward
Looking back, ten years feels long, but it also feels like we’re just getting started.
I’m proud of what we’ve built.
And I’m insanely grateful to the customers who trusted us early, to the team who’s poured everything into this, and to the problems that forced us to keep going deeper.
This platform represents the culmination of ten years of learning, testing, breaking, and building.
Our best work yet.
And genuinely just the beginning.
– Sam Crowther
