Exposing a 150K-account fraud ring in 11 minutes with agentic AI
In the AI era, fraud infrastructure scales faster than investigation teams.
Attackers can orchestrate fraud campaigns at far greater scale and with fewer resources than in the past. The barrier to entry has also dropped sharply, with unskilled criminals gaining access to capabilities they could only dream of just a year ago.
In one such case, a customer of ours suspected they were under attack. Registrations were passing initial checks, but something didn’t add up. Multiple accounts appeared to come from different device IDs - yet resolved to the same underlying fingerprint.
But now, fraud-fighting teams are gearing up too.
When we were called in to investigate, instead of manually exporting data and writing a chain of SQL queries, I handed the investigation to a Data Analyst Agent operating inside Sardine’s fraud platform.
Not to “decide” anything, but to collect, structure, and test the hypotheses I guided it with.
What follows is a timestamped breakdown of the investigation recorded on video.
The tell: Device reuse + IP masking
The investigation began when a Sardine client alerted us to a suspected fraud-ring attack. They were seeing multiple registrations from different device IDs, yet all of them resolved to the same fingerprint on Sardine’s platform.
Looking at the individual cases, I also noticed that the IP geolocation was misleading. When I layered in our true IP detection and proxy/VPN piercing, the pattern became unmistakable - though the accounts registered in the US, the fraudsters were actually located in Germany and the UAE.
While any one of these signals could be explained away on an individual basis, the combination of mass device-fingerprint reuse and IP-location masking was incriminating - this was clearly a single fraud ring.
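The core check behind this tell can be sketched in a few lines. The record shape, field names, and thresholds below are illustrative assumptions, not Sardine’s actual schema: group sessions by fingerprint, then flag fingerprints claimed by many distinct device IDs whose registered geo disagrees with the true IP location.

```python
from collections import defaultdict

# Hypothetical session records; field names are illustrative, not Sardine's schema.
sessions = [
    {"device_id": "d1", "fingerprint": "fp_A", "geo_country": "US", "true_country": "DE"},
    {"device_id": "d2", "fingerprint": "fp_A", "geo_country": "US", "true_country": "DE"},
    {"device_id": "d3", "fingerprint": "fp_A", "geo_country": "US", "true_country": "AE"},
    {"device_id": "d4", "fingerprint": "fp_B", "geo_country": "US", "true_country": "US"},
]

by_fp = defaultdict(list)
for s in sessions:
    by_fp[s["fingerprint"]].append(s)

suspicious = []
for fp, group in by_fp.items():
    device_ids = {s["device_id"] for s in group}
    masked = sum(s["geo_country"] != s["true_country"] for s in group)
    # Many distinct device IDs on one fingerprint + mostly masked IPs = ring signature
    if len(device_ids) >= 3 and masked / len(group) > 0.5:
        suspicious.append(fp)

print(suspicious)  # ['fp_A']
```

In practice the thresholds (3 device IDs, 50% masking) would be tuned to the customer’s baseline traffic rather than hardcoded.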
Key takeaways
Device ID is too easy to fake
Fraudsters can rotate device IDs, clear cookies, and reset environments with ease. What’s harder to manipulate is the underlying device fingerprint. If your detection relies only on surface-level identifiers, you’ll miss coordinated reuse.
Layered signals turn indicators into patterns
A fingerprint alone is weak. A geo mismatch alone is weak. When enriched device intelligence and true IP location are evaluated together, across accounts, the signal strengthens exponentially.
Fraud infrastructure becomes visible at scale
Fraud at scale optimizes for reuse, which forces fraudsters to expose their infrastructure. This can show up as hard evidence (different device IDs appear as unified fingerprints), or as fuzzy linking - like the fact that all masked IPs come from the same small town.
From queries to hypotheses
Not so long ago, at this point in the investigation I would have translated my initial findings into a series of SQL queries and begun data mining.
This process would have likely taken me several hours, not including the time it would have taken me to get a feel for how the data is stored and structured.
But SQL is no longer a prerequisite. Neither are Python, R, or even Excel - instead, I can simply share my hypothesis with the agent and let it do the heavy lifting.
More than that - instead of thinking about how to write code, I’m thinking about the questions I want to ask.
Key takeaways
Platform-aware agents outperform generic automation
Notice how I didn’t need to explain to the agent what fingerprints, sessions, partners, and geo signals mean in the context of our platform? That’s because we trained it to understand fraud and system primitives. That context allows it to test hypotheses, not just return rows.
Safe AI agents should be designed for human intervention
Instead of asking the agent for bottom-line conclusions only, I instructed it to also produce a chart of the plotted data. This allowed me to run a quick, visual sanity check on the agent’s conclusions.
Agentic hypothesis confirmation
In response, the agent produces a three-part analysis:
- Raw data visualization: helps quickly spot concentration, patterns, or analysis mistakes
- Structured summary: highlights findings in accordance with my guidance and context
- Reasoning flags: bring attention to data or conclusions that were left out of the report
In a matter of seconds I could see two things: first, which partners were most affected, and second, which partners had registrations from the suspicious device fingerprint only. These partners were either breached or, in the worst case, colluding with the fraudsters.
Key takeaways
The bottleneck is no longer SQL
AI agents solve the biggest constraint in fraud investigations: being able to access, query, and analyze big data. Investigators aren’t limited by their technical skills anymore, only by their ability to form the right questions.
Guided agents outperform open-ended prompts
Notice I didn’t ask the agent “is this fraud?” or give it an open canvas to speculate. I gave it specific leads to validate - check concentration, test partner exposure, measure reuse propensity. Narrow context helps agents act as a structured analyst executing human-guided hypotheses.
Ignored data should be declared
Agents should, of course, make context-based decisions when it comes to what data to include or which conclusions to ignore. That is what separates them from rigid automation tools. But all such instances should be clearly flagged to the human investigator for sanity checking.
Uncovering the fraud infrastructure
Now that I confirmed my initial suspicions, I can use the findings surfaced by the data analyst agent to codify the pattern I’m interested in: high-usage device fingerprints that also show a high rate of IP location masking.
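The codified pattern - heavily reused fingerprints with a high IP-masking rate - maps naturally onto a group-and-filter query. This is a minimal sketch assuming hypothetical column names and thresholds, not the agent’s actual query:

```python
import pandas as pd

# Illustrative registration data; column names are assumptions, not Sardine's schema.
df = pd.DataFrame({
    "fingerprint": ["fp_A"] * 40 + ["fp_B"] * 3 + ["fp_C"] * 25,
    "ip_masked":   [True] * 38 + [False] * 2 + [False] * 3 + [True] * 24 + [False],
})

stats = df.groupby("fingerprint").agg(
    registrations=("ip_masked", "size"),
    masking_rate=("ip_masked", "mean"),
)

# Codified pattern: high-usage fingerprints that mostly register behind masked IPs
ring = stats[(stats["registrations"] >= 20) & (stats["masking_rate"] >= 0.8)]
print(ring.index.tolist())  # ['fp_A', 'fp_C']
```

Running a filter like this across all traffic, rather than only the fingerprint the customer flagged, is what expands the investigation from one lead to the whole ring.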
Up until now, I was following the breadcrumbs I’d been given by the customer, starting with the suspicious device fingerprint. But now that I understand how the fraud ring operates, I am no longer constrained to a single lead.
With a single prompt, I could ask the agent to find similar patterns. Within seconds, I expanded what looked like a ring impacting a few thousand accounts into one spanning over 150K fraudulent entities.
Key takeaway
Agentic defense vs. agentic offense
Creating and overseeing a fraud ring that spans 150K+ accounts requires automation, likely leveraging AI agents. To uncover and fight fraud at this scale and speed, fraud teams must be equipped with agentic capabilities as well.
From insight to enforcement
As I conclude the investigation, it’s time to turn to action. Before I do, I manually review some cases from the other, newly found, suspicious device fingerprints.
Once I confirm they exhibit the same pattern (a London proxy IP on a US registration is a strong tell), I add these fingerprints to our block list.
Off screen, I also strengthen the customer’s defenses with some rules that are specifically designed to stop this fraud ring even if it returns with a new device fingerprint set. To do so, I mainly focused on fingerprint velocity and true IP location mismatch.
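A rule along those lines can be sketched as a simple predicate: block when a single fingerprint shows high registration velocity inside a short window combined with a true-IP mismatch. The function, event shape, and thresholds here are illustrative assumptions, not the customer’s deployed rules:

```python
from datetime import datetime, timedelta

# Hypothetical thresholds - real values would be tuned against baseline traffic.
VELOCITY_WINDOW = timedelta(hours=1)
VELOCITY_LIMIT = 5

def should_block(events, now):
    """events: list of (timestamp, geo_country, true_country) for one fingerprint."""
    recent = [e for e in events if now - e[0] <= VELOCITY_WINDOW]
    mismatched = sum(geo != true for _, geo, true in recent)
    # Block only when velocity AND masking both exceed their thresholds
    return len(recent) > VELOCITY_LIMIT and mismatched / max(len(recent), 1) > 0.5

now = datetime(2025, 1, 1, 12, 0)
events = [(now - timedelta(minutes=i * 5), "US", "DE") for i in range(8)]
print(should_block(events, now))  # True
```

Keying the rule on behavior (velocity plus mismatch) rather than on specific fingerprint values is what lets it catch the ring even after it rotates to a fresh fingerprint set.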
Key takeaways
Human in the loop at each stage
Before I acted on the newly found device fingerprints, I again double-checked the agent’s findings. This is no different from how I’d act had I run the queries myself. Sanity checks are part of any secure process.
On-platform investigations streamline actions
The biggest advantage of deploying solutions on the same platform I run investigations on is consistency. No need to figure out what a specific data point is called, or worse, discover that your rule engine is missing a feature you used in your queries.
The future of fraud investigations
As you just saw, in just over 11 minutes we put a stop to a fraud ring that had gotten its hands on more than 150K stolen cards.
As I wrapped up the investigation and stopped the recording, one question was on my mind: would this have been impossible without access to the Data Analyst Agent?
Frankly, looking at the results, I doubt it. Yes, I might have missed some fingerprints had I done it myself. But overall, I think I would have solved it in a similar fashion.
The real change? Speed. It took me more time to document my findings and solutions than actually doing the work. On my own, reaching the same results would have probably taken me half a day of analysis. Best case.
But is this the case for everyone? Here’s the uncomfortable truth: I am a privileged user. Not only do I have direct access to Sardine’s data, but I also know where to find it and how to analyze it.
And now, with the Data Analyst Agent, this is the case for every Sardine user.