Fake job applicant fraud - Is your candidate actually an AI fake?
Companies across every industry are now facing a new kind of fraud: fake job applications powered by AI.
Using off-the-shelf AI tools, anyone can now create an extremely realistic fake resume, professional headshot, online portfolio, and LinkedIn profile in seconds. Put together, these assets look like an ideal candidate for your open roles.
We experienced this firsthand recently. A candidate passed our screening and aced several interviews to reach the final stage of our hiring process. But something felt off, so I suggested meeting the candidate in person. The candidate immediately declined and deleted their LinkedIn profile. Not exactly normal candidate behavior, right?
We're not alone in this. Gartner predicts that roughly 1 in 4 job applicants will be fake by 2028, but based on our internal data and recent conversations with customers, that estimate already looks conservative: some companies are seeing hundreds of suspicious or clearly fraudulent applications each week. Given how quickly AI is evolving, those numbers will only continue to grow.
What started as a hiring problem is quickly becoming a security issue. CISOs are increasingly flagging fake job applicant fraud as a new attack vector into the enterprise. Unlike traditional cyberattacks that target systems from the outside, fake hires attempt to bypass those controls entirely by gaining trusted internal access.
Why are fraudsters targeting job applications?
Most large companies have systems in place to detect financial fraud and cybersecurity attacks. But their hiring processes have been optimized to filter out candidates who aren’t a good fit for a role, not to identify fraudulent candidates. This makes job applications an easier target.
If a fraudster gets hired, they can steal your data and IP, defraud your business and customers, enable fraudulent operations using employee credentials, or engage in salary arbitrage by falsely claiming to live in high-cost areas.
The most concerning threats come from organized rings targeting courier and delivery roles to steal valuable goods, and nation-state actors attempting to infiltrate software engineering roles for IP theft and system exploitation.
It's not just traditional fraudsters either. People in the "overemployed" movement are using these same tactics to land multiple remote jobs simultaneously, maximizing their earning potential while deceiving employers about their availability and commitment.
How to detect a fake job application
While these scams are getting more sophisticated, there are still signals that can help you spot fake or high-risk candidates during your hiring process.
Watch for resume inconsistencies
Real candidates often have small discrepancies because they update their profiles at different times. Fake profiles, by contrast, tend to be either perfectly consistent (because they were created all at once) or riddled with mismatches between the LinkedIn profile and the resume: different employment dates, different job titles, unexplained gaps.
Look for AI-generated content
Resumes created by AI usually have a formulaic structure, overuse certain phrases like "results-driven" or "detail-oriented," and contain an unusual number of em dashes (—). Tools like QuillBot can estimate the likelihood that a resume or cover letter was written by AI. But be careful not to automatically disqualify these candidates: many legitimate job seekers use AI tools to improve their resumes. The key is distinguishing candidates who use AI to polish real experience from AI-generated content that doesn't match identity or behavior signals.
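As a rough illustration only (commercial detectors are far more sophisticated), the snippet below counts a few of these surface signals in a resume's text. The phrase list and thresholds are arbitrary assumptions, and the output should only ever prompt a human review.

```python
import re

# Arbitrary, illustrative phrase list -- not a real detector.
AI_BUZZWORDS = ["results-driven", "detail-oriented", "proven track record",
                "dynamic professional", "fast-paced environment"]

def ai_content_heuristic(resume_text: str) -> dict:
    """Count surface signals often associated with AI-written resumes."""
    text = resume_text.lower()
    buzzword_hits = sum(text.count(phrase) for phrase in AI_BUZZWORDS)
    em_dashes = resume_text.count("\u2014")           # the em dash character
    words = max(len(re.findall(r"\w+", text)), 1)     # avoid division by zero
    return {
        "buzzword_hits": buzzword_hits,
        "em_dashes_per_100_words": round(100 * em_dashes / words, 2),
        # A hint for human review only -- never an automatic rejection.
        "review_suggested": buzzword_hits >= 3 or em_dashes >= 5,
    }
```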
Pay attention during video interviews
Poor video quality isn't always a red flag, but unnatural lighting, lip-syncing issues, or candidates who avoid turning their head can be signs of potential deepfake use. However, these tools are getting a lot harder to detect. We’ve seen recent cases where the user can wave a hand in front of their face, change lighting, and make quick movements without any glitching.
Notice behavioral red flags
Be cautious with candidates who are unusually eager to start immediately, accept offers without any negotiation, or act suspiciously when you suggest meeting in person or changing interview formats. Genuine candidates typically have questions about the role, company culture, or growth opportunities.

Why human review isn’t enough
Skilled fraudsters know how to bypass individual checks. The real challenge isn't spotting a single red flag in isolation; it's correlating signals across identity, device, network, and behavior to detect large-scale or coordinated abuse.
This means validating whether a candidate’s claimed location aligns with their IP address, device characteristics, and network behavior. It requires detecting candidates using VPNs, location spoofing techniques, or virtual machine environments designed to mask true origin. It should also include identifying signs of automation, such as scripted keyboard and mouse patterns, as well as recognizing when the same device or environment is being used to submit more than one application.
Finally, effective detection requires validating that core identity signals, such as phone numbers, email addresses, and supporting identity data, consistently point to the same real person, rather than a stitched-together or fabricated identity.
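To make this concrete, here is a minimal sketch of how signals like these might be correlated into a single review decision. All field names, weights, and thresholds are assumptions for illustration; they are not Sardine's actual model.

```python
from dataclasses import dataclass

@dataclass
class ApplicantSignals:
    # Hypothetical inputs gathered during the application session.
    claimed_country: str
    ip_country: str
    using_vpn_or_proxy: bool
    running_in_vm: bool
    scripted_input_detected: bool       # e.g., inhumanly uniform keystroke timing
    applications_from_same_device: int  # device reuse across applications
    identity_fields_match: bool         # phone, email, and documents point to one person

def risk_review(s: ApplicantSignals) -> tuple[int, list[str]]:
    """Correlate individual signals into one score plus human-readable reasons."""
    score, reasons = 0, []
    if s.claimed_country != s.ip_country:
        score += 2; reasons.append("claimed location does not match IP geolocation")
    if s.using_vpn_or_proxy:
        score += 1; reasons.append("VPN or proxy in use")
    if s.running_in_vm:
        score += 1; reasons.append("virtual machine environment")
    if s.scripted_input_detected:
        score += 2; reasons.append("automation-like keyboard and mouse patterns")
    if s.applications_from_same_device > 1:
        score += 2; reasons.append("device already used for other applications")
    if not s.identity_fields_match:
        score += 3; reasons.append("identity signals do not resolve to one person")
    return score, reasons
```

In practice, a score above a chosen threshold would route the application to manual review rather than triggering an automatic rejection.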
How Sardine can help you detect fake job applicants
Sardine helps companies detect fake job applicants by embedding fraud and identity signals directly into the hiring process, without disrupting legitimate candidates or your existing workflows.
During job applications
Sardine integrates seamlessly into applicant tracking systems and hiring platforms to assess risk the moment an application is submitted, and continuously throughout the hiring lifecycle. At this stage, the focus is on validating the information candidates provide and identifying inconsistencies early, before they progress further.
Sardine evaluates signals like the following (a simplified sketch of a few of these checks appears after the list):
- Identity consistency: Do the applicant’s email address and phone number belong to a verifiable individual? Have these identifiers been consistently used together before? Do they align with trusted external records?
- Email risk signals: Is the email address newly created? Is the domain reputable? Has the email been associated with prior abuse?
- Device environment: Is the same device submitting multiple applications? Is the candidate using a setup designed to hide their identity?
- Bot and automation detection: Was the application completed too fast for a human? Was information entered automatically rather than typed or selected manually?
- Location integrity: Are the IP address, device signals, and network data consistent with the applicant’s stated location? Are there signs of IP masking, proxy usage, or location spoofing?
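As a simplified illustration, the snippet below expresses three of these checks (email age, form-fill speed, and device reuse) as plain rules. The thresholds and inputs are hypothetical stand-ins, not Sardine's API.

```python
from datetime import datetime, timedelta

# Hypothetical thresholds -- tune to your own applicant volume and risk tolerance.
MIN_EMAIL_AGE = timedelta(days=30)
MIN_FILL_TIME_SECONDS = 45
MAX_APPLICATIONS_PER_DEVICE = 1

def application_flags(email_created_at: datetime,
                      form_fill_seconds: float,
                      prior_applications_from_device: int,
                      now: datetime | None = None) -> list[str]:
    """Return the screening rules this application trips, if any."""
    now = now or datetime.utcnow()
    flags = []
    if now - email_created_at < MIN_EMAIL_AGE:
        flags.append("email address created very recently")
    if form_fill_seconds < MIN_FILL_TIME_SECONDS:
        flags.append("application completed faster than a human plausibly could")
    if prior_applications_from_device >= MAX_APPLICATIONS_PER_DEVICE:
        flags.append("device already used to submit another application")
    return flags
```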
During job interviews
Once an application has passed your screening process, Sardine continues to assess risk during live interviews. At this stage, the focus is on confirming that the person on the screen is the same individual who applied, and that they’re not misrepresenting their identity or location.
Sardine checks for candidates that may be:
- Masking their true location with a proxy or VPN: Sardine can pierce these tools to reveal a user's true location and IP address, so you can confirm it matches what they claim.
- Using video manipulation and deepfakes: Sardine detects when the webcam feed is being altered with deepfake tools, or when a virtual camera is being used to insert a pre-recorded or fabricated video feed.
- Exhibiting odd or suspicious behavior patterns: For example, a candidate who frequently switches between windows, minimizes the interview, or interacts with off-screen applications in ways that suggest scripted assistance, prompt injection, or real-time coaching (see the sketch after this list).
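To illustrate the last point, here is a minimal sketch that flags an interview session whose focus/blur log shows frequent window switching. The event format and thresholds are assumptions for the example.

```python
# Hypothetical event log: (timestamp_seconds, event) pairs captured during the interview,
# where "blur" means the interview window lost focus and "focus" means it regained it.
def frequent_window_switching(events: list[tuple[float, str]],
                              window_seconds: float = 300.0,
                              max_switches: int = 5) -> bool:
    """Return True if the candidate left the interview window more than
    `max_switches` times within any span of `window_seconds`."""
    blur_times = [t for t, kind in events if kind == "blur"]
    for i, start in enumerate(blur_times):
        # Count blur events that fall inside the sliding window starting at `start`.
        in_window = [t for t in blur_times[i:] if t - start <= window_seconds]
        if len(in_window) > max_switches:
            return True
    return False

# Example: eight quick switches in under two minutes would be flagged.
sample = [(t, "blur") for t in range(0, 120, 15)] + [(t + 5, "focus") for t in range(0, 120, 15)]
print(frequent_window_switching(sorted(sample)))  # True
```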
In this demo, you can see the signals that Sardine used to flag this fake candidate. Though the candidate was able to create a convincing surface-level profile, they can’t fake the underlying technical fingerprints.
Automate checks using your own hiring logic
You can also create custom rules within Sardine that automatically run the checks your hiring team would otherwise have to perform manually.
Using Sardine’s data, you can flag when a candidate’s location, device, browser, or operating system changes between application and interview, detect reuse of the same identity or device across multiple applications, or escalate interviews when suspicious behavior appears.
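As a simple sketch of the first rule above, the check below compares an environment snapshot captured at application time against one captured at interview time; the field names are hypothetical placeholders for whatever your own integration records.

```python
# Hypothetical environment snapshots captured at application time and interview time.
FIELDS_TO_COMPARE = ("region", "device_id", "browser", "operating_system")

def environment_changes(application: dict, interview: dict) -> list[str]:
    """Return the fields that differ between the application and interview sessions."""
    return [
        field for field in FIELDS_TO_COMPARE
        if application.get(field) != interview.get(field)
    ]

changed = environment_changes(
    {"region": "region-1", "device_id": "abc123", "browser": "Chrome", "operating_system": "macOS"},
    {"region": "region-2", "device_id": "xyz789", "browser": "Chrome", "operating_system": "Windows"},
)
if changed:
    print(f"Escalate for manual review: changed fields = {changed}")
```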
This automates review, reduces manual effort, and ensures every candidate is evaluated consistently using the same criteria.
The cost of getting it wrong
For CISOs and hiring teams, the risk is clear: a single fraudulent hire can bypass months of security hardening and walk straight into privileged internal access. But with the right detection in place, you'll never have to wonder if your next great hire is actually a sophisticated fraudster.
If you’d like to learn more about protecting your hiring process, reach out to us at sardine.ai/contact.

