Industry News | 8/15/2025

AI's Hiring Headache: 25% of Job Applicants Could Be Fakes by 2028

A new report warns that by 2028, one in four job applicants might be fraudulent, thanks to AI tools making it easier to create fake profiles. This trend poses serious challenges for employers, from hiring risks to cybersecurity threats.

AI's Hiring Headache: 25% of Job Applicants Could Be Fakes by 2028

So, picture this: it’s 2028, and you’re scrolling through job applications like you’re swiping through a dating app. But instead of finding your next great hire, you’re faced with a shocking statistic from Gartner—one in four of those applicants is likely a fraud. Yeah, you heard that right. That’s a staggering 25% of job seekers who might be pulling the wool over your eyes.

Now, let’s break this down a bit. The rise of AI tools is kinda like opening Pandora’s box for job applicants. You know how easy it is to create a fake Instagram account? Well, it’s just as easy for someone to whip up a polished resume or a shiny LinkedIn profile that looks like it was crafted by a professional. Imagine someone generating a resume that not only lists impressive job titles but also has a professional headshot that looks like it belongs on a magazine cover. It’s wild, right?

But wait, it gets crazier. Some folks are even stealing real identities to apply for jobs. Think about it: someone could snag the credentials of a real professional, apply for a job, and you’d never know until it’s too late. This is especially tricky in today’s remote work world, where you might never meet a candidate face-to-face. Even video interviews, once a solid way to gauge someone’s authenticity, are now at risk thanks to deepfake technology. Imagine interviewing someone who looks and sounds like a perfect fit, only to find out later that they’re a complete fraud.

A recent Gartner survey found that around 6% of candidates admitted to some form of interview fraud. That’s not just a small number; it’s a warning sign. And with AI evolving at lightning speed, that number could skyrocket.

So, what happens if you hire one of these fraudulent candidates? Well, it’s not just about making a bad hire. We’re talking about serious financial implications. Think about all the time and money spent on recruitment, screening, and onboarding. Now imagine that fake employee sitting at a desk, potentially stealing sensitive company data or installing malware. Yeah, that’s a nightmare waiting to happen.

In fact, there have been documented cases where organized groups, even state-sponsored actors, have used fake identities to secure jobs, funneling money back to support nefarious activities. The U.S. Justice Department has even uncovered networks of North Korean operatives doing just that. It’s like a spy movie, but it’s happening in real life.

And it doesn’t stop there. Hiring a fraudster can tank team morale, disrupt productivity, and tarnish your company’s reputation. Imagine your team finding out that someone they’ve been working with isn’t who they say they are. It’s a recipe for disaster.

Now, human resources departments are on the front lines of this battle against applicant fraud. Traditional screening methods? Yeah, they’re becoming outdated. Recruiters need to sharpen their skills to spot red flags—like inconsistencies in a candidate’s resume or weird behavior during interviews.

Some companies are even going back to basics, reintroducing in-person interviews for remote roles. Just the thought of an in-person meeting can send a fraudulent applicant running for the hills. Tech giants like Google and Cisco are leading the charge here, recognizing that a little face-to-face interaction can go a long way in verifying a candidate’s identity.

But here’s the kicker: the AI industry isn’t just sitting back and watching. They’re stepping up to the plate with new verification tools. We’re talking AI-driven background checks, deepfake detection software for video interviews, and even biometric verification methods. Some companies are getting creative, embedding “Easter eggs” in job descriptions to catch automated application bots. It’s like a game of cat and mouse, where both sides are trying to outsmart each other.

This whole situation raises a serious trust issue in the hiring process. Employers are becoming more cautious, but candidates are also worried about being judged unfairly by AI systems. A Gartner survey revealed that only 26% of candidates believe AI will evaluate them fairly. That’s a huge trust deficit that makes the hiring process even trickier.

In the end, Gartner’s prediction is a wake-up call for businesses and the tech sector. The rapid growth of AI has created a new threat to hiring integrity. Companies need to step up their game with multi-layered verification strategies that combine tech solutions with good old-fashioned human oversight. For the AI industry, it’s a chance to innovate and create tools that not only combat fraud but also build trust in the hiring process. Navigating this new landscape is gonna require a team effort to stay ahead of those looking to exploit these powerful technologies for their own gain.