Picture this: you interview someone over video, they’re articulate, qualified, and nail every question. You extend an offer. Week one, you realize that person never existed. The face was AI-generated. The voice was synthetic. This is hiring in 2026.
“The Age of Paranoia” is the new reality, one where verifying that a candidate is a real human has become as important as evaluating their skills. And IT is getting hit the hardest: remote work, competitive pay, and access to sensitive data make it a prime target.
The Numbers Speak for Themselves
According to the Federal Trade Commission, reports of job and employment scams nearly tripled between 2020 and 2024, with reported losses growing from $90 million to roughly $501 million, part of the $12.5 billion consumers reported losing to fraud overall in 2024. And that’s just what gets reported; the real number is likely much higher.
The FBI has documented over 300 US companies that unknowingly hired operatives using deepfakes and stolen identities. In fact, Gartner predicts that one in four candidate profiles worldwide will be fake by 2028.
One in four.
The impact goes beyond dollars. We’re talking about wasted time vetting fake candidates, damaged client relationships when contracted work doesn’t get done, security vulnerabilities, and delayed projects. Fraud like this erodes the trust we all depend on.
Verification Is the New Vetting
Hiring managers are creating their own verification tactics because they have to.
They quiz candidates on the city listed on their resume. Favorite coffee shops, neighborhood spots, where they grab lunch. The logic is that someone who actually lives there should be able to answer quickly. Some use the “phone camera trick,” asking candidates mid-interview to point their phone camera at their laptop screen to check if deepfake software is running. Others request a timestamped selfie mid-call or ask candidates to send an email mid-conversation.
I understand why people are doing this. Traditional hiring signals are breaking down. AI generates polished resumes, deepfakes pass video interviews, and references get fabricated. In IT, where credentials have always mattered, surface-level verification isn’t enough anymore. The threat is especially real in roles where someone could have access to systems, code, or sensitive data.
Where This Backfires
These verification tactics might stop fraud, but they’re creating new problems for legitimate candidates.
Nobody wants to start a job relationship feeling like they’re under suspicion. When you ask someone to prove they live where they say they live, or to give you a virtual tour of their home office, it doesn’t feel like respect. And the IT professionals you actually want to hire? They have options. They’ll go where they’re not being interrogated.
These approaches can also screen out qualified candidates who recently relocated, people who aren’t great at thinking on their feet but excel at their work, or digital nomads. And they put an impossible burden on recruiting teams. Whether it’s internal HR, staffing partners, or hiring managers doing it themselves, turning everyone into fraud investigators isn’t sustainable or scalable.
Where We Go From Here
At ConsultNet, we’ve shifted from “can you prove who you are?” to “can you demonstrate what you say you can do?” For IT roles, this is critical. Technical skills can be tested in ways that are genuinely hard to fake, and that’s where the industry needs to go.
What we’re seeing work:
Work simulations built around actual job requirements. Not generic coding tests, but custom assessments that mirror real challenges. If you’re hiring for AWS cloud development, test whether someone can actually write Lambda functions for event-driven architecture. If you need a data engineer, see if they can handle your specific pipeline work. (The first sketch after this list shows the flavor of exercise we mean.)
Noninvasive, multi-layered verification. The best approaches combine skills testing with authentication that happens in the background, checking IP consistency, device behavior, and work patterns without making candidates feel like they’re under surveillance. (The second sketch below shows one way that signal could work.)
Performance data that builds confidence. Instead of relying on gut instinct from a 30-minute interview, look for systems that provide comprehensive proof points: how someone approaches problems, how their work progresses over time, whether their skills match what they claim.
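To make the first idea concrete, here is the flavor of exercise such an assessment might include. This is a minimal sketch, not a real client test: the event shape is AWS’s standard S3 notification, but the bucket layout, file schema, and validity rule are all hypothetical.

```python
# Hypothetical exercise: an S3-triggered Lambda that validates incoming
# JSON order files and routes bad records to a dead-letter prefix.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Entry point invoked by an S3 ObjectCreated event notification."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        orders = json.loads(body)

        # Assumed schema: each order needs an id and a positive amount.
        valid = [o for o in orders if o.get("order_id") and o.get("amount", 0) > 0]
        invalid = [o for o in orders if o not in valid]

        s3.put_object(Bucket=bucket, Key=f"processed/{key}", Body=json.dumps(valid))
        if invalid:
            s3.put_object(Bucket=bucket, Key=f"dead-letter/{key}", Body=json.dumps(invalid))

    return {"statusCode": 200, "processed": len(event["Records"])}
```

A candidate who has actually shipped event-driven code will work through something like this in minutes; someone leaning on a script or an off-screen coach tends to stall on the event structure itself.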
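And for the background-verification idea, here is one way the signal could work. Everything in this sketch is an assumption for illustration: the Session fields, thresholds, and example values are invented, and a real platform would resolve GeoIP and device fingerprints upstream.

```python
# Hypothetical background check: flag a candidate whose interview and
# assessment sessions come from inconsistent networks or devices.
from dataclasses import dataclass

@dataclass
class Session:
    candidate_id: str
    ip_country: str   # resolved from a GeoIP lookup upstream (assumed)
    asn: int          # autonomous system number of the connection
    user_agent: str   # stand-in for a richer device fingerprint

def consistency_flags(sessions: list[Session]) -> list[str]:
    """Return human-readable review flags; an empty list means no anomaly."""
    countries = {s.ip_country for s in sessions}
    asns = {s.asn for s in sessions}
    agents = {s.user_agent for s in sessions}

    flags = []
    if len(countries) > 1:
        flags.append(f"sessions span multiple countries: {sorted(countries)}")
    if len(asns) > 2:  # some churn is normal (home vs. phone); lots is not
        flags.append("connections hop across many networks (possible VPN churn)")
    if len(agents) > 2:
        flags.append("device fingerprint keeps changing between sessions")
    return flags

# Example: two sessions that claim to be the same person in the same city.
history = [
    Session("cand-42", "US", 11111, "Mozilla/5.0 (Macintosh)"),
    Session("cand-42", "RO", 22222, "Mozilla/5.0 (Windows NT 10.0)"),
]
for flag in consistency_flags(history):
    print("review:", flag)
```

The point isn’t the specific thresholds; it’s that the check runs quietly on metadata the platform already has, so legitimate candidates never feel it.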
The ultimate goal is to build verification into the process so naturally that it protects everyone. Clients get confidence in who they’re hiring, candidates don’t feel interrogated, and recruiting teams can focus on matching talent instead of playing detective.
At ConsultNet, we’re taking this holistic approach, combining real-world assessments with built-in fraud detection so our clients can hire with confidence and speed without sacrificing quality.
Final Thoughts
AI is getting more sophisticated, remote work isn’t going anywhere, and the people trying to game the system are getting better at it too. With predictions that a quarter of job candidates will be fake within the next few years, this isn’t a problem we can ignore or solve with ad-hoc verification tricks.
The companies that figure this out won’t be the ones making candidates jump through awkward verification hoops; they’ll be the ones designing hiring processes where trust is earned through performance, not paperwork.
In a world where anyone can look qualified with the right prompts, the ability to actually do the work is becoming the only signal that matters.
Exactly the way it should be.