The Deepfake Exam Threat: Real Risks, Rising Stakes


What you see isn’t always what you get. 

That old adage has taken on a chilling new relevance for the assessment industry. Thanks to advances in artificial intelligence (AI), imposters can now alter their faces using deepfake technology, masking their true identities and slipping past traditional security measures. 

The use of deepfakes in testing isn’t widespread, but it’s growing fast—and the barrier to entry is lower than ever. That makes it a rising threat to online test integrity. Unlike those who use more familiar forms of exam fraud, individuals generating deepfake identities aren’t just exploiting technology and process loopholes—they’re undermining our ability to trust what we see. And in the assessment industry, trust is everything. 

Understanding the Deepfake Exam Challenge

[Image: An example of a full deepfake; notice the changes in the test-taker’s facial features]

Deepfake technology uses AI to manipulate audio, video, and images—creating hyper-realistic digital dupes. This means people can use altered video feeds, AI-generated avatars, or real-time facial manipulation to impersonate registered test-takers. 

There are two main types of deepfake threats to exam security:  

  • Full Deepfake: A person’s face is completely swapped with another’s in real time (e.g., a test-taker appearing as someone else)
  • Facial Filtering: A person’s facial features or behaviors are altered (e.g., a test-taker appears to maintain eye contact when they are actually looking elsewhere) 

The Budding Threat of Deepfake Testing Fraud

You may be wondering, “Is there concrete evidence of test-takers using deepfake technology, or is this just a theoretical concern?” 

Unfortunately, it’s very real. Meazure Learning’s security team has detected approximately 150 test-takers attempting to use deepfake technology over the course of five million exams. While that rate (roughly 0.003 percent) is admittedly low, industry experts expect these tools to become even cheaper, easier to use, and more realistic in the near future. Real-time face-swapping, voice-cloning, and lip-syncing software can now be downloaded and run on consumer-grade laptops. What once required technical expertise and expensive hardware can now be pulled off by the Average Joe, making it far more likely that bad actors will take advantage. This is particularly concerning because we’ve already seen test-takers use deepfake technology to impersonate others, bypass facial recognition, and create synthetic identities. Yes, you read that right: synthetic identities—as if we’re living in an episode of The Twilight Zone.

How Deepfake Testing Fraud Compares to Other Forms of Cheating

[Image: Deepfake facial recognition showcase]

As you can imagine, deepfake threats are among the most difficult forms of test fraud to detect. While traditional forms of proxy test-taking are still prevalent, they often leave more detectable traces. These traces—which act as digital breadcrumbs for security experts to follow—can include login patterns, device or IP reuse, unusual browser activity, remote desktop software use, and more.
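
To make the idea of digital breadcrumbs concrete, here’s a minimal sketch of the kind of cross-session check a security team might run. It’s written in Python with hypothetical session records and field names; a real proctoring platform would pull these from its audit database.

    from collections import defaultdict

    # Hypothetical session log entries; a real system would query these
    # from the proctoring platform's audit database.
    sessions = [
        {"test_taker": "TT-1001", "ip": "203.0.113.7", "device_id": "dev-a1"},
        {"test_taker": "TT-2044", "ip": "203.0.113.7", "device_id": "dev-a1"},
        {"test_taker": "TT-3310", "ip": "198.51.100.2", "device_id": "dev-b9"},
    ]

    def flag_shared_fingerprints(sessions, field):
        """Flag any value of `field` (IP, device ID, etc.) that appears
        across more than one test-taker account."""
        seen = defaultdict(set)
        for session in sessions:
            seen[session[field]].add(session["test_taker"])
        return {value: takers for value, takers in seen.items() if len(takers) > 1}

    print(flag_shared_fingerprints(sessions, "ip"))         # {'203.0.113.7': {'TT-1001', 'TT-2044'}}
    print(flag_shared_fingerprints(sessions, "device_id"))  # {'dev-a1': {'TT-1001', 'TT-2044'}}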

Deepfakes, by contrast, attack the visual layer of identity itself. That’s worrisome because it’s historically been a key anchor for test security. Think about a typical ID verification step in remote testing: A test-taker holds their ID up to their face on camera, and a proctor checks that the two match. With a convincing deepfake, the face onscreen can be synthetic—engineered to match the ID photo—even though the real person behind the screen is someone else entirely. The visual cues appear to line up, but the identity is a total fabrication.
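
For illustration, here’s roughly what that single-point check looks like in code. This sketch uses the open-source face_recognition library and hypothetical file names; the final comment is the crux of the deepfake problem.

    import face_recognition  # open-source face-matching library

    # Hypothetical inputs: a photo of the ID document and a frame
    # captured from the webcam feed during check-in.
    id_image = face_recognition.load_image_file("id_card.jpg")
    live_image = face_recognition.load_image_file("webcam_frame.jpg")

    id_faces = face_recognition.face_encodings(id_image)
    live_faces = face_recognition.face_encodings(live_image)

    if not id_faces or not live_faces:
        print("No face detected in one of the images; escalate to a human proctor.")
    else:
        # compare_faces returns True when the two embeddings fall within tolerance.
        match = face_recognition.compare_faces([id_faces[0]], live_faces[0])[0]
        distance = face_recognition.face_distance([id_faces[0]], live_faces[0])[0]
        print(f"Match: {match} (embedding distance: {distance:.2f})")
        # Caveat: a real-time face swap engineered to resemble the ID photo
        # can pass this exact check, which is why it can't be the only layer.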

How Deepfake Technology Is Being Used in Testing

With deepfake software costing as little as $300, a test-taker who wants to outsource their exam can now more easily—and sometimes more convincingly—enable a proxy to step in. That same ease also helps contract cheating services scale their operations. When the goal is to pass dozens or even hundreds of exams under false identities, deepfake overlays offer a way to impersonate others at a fraction of the cost of more complex fraud schemes.

High-stakes exams in both higher education and professional credentialing are seeing an uptick in deepfake attempts. English language proficiency tests, for example, are particularly vulnerable because they serve as a gateway to studying, working, or living in countries like the US, Canada, and Australia. The same applies to professional credentialing: certifications or licenses in fields like healthcare, finance, and IT can lead to immediate job eligibility or salary increases, making them a lucrative deepfake fraud target.

Furthermore, some of the tools now being misused weren’t even created with fraud in mind. Take NVIDIA’s eye contact correction feature, for example. It was designed to help remote workers appear engaged on video calls, and it’s an inclusive option for people who are uncomfortable maintaining direct eye contact. But in an online exam setting, it can mislead proctors into thinking a test-taker is focused on the screen when they’re actually reading notes or using a second device. This is the gray area that test security teams now operate in, where even well-intended technologies can be co-opted for dishonest purposes.

“Deepfake technology is evolving faster than many testing programs are prepared for. But with targeted strategies for detection and mitigation, the industry can stay ahead of a threat that doesn’t play by the rules.”

—Cory Clark, VP of Security, Training, and Compliance, Meazure Learning

Practical Strategies to Address Deepfake Risks to Test Security

The rise of deepfakes in online assessment environments presents a serious challenge—but it’s not insurmountable. Credentialing programs, higher-ed institutions, and delivery vendors can stay ahead of the threat with thoughtful, layered test security measures.

The following best practices help curb deepfake attempts and protect test integrity:

  • Pairing AI Technology With Human Oversight: Cheaters thrive on consistency. The more predictable your testing process is, the easier it is for bad actors to plan around it. Whether we like it or not, humans introduce variability—and in this context, that’s a strength. Human proctors can adapt in real time, ask unexpected questions, or request actions that disrupt deepfake overlays. When proctors and intervention specialists are trained to detect deepfake attempts and are supported by technology, the result is a reinforced security model that’s far more difficult to bypass.
  • Using a Layered Identity Verification Process: Deepfakes rely on static, predictable systems to avoid detection. That’s why verifying test-taker identity should involve multiple steps at different points in the testing journey.

    • Introduce Movement: Deepfakes perform best when the user stays still under consistent conditions. Asking test-takers to turn their head, hold up an ID, or complete a liveness check (a randomized series of actions or biometric cues that verify the person is physically present) can expose visual distortions or synthetic overlays. A minimal sketch of this randomized-challenge idea appears after this list.

    • Verify Identity Early and Often: Identity checks should start before the exam begins and continue throughout the session. Real-time face swaps and impersonation attempts are harder to pull off when verification happens at multiple touchpoints.
  • Making Post-Exam Auditing a Priority: Some test fraud slips through in real time but leaves digital footprints behind. That’s why having strong auditing teams and tools is a must. Look for solutions that include detailed session recordings and logs, anomaly detection reports, and forensic data analysis, all of which provide an additional opportunity to root out deepfake use. Keep in mind, however, that final judgments should never rely on software or AI alone—a human reviewer should always examine flagged sessions in full.
  • Monitoring for Behavioral, Visual, and Device Anomalies: Spotting deepfake attempts often comes down to recognizing subtle test-taking patterns. With proper training, proctors and reviewers can notice irregularities like the following, any of which may indicate something suspicious is afoot.

    • Behavioral Red Flags: Sudden changes in typing speed, mouse movement, or interaction style during a test session

    • Visual Red Flags: Blinking that’s too slow, facial expressions that don’t match speech, or an unwavering gaze—patterns that are more consistent with deepfake manipulation than with natural behaviors or known neurological conditions

    • Device Red Flags: Unusual CPU usage or spikes in system performance, suggesting the use of unauthorized applications or deepfake rendering tools running in the background (a simple monitoring sketch appears after this list)
  • Choosing Vendors Who Understand Your Test-Takers: Partner with test delivery vendors who understand your test-taker population—both demographically and behaviorally. Factors like test-taker motivation, access to reliable technology, and region-specific fraud patterns can all influence the type of security measures that are most effective. The right vendor should be able to tailor solutions based on real-world risks and user behaviors.
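
As promised above, here’s a minimal sketch of the randomized-challenge idea behind a liveness check. The prompts and counts are hypothetical; in a real system, each prompt would be paired with a computer-vision check that scores whether the action actually happened on camera.

    import random

    # Hypothetical challenge pool; real liveness systems pair prompts like
    # these with vision models that verify each action was performed.
    CHALLENGES = [
        "Turn your head slowly to the left",
        "Turn your head slowly to the right",
        "Hold your ID next to your face",
        "Cover your mouth with your hand, then lower it",
        "Look away from the camera, then back at it",
    ]

    def issue_liveness_challenges(count=3):
        """Return an unpredictable sequence of actions so a pre-rendered or
        overlay-based deepfake can't anticipate what it must render next."""
        return random.sample(CHALLENGES, k=count)

    for step, prompt in enumerate(issue_liveness_challenges(), start=1):
        print(f"Step {step}: {prompt}")

The point isn’t the prompts themselves but their unpredictability: every added degree of freedom is one more thing a real-time face swap has to render convincingly.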
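
And here’s the device-monitoring sketch referenced above. It uses the cross-platform psutil library to sample whole-system CPU load; the threshold and sample window are hypothetical, and a production exam client would calibrate them against its own baseline.

    import psutil  # cross-platform system-monitoring library

    SPIKE_THRESHOLD = 75.0  # percent; hypothetical, tune against a real baseline
    SAMPLES = 5

    def sample_cpu(samples=SAMPLES, interval=1.0):
        """Collect a short window of whole-system CPU utilization readings."""
        return [psutil.cpu_percent(interval=interval) for _ in range(samples)]

    readings = sample_cpu()
    spikes = sum(reading > SPIKE_THRESHOLD for reading in readings)
    if spikes > len(readings) // 2:
        print(f"Device red flag: sustained CPU load {readings} may indicate "
              "rendering software running in the background.")
    else:
        print(f"CPU readings within expected range: {readings}")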

Staying Ahead of the Deepfake Exam Threat

Deepfake technology will continue to evolve. That much is certain. But testing programs that act now—before the threat becomes widespread—have a clear advantage in reducing their risk and protecting the credibility of their assessments. The tools and strategies already exist. What matters now is using them with intention—and not waiting until trust has already been compromised. 

To learn more about proactive steps your program can take, check out our guide “How to Mitigate Online Exam Security Threats With Meazure Learning Solutions.”