Deepfake Screening: Liveness + Telemetry Runbook

A defense-in-depth operating model for Recruiting Ops: identity gate before access, event-based triggers, and evidence packs you can defend in an audit.

A fraud decision without a time-stamped evidence pack is not audit-ready.

Real Hiring Problem

A candidate clears early screens, then a final interviewer flags video artifacts and off-camera reading. Recruiting Ops attempts to reconstruct what happened across tools and finds the usual failure mode: no single timeline tying identity, device session, and assessment behavior to the final decision.

This is where SLAs break: panels get paused, time-to-offer slips, and offer-to-start fallout increases because the candidate sits in limbo. Legal exposure shows up immediately. If counsel asks, "Who decided this was fraud, on what evidence, and when?" many teams have a decision without evidence, spread across screenshots and emails.

Remote-role fraud is now observable at material rates: Pindrop reports 1 in 6 remote applicants showed signs of fraud in one real-world pipeline, and Checkr reports 31% of hiring managers have interviewed a candidate who later turned out to be using a false identity. SHRM notes replacement costs can range from 50-200% of annual salary depending on role.

Why Legacy Tools Fail

Most stacks were assembled for throughput, not defensibility. The market failed to solve deepfakes and proxy test takers because workflow ownership and evidence capture are fragmented. Legacy flows are sequential: ATS stage moves, then background checks, then interviews, then assessments. Once suspicion appears, everything stalls and becomes a manual investigation.

The bigger gap is auditability. Many systems cannot produce unified evidence packs with timestamps, reviewer identity, and linked artifacts. There are no review-bound SLAs for fraud queues, no standardized rubric storage tied to identity, and no ATS-anchored audit trails that show who approved access to the next step. Shadow workflows appear to compensate: DM threads, spreadsheets, exported videos. Shadow workflows are integrity liabilities. If it is not logged, it is not defensible.

Ownership and Accountability Matrix

Fraud defense scales only when ownership is explicit and tied to queues and SLAs. Recruiting Ops owns workflow sequencing, risk-tier policy rollout, SLA enforcement, and time-to-event metrics. Security owns access control and audit policy: trigger thresholds, step-up rules, evidence pack requirements, and who can confirm fraud dispositions. Hiring Managers own rubric discipline and evidence-based scoring. They should not be asked to adjudicate identity without a compiled evidence pack. Sources of truth must be explicit: the ATS for stage transitions and decisions, the verification service for identity events, and the interview-assessment layer for telemetry and artifacts with write-backs into the ATS.

  • Automate baseline liveness and device fingerprint capture, replay triggers, rubric enforcement, and immutable logging.

  • Reserve manual review for step-up cases in an SLA-bound queue with required evidence pack fields.

Modern Operating Model

Treat hiring like secure access management: candidates should not reach higher-trust steps without clearing an identity gate first. An instrumented workflow uses event-based triggers, automated evidence capture, and ATS-anchored audit trails, and it correlates signals across steps because deepfakes evolve fast and no single signal is a silver bullet. Dashboards should segment integrity signals by role, source channel, geo, and time-to-event (a minimal sketch follows the list below). Time delays cluster where identity is unverified, so instrumentation is how you keep throughput while increasing control. False positive management is part of the model: define what triggers review, what evidence is required, and how to clear candidates quickly without accusations.

  • Identity verification before access

  • Event-based triggers

  • Automated evidence capture

  • Analytics dashboards focused on time-to-event and queue aging

  • Standardized rubrics with tamper-resistant feedback
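
A minimal sketch of the dashboard cut described above, assuming a flat list of funnel events carrying role, source channel, geo, and open/close timestamps; the field names and event shapes are illustrative, not a fixed schema.

from collections import defaultdict
from datetime import datetime

# Illustrative funnel events: one row per candidate step, with timestamps
# for when the step was opened and when it was resolved.
events = [
    {"role": "backend-eng", "source": "referral", "geo": "US",
     "step": "identity_gate", "opened": "2025-01-06T09:00:00", "closed": "2025-01-06T09:03:10"},
    {"role": "backend-eng", "source": "job-board", "geo": "EU",
     "step": "manual_review", "opened": "2025-01-06T10:00:00", "closed": "2025-01-07T08:30:00"},
]

def hours_between(opened: str, closed: str) -> float:
    fmt = "%Y-%m-%dT%H:%M:%S"
    return (datetime.strptime(closed, fmt) - datetime.strptime(opened, fmt)).total_seconds() / 3600

# Segment time-to-event by (role, source, geo, step) so delays that cluster
# around unverified identity or aging review queues become visible.
buckets = defaultdict(list)
for e in events:
    buckets[(e["role"], e["source"], e["geo"], e["step"])].append(hours_between(e["opened"], e["closed"]))

for key, durations in sorted(buckets.items()):
    print(key, f"avg {sum(durations) / len(durations):.1f}h over {len(durations)} events")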

Where IntegrityLens Fits

IntegrityLens AI acts as the ATS-anchored control plane that turns fraud detection into enforceable gates and review queues, with evidence packs that survive audit questions.

  • Identity gate before access using biometric verification with liveness, document authentication, and face matching.

  • Defense in depth across steps using deepfake indicators, behavioral telemetry, device fingerprinting, and continuous re-authentication.

  • Immutable evidence packs with timestamps, reviewer identity, and linked artifacts for audit-ready decisions.

  • Parallelized checks instead of waterfall workflows so only flagged cases step up without stalling the funnel.

  • AI screening interviews available 24/7 with structured rubrics and behavioral signal capture, written back into the ATS.

Anti-Patterns That Make Fraud Worse

These patterns increase fraud exposure and also increase false positives because decisions become inconsistent.

  • Treating identity verification as a one-time checkbox at application intake instead of step-up verification tied to risk events.

  • Allowing interview or assessment links to be forwarded without binding the session to a verified identity and device fingerprint.

  • Rejecting candidates for fraud on gut feel, without an evidence pack, reviewer accountability, and a documented path to clear candidates quickly.

Implementation Runbook

1. Define risk tiers and triggers. Owner: Recruiting Ops (policy) and Security (thresholds). SLA: 2 business days. Evidence: policy version, approvers, and effective date in the immutable event log.
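
A hedged sketch of how the policy publication event might be recorded, assuming a simple hash-chained, append-only log; the entry fields mirror the evidence listed above, and the effective date is illustrative.

import hashlib
import json
from datetime import datetime, timezone

def append_event(log: list, payload: dict) -> dict:
    """Append a tamper-evident entry: each entry hashes its payload plus the previous entry's hash."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = {
        "payload": payload,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    body["entry_hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return body

event_log = []
append_event(event_log, {
    "event": "fraud_defense_policy_published",
    "policy_version": "2025-12-20",  # matches the YAML policy below
    "approvers": ["recruiting-ops@company.com", "secops@company.com"],
    "effective_date": "2026-01-05",  # illustrative
})
print(event_log[-1]["entry_hash"])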

2. Identity gate before any privileged step. Owner: Recruiting Ops. SLA: verification completed in 2-3 minutes before interview start, with access expiry after 24 hours. Evidence: liveness, document auth, and face match outcomes with timestamps and session IDs.
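
A minimal sketch of the gate check, assuming the verification service exposes per-check outcomes with completion timestamps; the function and field names are hypothetical.

from datetime import datetime, timedelta, timezone

REQUIRED_CHECKS = {"document_auth", "liveness", "face_match"}
EXPIRES_AFTER = timedelta(hours=24)

def identity_gate_open(identity_events: list, now: datetime) -> bool:
    """Grant access to a privileged step only if every required check passed within the expiry window."""
    passed = {
        e["check"]
        for e in identity_events
        if e["outcome"] == "pass" and now - e["completed_at"] <= EXPIRES_AFTER
    }
    return REQUIRED_CHECKS <= passed

now = datetime.now(timezone.utc)
events = [
    {"check": "document_auth", "outcome": "pass", "completed_at": now - timedelta(hours=2)},
    {"check": "liveness", "outcome": "pass", "completed_at": now - timedelta(hours=2)},
    {"check": "face_match", "outcome": "pass", "completed_at": now - timedelta(hours=2)},
]
print(identity_gate_open(events, now))  # True: all checks passed within 24 hours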

3. Bind sessions to device fingerprints and enforce replay resistance. Owner: Security. SLA: real-time. Evidence: fingerprint hash, session reuse detection, failed challenge attempts.
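
A hedged sketch of session-to-fingerprint binding, assuming a fingerprint hash is computed at verification time and re-checked on every join; the hash inputs are deliberately simplified and illustrative.

import hashlib

def fingerprint_hash(user_agent: str, platform: str, timezone_name: str) -> str:
    """Illustrative fingerprint: in practice many more (and more stable) device signals feed the hash."""
    return hashlib.sha256(f"{user_agent}|{platform}|{timezone_name}".encode()).hexdigest()

# Bind the session at identity verification time ...
bound = {"session_id": "sess-123", "fingerprint": fingerprint_hash("Chrome/131", "macOS", "America/New_York")}

# ... then reject joins where the fingerprint no longer matches (possible link forwarding or replay).
def allow_join(session: dict, observed_fingerprint: str) -> bool:
    return session["fingerprint"] == observed_fingerprint

print(allow_join(bound, fingerprint_hash("Chrome/131", "macOS", "America/New_York")))  # True
print(allow_join(bound, fingerprint_hash("Firefox/133", "Windows", "Asia/Kolkata")))   # False: step up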

4. Capture behavioral telemetry in interviews and assessments. Owner: Security (schema) and Hiring Manager (rubric discipline). SLA: continuous capture. Evidence: focus loss, paste bursts, execution telemetry, rubric submission timestamp.
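
A minimal sketch of the capture schema, reusing the required_events names from the YAML policy below; the record shape itself is an assumption, not a fixed spec.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TelemetryEvent:
    """One captured signal, keyed to the candidate session so it can be correlated across steps later."""
    session_id: str
    event_type: str      # e.g. "focus_loss_count", "paste_events", "execution_telemetry"
    value: float
    captured_at: str

event = TelemetryEvent(
    session_id="sess-123",
    event_type="focus_loss_count",
    value=7,
    captured_at=datetime.now(timezone.utc).isoformat(),
)
print(asdict(event))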

5. Trigger step-up verification on correlated anomalies. Owner: Security. SLA: 30 minutes in business hours, 4 hours off-hours. Evidence: trigger reason codes, correlated signals, reviewer identity.
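
A hedged sketch of evaluating the step_up_triggers from the YAML policy below against the signals observed for one candidate; the signal names match the policy, while the evaluation code is illustrative.

def fired_triggers(triggers: list, observed_signals: set) -> list:
    """Return the triggers whose if_any / if_all conditions are met by the observed signals."""
    fired = []
    for t in triggers:
        if "if_any" in t and observed_signals & set(t["if_any"]):
            fired.append(t)
        elif "if_all" in t and set(t["if_all"]) <= observed_signals:
            fired.append(t)
    return fired

triggers = [
    {"id": "replay-suspected", "if_any": ["liveness_failed", "video_stream_anomaly"],
     "action": "step_up_verification", "queue_sla_minutes": 30},
    {"id": "proxy-suspected", "if_all": ["device_fingerprint_changed", "voice_or_face_mismatch"],
     "action": "manual_review", "queue_sla_hours": 4},
]

observed = {"device_fingerprint_changed", "voice_or_face_mismatch"}
for t in fired_triggers(triggers, observed):
    print(t["id"], "->", t["action"])  # proxy-suspected -> manual_review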

6. Manual review queue with false positive management. Owner: Recruiting Ops (queue ops) and Security (final adjudication). SLA: 1 business day standard, 4 hours urgent. Evidence: evidence pack link, notes, disposition, escalation path.
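
A minimal sketch of the gate in front of a fraud disposition, assuming the required fields from the YAML policy's disposition_controls; the helper names and simplified SLA math are illustrative.

from datetime import datetime, timedelta, timezone

REQUIRED_FOR_REJECT = {"evidence_pack_id", "reviewer_user_id", "reason_code"}

def can_reject_for_fraud(case: dict) -> bool:
    """Block reject-for-fraud dispositions unless every required evidence field is present and non-empty."""
    return all(case.get(field) for field in REQUIRED_FOR_REJECT)

def sla_deadline(opened_at: datetime, urgent: bool) -> datetime:
    """1 business day standard, 4 hours urgent (simplified: business-day handling is omitted here)."""
    return opened_at + (timedelta(hours=4) if urgent else timedelta(days=1))

case = {"evidence_pack_id": "ep-789", "reviewer_user_id": "sec-42", "reason_code": "proxy-suspected"}
print(can_reject_for_fraud(case))                             # True: disposition may proceed
print(sla_deadline(datetime.now(timezone.utc), urgent=True))  # review due in 4 hours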

7. Write back the disposition to the ATS. Owner: Recruiting Ops. SLA: 15 minutes. Evidence: stage transition, decision owner, evidence pointer, access auto-revoked on review or confirmed fraud.
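
A hedged sketch of the write-back payload, assuming the ATS accepts stage transitions over a REST integration; the payload shape and field names are illustrative, not a specific ATS API.

import json
from datetime import datetime, timezone

# Illustrative write-back: the ATS stays the system of record for the decision,
# while the evidence pack lives in the verification/telemetry layer and is referenced by pointer.
writeback = {
    "candidate_id": "cand-456",
    "stage_transition": {"from": "final_interview", "to": "rejected_fraud"},
    "decision_owner": "secops@company.com",
    "evidence_pack_id": "ep-789",
    "access_revoked": True,
    "decided_at": datetime.now(timezone.utc).isoformat(),
}
print(json.dumps(writeback, indent=2))
# In practice this payload would be sent to the ATS integration, e.g. requests.post(ats_url, json=writeback).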

8. Weekly tuning. Owner: Analytics. SLA: weekly. Evidence: time-to-event breakdown, queue aging, trigger rates by source, false positive estimates.
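
A minimal sketch of the weekly cut, assuming closed review cases carry a trigger id, source channel, and final disposition; here the false positive estimate is simply the share of flagged cases later cleared.

from collections import Counter

# Illustrative closed review cases from the past week.
cases = [
    {"trigger": "proxy-suspected", "source": "job-board", "disposition": "cleared"},
    {"trigger": "proxy-suspected", "source": "job-board", "disposition": "confirmed_fraud"},
    {"trigger": "assessment-integrity", "source": "referral", "disposition": "cleared"},
]

trigger_rates_by_source = Counter((c["trigger"], c["source"]) for c in cases)
cleared = sum(1 for c in cases if c["disposition"] == "cleared")
false_positive_estimate = cleared / len(cases)  # flagged cases that were ultimately cleared

print(trigger_rates_by_source)
print(f"false positive estimate: {false_positive_estimate:.0%}")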

  • Use the YAML policy below as the contract between Recruiting Ops and Security so triggers, SLAs, and evidence requirements are explicit.

Close: Implementation Checklist

To implement tomorrow, prioritize controls that reduce fraud without stalling throughput. Business outcomes: reduced time-to-hire by avoiding late-stage rework, defensible decisions backed by evidence packs, lower fraud exposure via defense in depth, and standardized scoring across teams.

  • Gate access to interviews and assessments on a completed identity event, with access expiration by default.

  • Define step-up triggers based on correlated signals, not single anomalies.

  • Stand up a review queue with owners, SLAs, and required evidence pack fields before any reject-for-fraud action is allowed.

  • Require rubric submission and evidence-based scoring before advancement to final rounds.

  • Make the ATS the single source of truth for decisions, timestamps, and evidence pointers.

  • Review weekly: queue aging, SLA breaches, trigger rates by source, and false positive management outcomes.


Key takeaways

  • Treat every interview and assessment like privileged access: no identity gate, no access.
  • Fraud defense is a continuous loop across events, not a single check at application time.
  • The only scalable way to manage false positives is to standardize what triggers manual review and what evidence must be captured.
  • If it is not logged, it is not defensible: decisions need timestamps, reviewer identity, and linked artifacts.
Fraud Defense Policy-as-Code (SLAs + Evidence): YAML policy

A practical contract between Recruiting Ops and Security.

Defines identity gates, telemetry requirements, step-up triggers, and what must be logged before dispositions are allowed.

fraud_defense_policy:
  version: "2025-12-20"
  owners:
    recruiting_ops: "recruiting-ops@company.com"
    security: "secops@company.com"
    hiring_manager: "hiring-managers"
  identity_gate:
    required_before:
      - "live_interview"
      - "coding_assessment"
    target_time_seconds: 180
    expires_after_hours: 24
    checks:
      - "document_auth"
      - "liveness"
      - "face_match"
  telemetry_capture:
    required_events:
      - "session_start"
      - "device_fingerprint"
      - "network_risk"
      - "focus_loss_count"
      - "paste_events"
      - "execution_telemetry"
      - "rubric_submitted"
  step_up_triggers:
    - id: "replay-suspected"
      if_any:
        - "liveness_failed"
        - "video_stream_anomaly"
      action: "step_up_verification"
      queue_sla_minutes: 30
    - id: "proxy-suspected"
      if_all:
        - "device_fingerprint_changed"
        - "voice_or_face_mismatch"
      action: "manual_review"
      queue_sla_hours: 4
    - id: "assessment-integrity"
      if_any:
        - "paste_burst_high"
        - "execution_pattern_outlier"
      action: "manual_review"
      queue_sla_hours: 24
  disposition_controls:
    reject_for_fraud_requires:
      - "evidence_pack_id"
      - "reviewer_user_id"
      - "reason_code"
    auto_revoke_access_on:
      - "manual_review"
      - "confirmed_fraud"

Outcome proof: What changes

Before

Fraud suspicions were handled ad hoc across email and spreadsheets. Final interviews were frequently paused while recruiters reconstructed identity and assessment history from multiple tools, with inconsistent evidence quality and no uniform SLA for adjudication.

After

A risk-tiered funnel was implemented: identity gate before access, step-up verification on correlated anomalies, and a single ATS-anchored evidence pack per case. Review decisions moved into an SLA-bound queue with explicit Security adjudication and Recruiting Ops workflow ownership.

Governance Notes: Security and Legal signed off because the process enforces least-privilege access to interviews and assessments, keeps an immutable event log for reviewer accountability, and documents a false positive management path that avoids unsupported fraud accusations. Controls support GDPR/CCPA-ready data handling, and the platform runs on Google Cloud infrastructure audited for SOC 2 Type II and ISO 27001.

Implementation checklist

  • Define risk tiers (standard vs step-up) and map them to required verification events.
  • Set review-bound SLAs for each fraud queue and assign owners (Recruiting Ops vs Security).
  • Require an evidence pack before a reject-for-fraud decision is allowed.
  • Instrument device, session, and behavior signals and correlate them across interview + assessment.
  • Write the decision back into the ATS with timestamps and reviewer accountability.

Questions we hear from teams

What is the minimum viable control set to start catching deepfakes and proxies without slowing the funnel?
Implement an identity gate before access, capture device fingerprints and core behavioral telemetry by default, and route only correlated anomalies into a manual review queue with a defined SLA and required evidence pack.
How do we avoid false accusations while still acting quickly?
Use a two-stage model: automated step-up verification on triggers, then manual adjudication only when the evidence pack meets required fields. Require reason codes, reviewer identity, and documented clearing criteria so candidates can be resolved quickly and consistently.
What should be the source of truth for fraud dispositions?
The ATS should be the system of record for stage transitions and final dispositions, with pointers to the identity and telemetry evidence pack. The verification system remains the source of truth for identity events, but decisions must be written back into the ATS with timestamps.

Ready to secure your hiring pipeline?

Let IntegrityLens help you verify identity, stop proxy interviews, and standardize screening from first touch to final offer.


Watch IntegrityLens in action

See how IntegrityLens verifies identity, detects proxy interviewing, and standardizes screening with AI interviews and coding assessments.
