Stop Identity Farming in Hiring Assessments: An Ops Playbook
Identity farming turns your assessment funnel into a credentialing service for fraudsters. This playbook shows how Support and CS leaders can detect and contain it without slowing legitimate candidates.

Identity farming is a correlation problem. Fix it with continuous risk scoring and step-up checks at the moments that matter.
The identity farming pattern you can actually detect
Treat identity farming as a repeatable pattern: one real person creates a "clean" verification once, then reuses their face and voice to shepherd many accounts through assessments. The tell is correlation, not a single failed check. You are looking for clusters: repeated device fingerprints, shared networks, unusually consistent completion timing, and session concurrency that is hard for a genuine candidate to reproduce. Start with passive signals because they are low friction and fast. Step-up verification should be a response to risk, not a default tax on every candidate.
Device reuse across different applicant profiles (same fingerprint, same OS and browser build).
Network overlap that is statistically weird for your funnel: same IP, same ASN, same VPN exit patterns.
Behavioral sameness: near-identical time-to-first-keystroke, copy-paste bursts, tab switching cadence.
Concurrency: multiple candidates active in assessments within overlapping windows from the same device or network.
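The first of these signals, device reuse, reduces to a simple grouping query. Here is a minimal sketch of cluster detection over assessment events; the tuple layout and the `min_subjects`/`window_days` defaults are illustrative assumptions matching the thresholds used later in this post, not a fixed schema.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def find_device_clusters(events, min_subjects=3, window_days=14):
    """Flag device fingerprints seen with too many distinct candidates.

    `events` is an iterable of (timestamp, device_fingerprint_hash, subject_id)
    tuples. Returns {device_hash: sorted list of subject IDs} for every
    fingerprint associated with at least `min_subjects` distinct candidates
    inside the lookback window.
    """
    cutoff = datetime.now() - timedelta(days=window_days)
    subjects_by_device = defaultdict(set)
    for ts, device_hash, subject_id in events:
        if ts >= cutoff:
            subjects_by_device[device_hash].add(subject_id)
    return {
        device: sorted(subjects)
        for device, subjects in subjects_by_device.items()
        if len(subjects) >= min_subjects
    }
```

The same shape works for network overlap (key by ASN or VPN exit instead of fingerprint); the point is that the trigger is the cluster, not any single event.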
A step-by-step architecture to stop farming without slowing hiring
Implement this as a Risk-Tiered Verification ladder. You want high throughput for low-risk candidates and deterministic containment for clustered risk.
Step 1: Establish an "Assessment Identity" checkpoint. Bind the assessment session to a verified candidate identity with a short re-auth at assessment start. Do not rely on the earlier application verification alone, because farming often happens after the initial gate.
Step 2: Score passive signals continuously. Update risk during the assessment, not only before it. Verification is a state that can change when the device, network, or behavior changes.
Step 3: Trigger step-up checks on clusters. When you detect correlation (the same device across multiple applicants, or concurrent sessions), require stronger proof at the moment of value: before score submission or before results are released to the hiring team.
Step 4: Add a controlled retake policy. Identity farming thrives when retakes are unlimited and unstructured. Allow retakes only through Support-mediated workflows tied to Evidence Packs and reason codes.
Step 5: Define fallbacks for real candidates. If an ID scan fails or a camera is unavailable, route to an accessibility-safe alternative (manual doc review, scheduled live verification) with SLA targets so Support does not become the bottleneck.
Step 6: Close the loop with hiring decisions. If a candidate is flagged, do not let managers "work around" the system. Provide an explicit waiver mechanism that logs who approved, why, and what evidence was reviewed.
Tier 0 (low risk): passive signals only, no extra friction.
Tier 1 (moderate): quick liveness re-check at assessment start (face + voice), confirm continuous session.
Tier 2 (high): liveness re-check at start and at score submission, plus device and network lock during session.
Tier 3 (critical cluster): block auto-scoring release, require live proctored verification or supervised retake.
Favor cluster-based triggers over single-signal triggers to reduce false positives.
Set an upper bound on manual review volume by gating which cases enter the queue (for example, only Tier 2+). This is an illustrative control, not a benchmark.
Use reason codes that map to candidate-facing explanations: "shared device cluster," "concurrent sessions," "liveness mismatch," "network anomaly."
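The tier assignment itself can be a few lines of deterministic code. This sketch uses the rule weights and thresholds from the policy skeleton later in this post (60 for a device cluster, 40 for concurrency, 25 for a network anomaly; tier cutoffs at 25/60/90); treat the numbers as tunable examples, not benchmarks.

```python
# Rule weights mirror the add_risk values in the policy skeleton; tune per funnel.
RULE_RISK = {
    "cluster-device": 60,
    "concurrent-sessions": 40,
    "network-anomaly": 25,
}

def risk_tier(triggered_rule_ids):
    """Sum the risk contributions of triggered rules and map to a tier.

    Returns (tier, score): tier 0 = passive only, 1 = quick liveness re-check,
    2 = re-check plus device/network lock, 3 = hold score for proctored review.
    """
    score = sum(RULE_RISK.get(rule, 0) for rule in triggered_rule_ids)
    if score >= 90:
        return 3, score
    if score >= 60:
        return 2, score
    if score >= 25:
        return 1, score
    return 0, score
```

Because the mapping is pure and deterministic, you can replay any historical decision from the Evidence Pack's `triggered_rules` list and the policy version in force at the time.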
A concrete policy you can ship and audit
Below is a deployable policy skeleton Support can point to when candidates appeal. It encodes step-up triggers, evidence requirements, and fallbacks, and it avoids storing toxic data by referencing Zero-Retention Biometrics outputs and hashed identifiers. Treat this like configuration, not a PDF. If it is not idempotent and versioned, you will not be able to explain past decisions during disputes.
Anti-patterns that make fraud worse
These are the three patterns that reliably create both more fraud and more Support tickets.

One-time verification at application, then trust forever through interviews and assessments.
Unlimited retakes with no identity re-check and no reason codes.
Manual reviewer "gut feel" decisions without Evidence Packs or ATS-linked audit logs.
How Support resolves disputes without guessing
Your goal is consistent outcomes under pressure. Candidates will argue, managers will escalate, and Legal will ask what you knew and when. Make the Evidence Pack the unit of work. Every flagged candidate should have a single bundle: timestamps, signal summaries, verification outcomes, and the exact policy version applied. Support should never hunt across tools mid-call. Operate with an SLA-bound queue: fast initial response, clear next steps, and deterministic dispositions (clear, step-up required, supervised retake, decline). Keep communications factual and avoid accusations. You are enforcing process integrity, not litigating intent.
If only one weak signal: request Tier 1 step-up and proceed if passed.
If cluster detected (shared device across applicants): require Tier 2 step-up before releasing results.
If liveness mismatch during step-up: halt, offer supervised retake or manual review fallback.
If repeated cluster behavior after retake: decline with documented evidence and policy references.
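The four rules above are a priority-ordered decision list, which keeps dispositions deterministic under pressure. A minimal sketch, assuming an illustrative `signals` dict (the key names are assumptions for this example, not a product schema):

```python
def disposition(signals):
    """Map detection context to one of the dispositions described above.

    Rules are checked in severity order, so the strictest applicable
    outcome always wins.
    """
    if signals.get("repeat_cluster_after_retake"):
        return "decline"
    if signals.get("liveness_mismatch"):
        return "supervised_retake_or_manual_review"
    if signals.get("cluster_detected"):
        return "tier2_step_up_before_release"
    if signals.get("weak_signal_count", 0) >= 1:
        return "tier1_step_up"
    return "clear"
```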
Explain what happened in plain language: "We need to confirm the assessment was completed by the applicant."
Offer a fallback path and timeline. Ambiguity creates tickets.
Log every interaction to the ATS profile so the hiring team does not re-litigate the case.
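Making the Evidence Pack the unit of work is easier when it is generated, not assembled by hand. This sketch builds one bundle per flagged candidate using the field names from the `evidence_pack` section of the policy skeleton; the JSON shape and function signature are illustrative.

```python
import json
from datetime import datetime, timezone

def build_evidence_pack(candidate_id, policy_version, risk_score,
                        triggered_rules, verification_outcomes, disposition):
    """Assemble the single bundle Support works from during a dispute.

    Fields follow the evidence_pack section of the policy config so a
    reviewer can tie any decision back to the exact policy version applied.
    """
    pack = {
        "candidate_id": candidate_id,
        "policy_version": policy_version,
        "risk_score": risk_score,
        "triggered_rules": triggered_rules,
        "verification_outcomes": verification_outcomes,
        "support_disposition": disposition,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(pack, indent=2)
```

Attach the serialized pack to the ATS profile at decision time, so Support never hunts across tools mid-call.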
Where IntegrityLens fits
CISOs use it to get defensible logs, Evidence Packs, and policy versioning.
Support and CS teams use it to resolve appeals quickly with clear reason codes and timelines.
The platform supports 24/7 AI interviews and assessments in 40+ programming languages, with typical end-to-end verification (document + voice + face) completing in 2-3 minutes.
Privacy-first controls like Zero-Retention Biometrics and 256-bit AES encryption help Legal sign off.
Sources
Checkr, "Hiring Hoax" Manager Survey (2025): 31% of hiring managers report interviewing someone later found using a false identity
https://checkr.com/resources/articles/hiring-hoax-manager-survey-2025
Key takeaways
- Treat identity as a continuous state across interview and assessment, not a one-time gate.
- Use passive signals (device, network, behavior) to detect farming patterns early with low friction.
- Trigger step-up verification only when risk signals justify it to protect candidate experience.
- Give Support an Evidence Pack per candidate so escalations can be resolved without guesswork.
- Design fallbacks (manual review, alternate doc capture) so real candidates are not trapped.
Versioned policy config that ties passive-signal clusters to step-up verification, defines fallbacks, and generates an Evidence Pack reference for every decision.
Designed for operator use: predictable thresholds, bounded manual review, and candidate-friendly outcomes.
```yaml
version: "2026-04-11"
policy_id: "assessment-identity-v1"
scope:
  stages:
    - "assessment-start"
    - "assessment-in-progress"
    - "assessment-submit"
identity_binding:
  subject_id_source: "ats.candidate_id"
  session_id_source: "assessment.session_id"
  require_reauth_on_start: true
  reauth_method: "face+voice-liveness"
  expected_duration_seconds_max: 180  # verification should complete quickly; tune per funnel
passive_signals:
  collect:
    - "device_fingerprint_hash"
    - "ip_address"
    - "asn"
    - "geo_country"
    - "user_agent_hash"
    - "session_concurrency"
    - "tab_focus_loss_rate"
    - "copy_paste_events"
  retention:
    biometrics: "zero-retention"  # store only match outcome + audit events
    raw_images_audio: "none"
    signal_hashes_days: 30
risk_scoring:
  model: "rules-v1"
  rules:
    - id: "cluster-device"
      when:
        any:
          - "device_fingerprint_hash seen_with >= 3 distinct subject_id in 14d"
          - "user_agent_hash seen_with >= 5 distinct subject_id in 7d"
      add_risk: 60
      evidence:
        include: ["device_fingerprint_hash", "subject_id_list", "timestamps"]
    - id: "concurrent-sessions"
      when:
        any:
          - "session_concurrency > 1"
          - "assessment.session_overlap_minutes >= 10"
      add_risk: 40
      evidence:
        include: ["session_ids", "overlap_window"]
    - id: "network-anomaly"
      when:
        all:
          - "asn in high-risk-asn-list"
          - "geo_country != ats.declared_country"
      add_risk: 25
      evidence:
        include: ["ip_address", "asn", "geo_country", "declared_country"]
  thresholds:
    tier_0_allow_max_risk: 24
    tier_1_step_up_min_risk: 25
    tier_2_step_up_min_risk: 60
    tier_3_block_min_risk: 90
actions:
  tier_0:
    on_assessment_submit: "release_score"
  tier_1:
    on_assessment_submit:
      - "require_reauth(face+voice-liveness)"
      - "release_score_if_pass_else_support_queue"
  tier_2:
    on_assessment_submit:
      - "require_reauth(face+voice-liveness)"
      - "lock_device_fingerprint_for_session"
      - "hold_score_for_manual_review_if_fail"
  tier_3:
    on_assessment_submit:
      - "hold_score"
      - "route_to_supervised_retake"
manual_review:
  queue: "support-integrity-review"
  sla:
    first_response_minutes: 30
    resolution_hours: 24
  required_reason_codes:
    - "shared-device-cluster"
    - "concurrent-sessions"
    - "liveness-mismatch"
    - "accessibility-fallback"
  max_review_minutes_per_case: 15
fallbacks:
  if_camera_unavailable:
    - "scheduled_live_verification"
    - "manual_document_review"
  if_id_wont_scan:
    - "alternate_document_capture"
    - "support_assisted_submission"
evidence_pack:
  generate: true
  attach_to: "ats.candidate_profile"
  include:
    - "policy_id"
    - "policy_version"
    - "risk_score"
    - "triggered_rules"
    - "verification_outcomes"
    - "timestamps"
    - "support_disposition"
webhooks:
  idempotency_key: "${subject_id}:${session_id}:${stage}"
  events:
    - "verification.completed"
    - "risk.tier_changed"
    - "assessment.score_held"
    - "evidence_pack.ready"
```
Outcome proof: What changes
Before
Support handled frequent disputes after assessments when suspicious patterns were discovered late. Hiring managers escalated because results were already shared, and there was no single evidence bundle tying verification events to assessment sessions.
After
Risk-tiered step-up checks were triggered before scores were released for clustered-risk cases, and every flagged candidate had an ATS-linked Evidence Pack with reason codes and a clear fallback path.
Implementation checklist
- Define an "Assessment Identity" policy with clear step-up triggers and appeal rules.
- Instrument passive signals: device fingerprint, IP/ASN, session concurrency, keystroke cadence, tab focus loss.
- Create an SLA-bound review queue with reason codes and Evidence Packs.
- Add re-auth checkpoints at assessment start and at score submission for high-risk tiers.
- Publish a candidate-facing explanation for retakes and step-up checks.
- Audit log every decision and link it to the ATS profile.
Questions we hear from teams
- What is identity farming in hiring?
- Identity farming is a coordinated fraud pattern where one real person repeatedly completes verifications or assessments for many applicants, often by reusing the same device, network, and biometric presence to generate passing results.
- Why not just add strict proctoring for everyone?
- Universal strict proctoring increases false positives, candidate drop-off, and Support volume. A risk-tiered model uses passive signals to keep low-risk candidates fast while reserving higher-friction checks for clustered risk.
- What if a legitimate candidate shares a laptop or network?
- That is why cluster signals should trigger step-up verification and fallbacks, not automatic rejection. The Evidence Pack should document the exact signals and the candidate should have a clear path to confirm identity via an alternate method.
- Where should step-up verification happen to stop farming?
- Place it at assessment start (to bind the session) and at score submission for high-risk tiers (to prevent score laundering). This targets the moments of value without adding friction to the whole funnel.
Ready to secure your hiring pipeline?
Let IntegrityLens help you verify identity, stop proxy interviews, and standardize screening from first touch to final offer.
Watch IntegrityLens in action
See how IntegrityLens verifies identity, detects proxy interviewing, and standardizes screening with AI interviews and coding assessments.
