Trust What You're Evaluating
Remote interviews create opportunities for shortcuts that in-person interviews don't. LayersRank tracks behavioral signals that indicate when responses might not be authentic -- so you can investigate before making decisions.
Integrity Monitor
Session Integrity -- Candidate #4829
Tab Focus: No switches detected
Response Timing: Consistent with thinking
Copy-Paste: No external paste events
Answer Coherence: Style consistent across questions
Video Presence: Single person, on-camera
AI Generation: No LLM patterns detected
The integrity challenge in remote hiring
Remote interviews transformed hiring efficiency. A process that took two weeks of scheduling now happens in 48 hours. But remote interviews also created vulnerabilities that didn't exist when everyone was in the same room.
In a face-to-face interview, you know the candidate is answering from their own knowledge. You can see their eyes aren't reading from notes. You can tell no one is whispering answers. The physical presence creates natural integrity.
In a remote interview, those constraints disappear. A candidate can have prepared answers on a second monitor. They can search the web for technical questions mid-interview. They can paste entire answers from ChatGPT. In extreme cases, someone else can take the interview entirely.
We don't believe most candidates do these things. The vast majority want to earn opportunities based on their actual abilities. But “most” isn't a hiring strategy.
The challenge:
Detecting potential integrity issues without creating a surveillance-heavy experience that treats every candidate like a suspect. The goal is appropriate visibility, not paranoid monitoring. We surface unusual patterns for your review. We don't make accusations or automatic rejections.
What we track
Four categories of behavioral signals, each with clear concern thresholds.
Paste Events
Every paste into a response field is logged: which question, content size, whether it appeared in the final response, and timing relative to question appearance.
Low concern: 1-2 pastes, small content, scattered
Moderate concern: 3-4 pastes, some on technical questions
High concern: 5+ pastes on difficult questions, large blocks, minimal editing
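For illustration, here is a minimal sketch of the kind of per-paste capture this implies, built on the browser's standard clipboard events. The field, interface, and log names are hypothetical assumptions, not LayersRank's actual implementation:

```ts
// Hypothetical sketch of per-paste metadata capture. Assumes one response
// <textarea> per question and a questionShownAt timestamp recorded when the
// question is rendered. Names are illustrative only.
interface PasteRecord {
  questionId: string;
  contentLength: number;        // size of pasted content, in characters
  msSinceQuestionShown: number; // timing relative to question appearance
  appearsInFinalResponse?: boolean; // resolved later, at submission time
}

const pasteLog: PasteRecord[] = [];

function trackPastes(
  field: HTMLTextAreaElement,
  questionId: string,
  questionShownAt: number,
): void {
  field.addEventListener("paste", (e: ClipboardEvent) => {
    const text = e.clipboardData?.getData("text") ?? "";
    pasteLog.push({
      questionId,
      contentLength: text.length,
      msSinceQuestionShown: Date.now() - questionShownAt,
    });
  });
}
```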
Tab Switches and Window Changes
Logged when candidates navigate away: number of switches, duration, active question, and timing within the question. The analysis considers timing (immediate vs. mid-response), duration (brief vs. extended), and correlation with question difficulty.
Low concern: 3-5 brief switches, scattered
Moderate concern: 5-8 switches, some extended, some difficulty correlation
High concern: 10+ switches, consistently extended, strongly correlated with hard questions
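A sketch of how this kind of tab-switch tracking can work in the browser, using the standard Page Visibility API. The data structure and helper names are illustrative assumptions:

```ts
// Illustrative tab/window-focus tracking via the Page Visibility API.
// Each record captures when focus left, how long it was away, and where
// in the active question the switch occurred.
interface TabSwitchRecord {
  questionId: string;
  leftAt: number;         // epoch ms when the tab lost visibility
  durationMs: number;     // how long the candidate was away
  msIntoQuestion: number; // timing within the question (immediate vs. mid-response)
}

const switchLog: TabSwitchRecord[] = [];
let hiddenAt: number | null = null;

function trackTabSwitches(
  getActiveQuestion: () => { id: string; shownAt: number },
): void {
  document.addEventListener("visibilitychange", () => {
    const now = Date.now();
    if (document.visibilityState === "hidden") {
      hiddenAt = now;
    } else if (hiddenAt !== null) {
      const q = getActiveQuestion();
      switchLog.push({
        questionId: q.id,
        leftAt: hiddenAt,
        durationMs: now - hiddenAt,
        msIntoQuestion: hiddenAt - q.shownAt,
      });
      hiddenAt = null;
    }
  });
}
```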
Typing Pattern Analysis
We analyze words per minute, speed consistency, pause patterns, backspace frequency, and time-to-first-keystroke. Authentic composition has a signature: pauses while thinking, bursts as ideas come, corrections, variable speed. Copied content has a different rhythm.
Speed anomalies: 150+ WPM with no paste events suggests undetected copy-paste (avg human: 40-50 WPM)
Consistency anomalies: A robotically constant speed throughout a 300-word response is unusual
Pause anomalies: A complex question answered with no pause longer than 2 seconds is unexpected
Correction anomalies: Zero backspaces in a long response is statistically unusual
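To make the feature set concrete, here is a rough sketch of computing these signals from keystroke timestamps captured inside a response field only. The thresholds in the trailing comments mirror the heuristics above; everything else is an illustrative assumption:

```ts
// Typing-pattern features derived from in-field keystroke timestamps.
// Assumes strokes is non-empty. All names are hypothetical.
interface Keystroke { at: number; isBackspace: boolean }

interface TypingFeatures {
  wpm: number;              // words per minute, assuming ~5 characters per word
  longestPauseMs: number;
  backspaceCount: number;
  msToFirstKeystroke: number;
}

function typingFeatures(strokes: Keystroke[], questionShownAt: number): TypingFeatures {
  const first = strokes[0].at;
  const last = strokes[strokes.length - 1].at;
  const minutes = Math.max((last - first) / 60_000, 1 / 60); // avoid divide-by-zero
  const chars = strokes.filter(k => !k.isBackspace).length;
  let longestPauseMs = 0;
  for (let i = 1; i < strokes.length; i++) {
    longestPauseMs = Math.max(longestPauseMs, strokes[i].at - strokes[i - 1].at);
  }
  return {
    wpm: chars / 5 / minutes,
    longestPauseMs,
    backspaceCount: strokes.filter(k => k.isBackspace).length,
    msToFirstKeystroke: first - questionShownAt,
  };
}

// Example heuristics from the list above:
//   wpm > 150 with an empty paste log  -> possible undetected copy-paste
//   longestPauseMs < 2_000 on a hard question -> unexpected
//   backspaceCount === 0 on a long response   -> statistically unusual
```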
Response Timing
Total time per question, thinking time before first interaction, and comparison to expected duration by difficulty. A comprehensive system design answer in 90 seconds for a difficulty-9 question is a signal worth investigating.
Abnormally fast: Response quality inconsistent with response time
Abnormally slow: Extended delays correlating with tab switches
Inverse difficulty: Easy questions taking longer than hard ones suggests selective lookup
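A minimal sketch of the timing comparison follows, with invented expected-duration bands per difficulty level for illustration; these are not LayersRank's calibrated values:

```ts
// Observed duration vs. an expected range keyed by question difficulty.
// Bands and multipliers below are assumptions, chosen to echo the examples
// in the report section (e.g. 2m14s against an expected 6-8 minutes).
const expectedRangeSec: Record<number, [number, number]> = {
  3: [60, 180],   // easy
  6: [180, 360],  // moderate
  9: [360, 480],  // hard, e.g. a comprehensive system design answer
};

function timingFlag(difficulty: number, observedSec: number): "fast" | "slow" | "ok" {
  const [min, max] = expectedRangeSec[difficulty] ?? [0, Infinity];
  if (observedSec < min * 0.5) return "fast"; // quality-vs-time mismatch worth review
  if (observedSec > max * 2) return "slow";   // worth correlating with tab switches
  return "ok";
}
```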
Face verification
Optional. For organizations requiring identity confirmation. Answers one question only: is the person taking this interview the person who's supposed to be taking it?
Identity confirmation at start
The candidate takes a photo via webcam, which is compared against a government ID, photos from previous interactions, or their LinkedIn profile photo. This catches the scenario where someone else takes the interview on the candidate's behalf.
Periodic verification during interview
Optional brief prompts (“Please confirm you're still there”) with a 3-5 second camera capture. Catches the scenario where the legitimate candidate hands off partway through. Configurable frequency.
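For the curious, a hedged sketch of what a periodic capture might look like in the browser, using the standard getUserMedia/canvas pattern. The endpoint and cadence are assumptions, not the actual API:

```ts
// Capture one webcam frame from a <video> element that is already streaming
// from navigator.mediaDevices.getUserMedia().
async function captureFrame(video: HTMLVideoElement): Promise<Blob> {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")!.drawImage(video, 0, 0);
  return new Promise(resolve => canvas.toBlob(b => resolve(b!), "image/jpeg"));
}

function startPeriodicVerification(video: HTMLVideoElement, intervalMs: number): void {
  setInterval(async () => {
    // A brief "Please confirm you're still there" prompt would be shown here.
    const frame = await captureFrame(video);
    await fetch("/api/verify-presence", { method: "POST", body: frame }); // hypothetical endpoint
  }, intervalMs);
}
```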
What face verification does NOT do
Face verification is identity-only. It does not analyze facial expressions, infer emotions, or contribute to candidate scoring (see Privacy-First Design below).
The integrity report
The integrity section summarizes behavioral signals in a clear, actionable format.
Clean Report
INTEGRITY SUMMARY
Paste Events: 2 (minor -- name field, one short phrase)
Tab Switches: 4 (brief, scattered, no pattern)
Typing Pattern: Normal variation
Response Timing: Within expected ranges
Face Verification: Confirmed
FLAG STATUS: NONE
All behavioral signals within normal parameters.
Flagged Report
INTEGRITY SUMMARY
Paste Events: 6
- Q3 (Technical): 287 chars, minimal editing
- Q5 (Technical): 412 chars, no editing
- Q8 (Technical): 194 chars, minor editing
Tab Switches: 14 (avg 38s, 9 within 10s of question)
Typing: Q5 at 127 WPM, Q8 at 143 WPM, 0 backspaces
Timing: Q5 done in 2m14s (expected 6-8m)
FLAG STATUS: REVIEW RECOMMENDED
Concerning patterns on Q3, Q5, Q8.
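One plausible way to represent these summaries as structured data is sketched below; field names are illustrative, not the actual LayersRank report schema:

```ts
// Hypothetical shape for the integrity summary shown above.
type FlagStatus = "NONE" | "REVIEW RECOMMENDED";

interface IntegritySummary {
  pasteEvents: {
    questionId: string;
    chars: number;
    editing: "none" | "minimal" | "minor" | "substantial";
  }[];
  tabSwitches: { count: number; avgDurationSec: number; nearQuestionStart: number };
  typingAnomalies: string[]; // e.g. "Q5 at 127 WPM, 0 backspaces"
  timingAnomalies: string[]; // e.g. "Q5 done in 2m14s (expected 6-8m)"
  faceVerification?: "confirmed" | "failed" | "not enabled";
  flagStatus: FlagStatus;
}
```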
What flagging means (and doesn't mean)
A flag is information, not a verdict. Behavioral patterns deviated from normal in ways that warrant human review before making advancement decisions.
Possible explanations for flagged behavior
Actual integrity violation
Used external resources, had assistance, or didn't demonstrate authentic capability.
Unusual but legitimate behavior
An exceptionally fast typist. A multi-monitor setup triggering tab-switch detection. Composing in a notes app, then pasting.
Technical artifacts
Browser extensions, accessibility tools, or system configurations creating false signals.
Preparation, not cheating
Reviewed similar questions beforehand and delivered practiced responses. Thorough preparation can produce patterns that resemble cheating.
How to handle flags
Review and proceed
Examine patterns, conclude they're not concerning, advance normally.
Review and verify
Advance but probe flagged areas live. "Can you walk me through that same problem?" If they reproduce the quality, it was likely a false positive.
Request clarification
"We noticed some unusual patterns. Can you help us understand your workflow?" Give honest candidates a chance to explain.
Review and decline
Strongly concerning patterns for a high-trust role. Document the specific behavioral patterns that informed the decision.
Privacy-First Design
What we don't do
Integrity detection is designed to surface signals, not to surveil. We explicitly avoid approaches we consider overreaching.
No automatic rejections
Flags go to human reviewers. The platform provides information. Humans make decisions.
No facial expression analysis
Research shows this is unreliable and potentially biased. Face verification is identity-only.
No voice stress analysis
This technology doesn't work reliably and creates legal and ethical concerns.
No audio monitoring
We don't listen for other voices or background sounds. Doing so would be invasive, unreliable, and a privacy risk.
No network traffic analysis
We don't monitor what other sites candidates visit. Only our interface.
No continuous screen recording
We see what happens in our interface. We don't surveil broader computer activity.
No keystroke logging
Typing patterns only within our response fields. No keyloggers, no external monitoring.
No post-session access
The interview ends, our visibility ends. No ongoing device access.
Our philosophy: Detect enough to surface legitimate concerns. Don't invade privacy beyond what's necessary. Treat candidates as professionals unless specific evidence suggests otherwise.
Configuring integrity detection
You control how integrity detection operates for your interviews.
Sensitivity levels
Standard
Flags clearly unusual patterns. Balanced between catching concerns and minimizing false positives. Appropriate for most roles.
Strict
Flags borderline patterns. Higher sensitivity, more false positives. For high-stakes, security-sensitive, or regulated roles.
Minimal
Only extreme anomalies. Lower sensitivity, fewer false positives. When candidate experience is paramount.
Feature toggles
Enable or disable specific tracking per your needs.
Alert configuration
Control who is notified when a session is flagged, and how.
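Putting the three together, a hypothetical configuration object might look like the sketch below; keys and defaults are illustrative, not the actual LayersRank config schema:

```ts
// Hypothetical integrity configuration combining sensitivity levels,
// feature toggles, and alert settings.
interface IntegrityConfig {
  sensitivity: "minimal" | "standard" | "strict";
  features: {
    pasteTracking: boolean;
    tabSwitchTracking: boolean;
    typingAnalysis: boolean;
    responseTiming: boolean;
    faceVerification: boolean;
    periodicVerification: boolean;
  };
  alerts: {
    notifyOnFlag: boolean; // alert reviewers when a session is flagged
    recipients: string[];  // who receives flag notifications
  };
}

const defaultConfig: IntegrityConfig = {
  sensitivity: "standard",   // balanced default for most roles
  features: {
    pasteTracking: true,
    tabSwitchTracking: true,
    typingAnalysis: true,
    responseTiming: true,
    faceVerification: false, // optional, off by default
    periodicVerification: false,
  },
  alerts: { notifyOnFlag: true, recipients: ["recruiting-team@example.com"] },
};
```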
Candidate disclosure
We strongly recommend transparency. The interview landing page can include a customizable disclosure:
“This interview session tracks behavioral patterns including paste events, tab switches, and typing patterns to ensure evaluation integrity. By proceeding, you acknowledge this monitoring.”
Transparency deters casual cheating, sets expectations, avoids candidates feeling surveilled without knowledge, and addresses legal disclosure requirements in some jurisdictions.
Handling flagged candidates: a framework
Review the specific patterns
What exactly was flagged? Paste events on which questions? Tab switches with what timing? Read the details, not just the flag status.
Assess severity
Single anomaly or consistent pattern? Concentrated on high-stakes questions or scattered? Does the combination tell a coherent story, or could each signal have an innocent explanation?
Consider role context
What's the cost of a false negative (advancing a cheater) vs. a false positive (rejecting an honest candidate)? High-trust roles err toward caution. High-volume roles give more benefit of the doubt.
Decide on response
Proceed normally, verify in subsequent round, request re-take under stricter conditions, or decline to advance. Match response to severity and role requirements.
Document your reasoning
Whatever you decide, note why. Protects you if decisions are questioned later and helps calibrate your process over time.
Honest Assessment
Does integrity detection work?
An honest assessment of what we know and don't know.
What we can measure
~8% flag rate
About 8% of interviews receive an integrity flag. Of those:
- ~60% are minor flags, typically dismissed after review
- ~30% are moderate flags that prompt verification
- ~10% are significant flags that influence decisions
~20% estimated false-positive rate
Based on flagged candidates who were advanced and performed well.
What we believe based on evidence
Patterns we flag correlate with concerning behavior. The combination of paste events, tab switches, timing anomalies, and typing patterns that trigger high-severity flags is unlikely to occur through innocent behavior.
Sophisticated cheating can evade detection. A determined candidate who types pre-memorized answers, uses a separate device, and has invisible help could potentially evade our detection. We catch common patterns, not every possible evasion.
Most candidates are honest. The 92% clean rate confirms this. Actual integrity violations are a small minority -- but that small minority can do real damage if undetected.
The balance: security versus experience
Integrity detection involves real trade-offs. The right balance depends on context.
Too aggressive
Creates a hostile, surveillance-heavy experience. Good candidates who value their dignity may decline. Your employer brand suffers. You lose talent to competitors who treat candidates more respectfully.
Right balance
LayersRank defaults to a balanced approach appropriate for most professional hiring. Adjust sensitivity up or down based on role requirements, candidate relationship, and organizational risk tolerance.
Too permissive
Creates exploitable gaps. Candidates who cheat gain unfair advantage. Evaluations don't reflect actual capability. You make decisions based on false information. Bad hires result.
Frequently asked questions
Do candidates know they're being monitored?
We recommend transparency. Default interview instructions include disclosure of behavioral monitoring. Candidates who proceed have acknowledged the monitoring.
Can tech-savvy candidates bypass detection?
Some evasion is possible. Using a separate device for lookups, typing pre-written answers manually, having someone else in the room whispering answers -- these might not trigger our specific signals. We catch the common, easy shortcuts. And when the easy shortcuts carry real risk, most people who would have cheated casually choose not to.
What about candidates with disabilities?
Contact us before the interview. We can adjust sensitivity settings, disable specific detection features, or provide alternative formats. Some patterns that trigger flags (unusual typing patterns, extended pauses, use of assistive technology) might be normal for candidates with certain disabilities. We configure appropriately when informed.
Is this legal?
Monitoring candidate behavior during an interview you invited them to take is generally legal, especially with disclosure. However, laws vary by jurisdiction. GDPR in the EU, CCPA in California, and various employment laws may impose requirements around disclosure, consent, and data handling. We recommend consulting your legal team. We provide disclosure templates and data handling documentation to support compliance.
What happens to integrity data?
Integrity data is retained alongside interview responses, following your configured data retention policy (typically 1-2 years). Candidates can request deletion under applicable privacy laws. We honor such requests within required timeframes. Integrity data is not shared externally or used beyond the specific hiring evaluation.
Can I see integrity data for candidates I didn't flag?
Yes. Configure reports to include an integrity summary for all candidates, not just flagged ones. This gives you visibility into normal patterns, which helps calibrate your interpretation of flagged patterns.
Hire with confidence
See how integrity detection works in practice. Book a demo and we'll walk through flagged and clean report examples -- exactly what you'd see for real candidates.