Our AI Principles

We believe AI in hiring must be transparent, fair, and accountable.

Our Principles

These principles guide every decision we make about how AI is used in the LayersRank platform.

01. Transparency

Every score is explainable. Every ranking decision maps to specific interview evidence, scoring criteria, and confidence levels. No black-box models that produce scores without rationale.
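To make this concrete, here is a minimal sketch of what a score that maps to evidence could look like as a data structure. The shape and field names are illustrative assumptions on our part, not the actual LayersRank schema:

```python
# Hypothetical sketch: field names are illustrative, not the real schema.
from dataclasses import dataclass, field

@dataclass
class EvidenceItem:
    criterion: str       # the scoring criterion this evidence supports
    transcript_ref: str  # pointer into the interview transcript
    excerpt: str         # the candidate response being cited

@dataclass
class ExplainableScore:
    dimension: str       # e.g. "system design"
    score: float         # numeric score for this dimension
    confidence: float    # model confidence in [0, 1]
    evidence: list[EvidenceItem] = field(default_factory=list)

    def rationale(self) -> str:
        """Render the score as a human-readable explanation."""
        lines = [f"{self.dimension}: {self.score:.1f} "
                 f"(confidence {self.confidence:.0%})"]
        lines += [f'  - {e.criterion}: "{e.excerpt}" ({e.transcript_ref})'
                  for e in self.evidence]
        return "\n".join(lines)
```

The point of the structure is that a score cannot exist without the evidence list that justifies it, which is what rules out black-box outputs.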

02. Fairness

No demographic inference. Pedigree-blind scoring that evaluates what candidates can do, not where they went to school or which company they came from. Evaluation criteria are explicit and consistently applied.

03. Human Oversight

AI recommends, humans decide. Our system provides structured evidence, confidence scores, and ranked shortlists, but final hiring decisions always rest with human decision-makers.
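As a sketch of how such a checkpoint can be enforced in software (the names and workflow here are our assumptions, not the production system), no outcome is communicable until a named human reviewer signs off:

```python
# Illustrative "AI recommends, humans decide" gate; not the real system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    candidate_id: str
    action: str              # e.g. "advance" or "reject"
    confidence: float
    evidence_summary: str

@dataclass
class Decision:
    recommendation: Recommendation
    reviewer: Optional[str] = None   # must be set by a human
    approved: Optional[bool] = None

    def finalize(self, reviewer: str, approved: bool) -> None:
        """Record the human sign-off."""
        self.reviewer = reviewer
        self.approved = approved

def can_notify_candidate(decision: Decision) -> bool:
    # No outcome reaches a candidate until a named reviewer has decided.
    return decision.reviewer is not None and decision.approved is not None
```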

04. Accountability

Regular bias audits and monitoring across all scoring dimensions. We track outcomes, measure fairness metrics, and publish findings. When issues are found, we act on them.

05. Candidate Respect

Informed consent before any AI-assisted evaluation. Candidates have the right to know how they are being assessed, the right to an explanation of their results, and the right to request human-only review.

What We Don't Do

Just as important as the principles themselves is what we explicitly refuse to build or enable.

  • No facial analysis for scoring — we never use video appearance or facial expressions as evaluation signals.
  • No demographic inference — we do not attempt to infer gender, ethnicity, age, or any protected characteristic from candidate data.
  • No penalizing non-native speakers — our scoring models are designed to evaluate substance and skill, not accent or fluency.
  • No automated rejection without human review — every rejection decision includes a human checkpoint before communication to candidates.
  • No selling candidate data — candidate information is used exclusively for the hiring evaluation they consented to. Period.

Bias Monitoring

Fairness is not a one-time checkbox. We continuously monitor and improve our models to detect and mitigate bias across all dimensions.

Regular Bias Audits

Systematic audits across gender, ethnicity, and institution. We analyze scoring distributions and outcomes to identify any patterns of unfair advantage or disadvantage.
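One concrete check such an audit might run is the four-fifths (disparate impact) rule: flag any group whose pass rate falls below 80% of the highest group's rate. The sketch below is our illustration of the idea, not LayersRank's audit code; consistent with the no-inference principle, group labels would have to come from voluntary, self-reported surveys:

```python
# Sketch of a disparate-impact ("four-fifths") audit check.
# Group labels are self-reported and consented, never inferred.
from collections import defaultdict

def pass_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, advanced_to_next_stage) pairs."""
    totals, passes = defaultdict(int), defaultdict(int)
    for group, passed in outcomes:
        totals[group] += 1
        passes[group] += int(passed)
    return {g: passes[g] / totals[g] for g in totals}

def disparate_impact_flags(rates: dict[str, float],
                           threshold: float = 0.8) -> dict[str, float]:
    """Return groups whose pass rate is below threshold x the best rate."""
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()
            if rate / best < threshold}
```

A flagged group is a starting point for investigation, not an automatic conclusion: the audit then examines the underlying scoring distributions to find where the gap originates.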

Dataset Diversity Monitoring

We track the diversity and representativeness of our training and evaluation datasets. Underrepresented groups are specifically monitored to prevent systematic disadvantage.

Customer Feedback Integration

Our customers are partners in fairness. We actively collect and incorporate feedback about scoring outcomes, edge cases, and potential bias into our improvement pipeline.

Published Bias Reports

We are committed to publishing regular bias monitoring reports so our customers and the broader community can hold us accountable.

Coming soon

Learn about our science

Explore the research and methodology behind our scoring and ranking systems.