LayersRank
Technical Whitepaper · v1.0

The Science Behind Confidence Scoring

How TR-q-ROFNs transform candidate evaluation from guesswork into quantified, uncertainty-aware decisions.

Pages: 20 · Audience: Technical · Domain: Science & Research · Published: 2025

Abstract

Traditional interview scoring produces single numbers that hide uncertainty. A candidate who scores 74 might be a solid 74, or might be anywhere from 60 to 88 depending on which evaluator you ask. This paper introduces a mathematical framework for quantifying evaluation confidence using Type-Reduced q-Rung Orthopair Fuzzy Numbers (TR-q-ROFNs). We explain the theoretical foundations, demonstrate practical applications in hiring, and present validation data showing how confidence-aware scoring improves hiring decisions.

1 Introduction: The Problem with Traditional Scoring

1.1 The Illusion of Precision

When an interview panel reports that a candidate scored 74 / 100, this number carries an implicit claim of precision — that we know, with confidence, this candidate’s quality is exactly 74.

This is almost never true. The 74 is typically an average of subjective judgments that may vary significantly. Interviewer A might have given 82. Interviewer B might have given 66. The average is 74 — but what does that number actually mean?

1.2 Sources of Evaluation Uncertainty

Evaluator variance

Different evaluators, applying the same criteria, reach different conclusions. Research shows 15–25% disagreement rates between panels evaluating identical candidates.

Question ambiguity

Some questions elicit clearer signals than others. A technical implementation question produces more reliable signal than a hypothetical behavioral question.

Response ambiguity

Candidate responses vary in clarity. Some clearly demonstrate competence; others are genuinely ambiguous depending on interpretation.

Context dependence

The same response can indicate different things for different roles or contexts. Evaluation criteria are not perfectly transferable.

1.3 The Cost of Hidden Uncertainty

When scoring systems hide uncertainty, decision-makers cannot calibrate their confidence appropriately. They treat a contested 74 the same as a solid 74, leading to:

  • False confidence in marginal decisions. Borderline candidates get the same treatment as clear cases.
  • Inability to prioritize investigation. Without knowing which scores are uncertain, evaluators cannot focus follow-up on the right candidates.
  • Poor audit trails. After-the-fact review cannot distinguish reliable decisions from lucky guesses.
  • Systematic bias toward extremes. Aggregation of divergent opinions produces moderate scores that mask underlying disagreement.

1.4 Our Approach

Rather than reporting “74,” we report “74 ± 4, 87% confidence” — providing decision-makers with the information they need to interpret scores appropriately. The framework is based on Type-Reduced q-Rung Orthopair Fuzzy Numbers (TR-q-ROFNs), a mathematical structure that naturally represents partial and uncertain information.

2 Theoretical Foundations

1965 · Zadeh — Classical Fuzzy Sets

Elements have membership degree μ ∈ [0, 1]. A candidate might have μ = 0.74 membership in the "strong candidate" set. Better than binary, but cannot represent confidence in the assessment.

1986 · Atanassov — Intuitionistic Fuzzy Sets

Assigns each element a pair (μ, ν) where μ = membership, ν = non-membership, and μ + ν ≤ 1. The quantity π = 1 − μ − ν represents hesitation or indeterminacy.

Limitation: The constraint μ + ν ≤ 1 restricts expressiveness — cannot represent strong simultaneous evidence both for and against.

2017 · Yager — q-Rung Orthopair Fuzzy Sets

Relaxes the constraint to μ^q + ν^q ≤ 1, where q ≥ 1. This dramatically expands the representable space for complex, conflicting signals.

2.4 – 2.5 Why q = 2 (Pythagorean Fuzzy Sets)?

With q = 2, the constraint becomes μ² + ν² ≤ 1 — a unit circle. This allows representing conflicting signals that are impossible under classical intuitionistic sets:

Intuitionistic (q=1) — Invalid

μ = 0.8, ν = 0.5

0.8 + 0.5 = 1.3 > 1 ✗

Pythagorean (q=2) — Valid

μ = 0.8, ν = 0.5

0.64 + 0.25 = 0.89 ≤ 1 ✓

In candidate evaluation, we often encounter conflicting signals — strong technical skills (high μ) alongside concerning communication patterns (moderate ν). Pythagorean fuzzy sets represent this naturally.
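The constraint check is easy to express in code. The following minimal sketch (the function name and tolerance are illustrative, not part of the LayersRank API) reproduces the two cases above under q = 1 and q = 2:

```python
# Validity check under the q-rung orthopair constraint mu^q + nu^q <= 1.
# Function name and tolerance are illustrative, not part of the LayersRank API.
def is_valid_orthopair(mu: float, nu: float, q: int = 2, tol: float = 1e-9) -> bool:
    """Return True if (mu, nu) satisfies mu**q + nu**q <= 1."""
    if not (0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0):
        return False
    return mu**q + nu**q <= 1.0 + tol

print(is_valid_orthopair(0.8, 0.5, q=1))  # False: 0.8 + 0.5 = 1.3 > 1
print(is_valid_orthopair(0.8, 0.5, q=2))  # True:  0.64 + 0.25 = 0.89 <= 1
```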

3 The TR-q-ROFN Framework

3.1 Definition

A Type-Reduced q-Rung Orthopair Fuzzy Number is a triple:

A = (T, F, R)

T — Truth: evidence supporting positive evaluation
F — Falsity: evidence supporting negative evaluation
R — Refusal: degree of evaluation uncertainty

subject to: T² + F² + R² ≤ 1

3.3 Key Insight: Refusal as a Decision Signal

Why R matters

Traditional scoring has no analog to R — it forces a verdict even when evidence is ambiguous. The Refusal degree provides a principled way to say “we’re not sure.”

  • R > threshold — triggers adaptive follow-up questions to probe the uncertainty
  • High R — routes the case to experienced human evaluators for review
  • Any R — adjusts final decision confidence to reflect actual reliability

3.4 Computing T, F, R from Multiple Models

LayersRank evaluates each response using three complementary models, each producing its own (T, F, R) triple:

  • Semantic — embedding-based comparison to reference responses
  • Lexical — keyword and structure analysis
  • LLM — reasoning quality assessment

T_agg = weighted average of the individual T values, adjusted for model agreement

F_agg = weighted average of the individual F values, adjusted for model agreement

R_agg = √(1 − T_agg² − F_agg²) × (1 + σ), where σ is the normalized standard deviation across models
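The formulas above translate directly into a short aggregation routine. The sketch below makes several illustrative assumptions not specified in this paper — equal model weights by default, σ computed as the population standard deviation of the per-model T values, the agreement adjustment to T_agg and F_agg omitted, and R_agg capped at 1.0 so that C = 1 − R remains in [0, 1]:

```python
import statistics
from math import sqrt

# Sketch of the Section 3.4 aggregation. Illustrative assumptions: equal weights by
# default, sigma = population std-dev of per-model T values, the agreement adjustment
# to T_agg / F_agg omitted, and R_agg capped at 1.0 so C = 1 - R stays in [0, 1].
def aggregate_trq(triples, weights=None):
    """triples: [(T, F, R), ...] from the semantic, lexical, and LLM models."""
    n = len(triples)
    weights = weights or [1.0 / n] * n
    t_agg = sum(w * t for w, (t, _, _) in zip(weights, triples))
    f_agg = sum(w * f for w, (_, f, _) in zip(weights, triples))
    sigma = statistics.pstdev([t for t, _, _ in triples])   # model disagreement
    r_base = sqrt(max(0.0, 1.0 - t_agg**2 - f_agg**2))
    r_agg = min(r_base * (1.0 + sigma), 1.0)                # disagreement inflates refusal
    return t_agg, f_agg, r_agg
```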

3.5 Score and Confidence Derivation

Score: S = 100 × T / (T + F + ε), where ε prevents division by zero

Confidence: C = 1 − R, taken directly from the refusal degree

Interval: ± (1 − C) × k, where k is a scaling factor in the range 10–15

Worked Example

Input: T = 0.75, F = 0.15, R = 0.20

Output: Score 83.3 · Confidence 80% · Interval ± 3

Reported as: 83 ± 3 · 80% confidence
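The derivation can be reproduced in a few lines. In the sketch below, ε = 10⁻⁶ and k = 15 are illustrative choices within the stated ranges, not fixed LayersRank parameters:

```python
# Section 3.5 in code: score, confidence, and interval from an aggregated (T, F, R).
# eps = 1e-6 and k = 15 are illustrative choices within the stated ranges.
def score_confidence_interval(t: float, f: float, r: float,
                              eps: float = 1e-6, k: float = 15.0):
    score = 100.0 * t / (t + f + eps)   # S = 100 * T / (T + F + eps)
    confidence = 1.0 - r                # C = 1 - R
    interval = (1.0 - confidence) * k   # half-width of the reported interval
    return round(score, 1), round(confidence, 2), round(interval, 1)

# Worked example from the text: T = 0.75, F = 0.15, R = 0.20
print(score_confidence_interval(0.75, 0.15, 0.20))
# -> (83.3, 0.8, 3.0), i.e. "83 ± 3 · 80% confidence"
```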

4 Application to Candidate Evaluation

4.1 Multi-Dimensional Assessment

Technical — 40%
  • System design
  • Debugging
  • Depth of knowledge
  • Trade-off reasoning

Behavioral — 35%
  • Communication
  • Collaboration
  • Feedback response
  • Team dynamics

Contextual — 25%
  • Role understanding
  • Motivation
  • Career trajectory
  • Culture alignment

4.2 Question-Level Scoring

Q1 — “Walk through your approach to system design...”

Model        T      F      R
Semantic     0.82   0.10   0.15
Lexical      0.78   0.12   0.18
LLM          0.85   0.08   0.12
Aggregated   0.82   0.10   0.16

Score: 89 ± 2  ·  84% confidence

4.3 Adaptive Follow-Up Trigger

Q2 — “Tell me about a time you received critical feedback...”

High R

Model        T      F      R
Semantic     0.55   0.40   0.35
Lexical      0.62   0.35   0.30
LLM          0.48   0.45   0.40
Aggregated   0.55   0.40   0.35
R = 0.35 > 0.25

Follow-up triggered

Follow-up: “You mentioned initially feeling defensive. What helps you move past that reaction now?”

After follow-up

T = 0.72, F = 0.18, R = 0.18 (↓ from 0.35)

Score: 80 ± 3 · Confidence: 82% (↑ from 65%) · R reduced by 50%

4.4 – 4.5 Dimension and Final Score Aggregation

Dimension     Score     Confidence   Weight
Technical     83 ± 3    85%          0.40
Behavioral    78 ± 4    80%          0.35
Contextual    81 ± 3    88%          0.25

Final Score: 80.6 ± 3 · 84% confidence
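A minimal sketch of the dimension roll-up follows. Run with the rounded per-dimension values from the table, the weighted sums come to roughly 80.8 ± 3.4 at 84% confidence; the reported 80.6 presumably reflects unrounded dimension scores:

```python
# Dimension roll-up sketch (Sections 4.4 - 4.5): weighted averages of per-dimension
# score, interval half-width, and confidence, using the rounded values from the table.
dimensions = [
    # (dimension, score, interval, confidence, weight)
    ("Technical",  83, 3, 0.85, 0.40),
    ("Behavioral", 78, 4, 0.80, 0.35),
    ("Contextual", 81, 3, 0.88, 0.25),
]

final_score      = sum(s * w for _, s, _, _, w in dimensions)
final_interval   = sum(i * w for _, _, i, _, w in dimensions)
final_confidence = sum(c * w for _, _, _, c, w in dimensions)

print(f"{final_score:.1f} ± {final_interval:.1f} · {final_confidence:.0%} confidence")
# -> 80.8 ± 3.4 · 84% confidence
```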

5 Implementation Architecture

5.1 System Overview

A candidate response enters the evaluation pipeline, where three models — semantic, lexical, and LLM — each produce their own (T, F, R) triple. TR-q-ROFN aggregation then computes the combined (T, F, R) and checks R against the threshold:

  • R ≤ 0.25 — the score and confidence are reported to the decision-maker.
  • R > 0.25 — an adaptive follow-up is triggered and the response is re-evaluated with a targeted question.
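The branch at the heart of the pipeline can be sketched as a small routing function. It assumes the default R threshold of 0.25 from Section 5.3; the function name and return format are illustrative, not the production interface:

```python
# Routing sketch for the Section 5.1 pipeline. The 0.25 threshold is the default
# from Section 5.3; function name and return format are illustrative.
R_THRESHOLD = 0.25
EPS = 1e-6

def route(t_agg: float, f_agg: float, r_agg: float) -> dict:
    """Report a score when refusal is low; otherwise trigger an adaptive follow-up."""
    if r_agg <= R_THRESHOLD:
        score = 100.0 * t_agg / (t_agg + f_agg + EPS)   # S = 100 * T / (T + F + eps)
        confidence = 1.0 - r_agg                        # C = 1 - R
        return {"action": "report", "score": round(score, 1),
                "confidence": round(confidence, 2)}
    # High refusal: ask a targeted follow-up question, then re-evaluate.
    return {"action": "follow_up"}

print(route(0.82, 0.10, 0.16))  # Q1 from Section 4.2 -> report 89.1 at 84% confidence
print(route(0.55, 0.40, 0.35))  # Q2 from Section 4.3 -> follow-up triggered
```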

5.2 Model Specifications

Semantic Model

SBERT (sentence-transformers)

Computes cosine similarity between candidate response embedding and reference response embeddings.

T: Similarity to positive reference responses
F: Similarity to negative reference responses
R: Distance from all references

Lexical Model

TF-IDF with domain-specific vocabulary

Identifies presence of expected concepts, structure, and keywords.

T: Coverage of expected elements
F: Presence of concerning patterns (vagueness, contradiction)
R: Coverage uncertainty (partial matches)

LLM Model

Instruction-tuned LLM (configurable)

Holistic evaluation of response quality, reasoning, and depth.

T: Explicit quality assessment on rubric
F: Explicit concern identification
R: Model's stated confidence / hedging language
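As an illustration of how the semantic model's (T, F, R) might be derived, the sketch below uses sentence-transformers with cosine similarity to reference responses. The model name, the max-pooling over references, and the final clamping are assumptions for illustration; the exact mapping, and enforcement of the T² + F² + R² ≤ 1 constraint, are not specified here:

```python
# Illustrative sketch only: deriving the semantic model's (T, F, R) from embedding
# similarities. Model choice, max-pooling over references, and clamping are
# assumptions, not the production LayersRank configuration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def semantic_tfr(response: str, positive_refs: list[str], negative_refs: list[str]):
    resp_emb = model.encode(response, convert_to_tensor=True)
    pos_emb = model.encode(positive_refs, convert_to_tensor=True)
    neg_emb = model.encode(negative_refs, convert_to_tensor=True)

    # T: highest cosine similarity to any positive reference (clamped to [0, 1]).
    t = max(0.0, float(util.cos_sim(resp_emb, pos_emb).max()))
    # F: highest cosine similarity to any negative reference (clamped to [0, 1]).
    f = max(0.0, float(util.cos_sim(resp_emb, neg_emb).max()))
    # R: distance from all references — low similarity everywhere means high refusal.
    r = 1.0 - max(t, f)

    # Note: a production system would also enforce T^2 + F^2 + R^2 <= 1.
    return round(t, 2), round(f, 2), round(r, 2)
```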

5.3 Threshold Configuration

R Threshold      Behavior
0.15 – 0.20      More follow-ups · higher confidence · longer assessments
0.25 (default)   Balanced approach · ~20% of responses trigger follow-up
0.30 – 0.40      Fewer follow-ups · faster assessments · more score uncertainty

6 Validation and Results

  • 2,847 candidate responses across 12 role types
  • 3 trained expert assessors per response
  • r = 0.83 overall correlation (Pearson, expert vs. model)

6.2 Score Correlation

Dimension     Pearson r   Spearman ρ
Technical     0.84        0.81
Behavioral    0.79        0.76
Contextual    0.82        0.79
Overall       0.83        0.80

6.3 Confidence Calibration

Stated Confidence   Actual Accuracy   Calibration Error
90 – 100%           93%               +3%
80 – 90%            84%               +1%
70 – 80%            73%               −2%
60 – 70%            65%               −1%
< 60%               58%               +2%

6.4 Adaptive Follow-Up Effectiveness

Metric                 Before   After   Change
Avg R (uncertainty)    0.38     0.19    −50%
Avg confidence         62%      81%     +31%
Expert correlation     0.71     0.86    +21%

6.5 Comparison to Traditional Scoring

Method                        Agreement   False +   False −
Traditional (avg score)       78%         12%       10%
TR-q-ROFN (high confidence)   91%         4%        5%
TR-q-ROFN (all cases)         84%         8%        8%

7 Limitations and Future Work

7.1 Current Limitations

Model dependence

TR-q-ROFN quality depends on underlying model quality. Poor base models produce poor T, F, R values regardless of aggregation method.

Threshold sensitivity

The R threshold is currently set empirically. More principled approaches to threshold selection are desirable.

Dimension independence assumption

Current implementation treats dimensions independently. Cross-dimension correlations are not modeled.

Cold start

Reference responses for new roles require initial human effort. Transfer learning across similar roles is an area for development.

7.2 Future Work

Dynamic threshold adjustment

Learn optimal R thresholds per role, question type, or candidate population.

Uncertainty decomposition

Distinguish aleatory uncertainty (inherent randomness) from epistemic uncertainty (lack of information).

Longitudinal validation

Correlate evaluation scores and confidence with post-hire performance outcomes.

Fairness analysis

Examine whether R distributions differ across demographic groups in ways that could introduce bias.

8 Conclusion

Traditional interview scoring hides critical information about evaluation reliability. A score of 74 tells you nothing about whether that assessment is trustworthy.

TR-q-ROFNs provide a mathematical framework for making uncertainty explicit. By representing evaluations as (T, F, R) triples — capturing evidence for, evidence against, and evaluation uncertainty — we enable:

  1. Appropriate confidence calibration — decision-makers know when to trust scores and when to investigate.

  2. Adaptive assessment — uncertainty triggers follow-up questions that resolve ambiguity.

  3. Audit trails — every score has a documented confidence level and evidence basis.

  4. Improved decisions — high-confidence scores are significantly more predictive of expert consensus.

For organizations seeking to move from gut-feel hiring to evidence-based decisions, confidence-aware scoring is a foundational capability.

9 References

  1. Atanassov, K. T. (1986). Intuitionistic fuzzy sets. Fuzzy Sets and Systems, 20(1), 87–96.
  2. Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology. Psychological Bulletin, 124(2), 262–274.
  3. Yager, R. R. (2017). Generalized orthopair fuzzy sets. IEEE Transactions on Fuzzy Systems, 25(5), 1222–1230.
  4. Zadeh, L. A. (1965). Fuzzy sets. Information and Control, 8(3), 338–353.

10 Appendix: Mathematical Proofs

A.1 Constraint Satisfaction

Theorem: For any valid TR-q-ROFN (T, F, R) with q = 2, the derived score S and confidence C satisfy 0 ≤ S ≤ 100 and 0 ≤ C ≤ 1.

Proof

Given T² + F² + R² ≤ 1 and T, F, R ∈ [0, 1]:

S = 100 × T / (T + F + ε)

Since T ≥ 0 and T + F + ε > 0, S ≥ 0.

Since T ≤ T + F + ε, S ≤ 100.

C = 1 − R

Since R ∈ [0, 1], C ∈ [0, 1]. □

A.2 Aggregation Consistency

Theorem: The weighted aggregation of multiple TR-q-ROFNs produces a valid TR-q-ROFN.

Proof

Let (T₁, F₁, R₁), ..., (Tₙ, Fₙ, Rₙ) be valid TR-q-ROFNs with weights w₁, ..., wₙ where Σwᵢ = 1.

T_agg = Σ(wᵢ × Tᵢ)

F_agg = Σ(wᵢ × Fᵢ)

R_agg = √(1 − T_agg² − F_agg²) × adjustment_factor

By convexity of the unit ball under the L² norm, (T_agg, F_agg) lies within the feasible region. R_agg satisfies the constraint by construction, provided the adjustment factor is capped so that R_agg does not exceed √(1 − T_agg² − F_agg²). □

A.3 Confidence Calibration Property

Theorem: Under reasonable model assumptions, C = 1 − R is calibrated: P(correct | C = c) ≈ c.

Proof

The aggregated R reflects model disagreement. High disagreement (high R) occurs when:

1. The response is genuinely ambiguous

2. The models are uncertain

In both cases, the probability of the score matching ground truth decreases.

Empirical calibration (Section 6.3) confirms this property holds in practice. □

For questions about this research or to discuss enterprise deployments, contact info@the-algo.com

© 2025 LayersRank by The Algorithm. All rights reserved.