Instructions for Use: Talent Studio CV Scoring Engine

Modified on Fri, 8 May at 11:36 AM

Product: Talent Studio CV Scoring Engine

Provider: talentsconnect operations GmbH

Regulation: EU AI Act, Article 13 (Transparency and Provision of Information to Deployers)



1. Intended Purpose


This system is an advisory tool designed to assist professional recruiters in identifying skill matches between candidates and job requirements. It analyses uploaded candidate documents (CVs, cover letters, certificates), extracts professional skills, and compares them against role-specific skill requirements to produce a match score and prognosis.


The system is strictly for preparatory assessment and must not be used for automated hiring or rejection.


All hiring decisions remain entirely with the human recruiter. The system does not trigger, constrain, or automate any pipeline action (screening, interviewing, offering, hiring, or rejecting candidates).



2. Capabilities and Performance


Skill Extraction


The system extracts professional skills from candidate documents using two independent methods running in parallel:

  • Statistical extraction: Identifies skills using labor market taxonomy matching. Returns skills with taxonomy IDs and confidence scores.
  • LLM extraction (Claude Sonnet 4 via AWS Bedrock): Identifies skills including broader category inference (e.g., recognizing that "SAP SuccessFactors" implies "HRIS" capability).


Combined parsing accuracy: approximately 94% (validated against our sample database of test CVs in 6 format types: standard, table, multi-column, prose-embedded, non-standard dates, and German language).
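For orientation only, a minimal sketch of how the two extraction results might be combined; the data shapes, field names, and merge rule are illustrative assumptions, not the production interface:

```python
from dataclasses import dataclass

@dataclass
class ExtractedSkill:
    name: str
    taxonomy_id: str | None   # None when the skill was found only by the LLM
    confidence: float
    source: str               # "statistical" or "llm"

def merge_extractions(statistical: list[ExtractedSkill],
                      llm: list[ExtractedSkill]) -> list[ExtractedSkill]:
    """Combine both extraction paths, keeping the higher-confidence entry
    when the same (case-insensitive) skill name appears in both sets."""
    merged: dict[str, ExtractedSkill] = {}
    for skill in statistical + llm:
        key = skill.name.strip().lower()
        current = merged.get(key)
        if current is None or skill.confidence > current.confidence:
            merged[key] = skill
    return list(merged.values())
```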


Skill Matching


Candidate skills are matched to role requirements using:

  • Exact matching: Market Intelligence Skill ID comparison and normalized name matching
  • Semantic matching: Our model identifies related but differently-named skills. A similarity threshold of 0.60 is used; matches below this are discarded.


Semantic matches receive a 20% confidence discount compared to exact matches.
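A simplified sketch of these rules follows; the 0.60 threshold and 20% discount are taken from this section, while the dictionary shapes and the way the similarity value is supplied are illustrative assumptions, not the production code:

```python
SEMANTIC_THRESHOLD = 0.60   # matches below this similarity are discarded
SEMANTIC_DISCOUNT = 0.80    # semantic matches carry a 20% confidence discount

def match_skill(candidate: dict, required: dict,
                similarity: float) -> dict | None:
    """Return a match record, or None when no acceptable match exists.

    `similarity` is assumed to come from the semantic model; exact matches
    are decided on skill IDs or normalized names.
    """
    cand_id, req_id = candidate.get("skill_id"), required.get("skill_id")

    # Exact match: same Market Intelligence Skill ID or identical normalized name
    if (cand_id is not None and cand_id == req_id) or \
            candidate["name"].strip().lower() == required["name"].strip().lower():
        return {"type": "exact", "confidence": 1.0, "skill": required["name"]}

    # Semantic match: accepted only above the threshold, with a confidence discount
    if similarity >= SEMANTIC_THRESHOLD:
        return {"type": "semantic",
                "confidence": similarity * SEMANTIC_DISCOUNT,
                "skill": required["name"]}

    return None
```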


Scoring


The final score combines:

  • Skill match score (75% weight): Cosine similarity in a role-skill vector space, weighted by skill importance (defining > distinguishing > necessary) and priority (required > preferred > nice-to-have)
  • Semantic similarity score (25% weight): Document-level similarity between candidate profile and job description (only when our semantic model is available)
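As a rough illustration of the weighting, a hedged sketch (both components assumed on a 0-1 scale; the fallback when the semantic model is unavailable is an assumption for this example, not something this document specifies):

```python
SKILL_MATCH_WEIGHT = 0.75
SEMANTIC_SIM_WEIGHT = 0.25

def combine_scores(skill_match_score: float,
                   semantic_similarity: float | None) -> float:
    """Combine the two components into the final score.

    When the semantic model is unavailable, this sketch falls back to the
    skill match score alone; that fallback is an illustrative assumption.
    """
    if semantic_similarity is None:
        return skill_match_score
    return (SKILL_MATCH_WEIGHT * skill_match_score
            + SEMANTIC_SIM_WEIGHT * semantic_similarity)
```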


Prognosis Labels


Label          Score Range    Meaning
Strong Fit     >= 65%         High skill coverage across all categories
Good Fit       40% - 64%      Solid coverage with some gaps
Stretch        20% - 39%      Significant skill gaps, development needed
Aspirational   < 20%          Minimal overlap with role requirements


Important: If any skill marked as "required" by the hiring manager is missing, the prognosis is automatically capped at "Good Fit" regardless of the numerical score.
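The label thresholds and the cap rule can be read as the following sketch; the function shape is illustrative, while the thresholds and the cap come from the table and note above:

```python
def prognosis(score_percent: float, missing_required_skill: bool) -> str:
    """Map a match score (in percent) to a prognosis label and apply the
    cap rule for missing required skills."""
    if score_percent >= 65:
        label = "Strong Fit"
    elif score_percent >= 40:
        label = "Good Fit"
    elif score_percent >= 20:
        label = "Stretch"
    else:
        label = "Aspirational"

    # Any missing "required" skill caps the prognosis at "Good Fit".
    if missing_required_skill and label == "Strong Fit":
        label = "Good Fit"
    return label
```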


Language Support

  1. All skills are translated to English to ensure fair comparison across languages
  2. German CVs are fully supported (OCR, date parsing, section headers, skill translation)
  3. The translation pipeline caches translations for consistency
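As a small illustration of point 3, a caching wrapper of the kind the pipeline might use; `call_translation_service` is a hypothetical stand-in, not a real API:

```python
from functools import lru_cache

def call_translation_service(text: str, source_language: str) -> str:
    # Hypothetical placeholder for the real translation backend.
    return text

@lru_cache(maxsize=None)
def translate_skill_to_english(skill_name: str, source_language: str) -> str:
    """Translate a skill name to English, caching the result so the same
    term always receives the same translation within and across documents."""
    return call_translation_service(skill_name, source_language)
```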


3. Known Limitations and Risks


OCR Reliability


Scanned documents may result in lower extraction quality. If a score seems unusually low for a candidate with a strong background, verify the extracted text by reviewing the candidate's skill list in the UI. Re-uploading a text-based version of the document will produce more reliable results.


Semantic Drift


Our Semantic Model may occasionally suggest matches that are contextually similar but functionally different (e.g., "Project Management" matching "Change Management"). Semantic matches are always marked as such in the skill breakdown and receive a reduced confidence weight. Review semantic matches with professional judgement.


Non-Determinism


Small variations in LLM parsing can occur between scoring runs, particularly after model updates. The score is a snapshot at the time of scoring, not a permanent absolute value. The core scoring algorithm (cosine similarity, thresholds) is fully deterministic -- variation comes only from the extraction step.


Vocabulary Mismatch


Candidates who possess relevant capabilities but describe them using non-standard terminology may score lower than expected. The system mitigates this through broader category inference, semantic matching, and skill name normalization, but cannot fully solve the vocabulary mismatch problem. Always review the "Missing Skills" list against the original CV.


Taxonomy Gaps


Some emerging or niche professional skills may not exist in the Market Intelligence taxonomy. These skills are captured via LLM extraction with deterministic custom IDs, but may have lower matching confidence.
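For illustration, one way such deterministic custom IDs could be derived; the prefix and hashing scheme here are assumptions, not the production ID format:

```python
import hashlib

def custom_skill_id(skill_name: str) -> str:
    """Derive a deterministic custom ID for a skill missing from the
    Market Intelligence taxonomy, so the same name always maps to the
    same ID across scoring runs."""
    normalized = skill_name.strip().lower()
    digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:12]
    # The "CUSTOM-" prefix and 12-character hash are illustrative choices.
    return f"CUSTOM-{digest}"
```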


4. Instructions for Human Oversight (Article 14)


Recruiters MUST follow these protocols when using AI-generated scores:


Verify Missing Skills

Before deprioritizing a candidate based on a low score, manually check the "Missing Skills" list against the CV. The AI may have missed a synonym, a differently-formatted skill name, or a skill embedded in a work experience description rather than a skills section.


Exercise Override Authority

Recruiters have full authority to progress any candidate regardless of the "Prognosis" label. The score is one data point among many. Interview impressions, cultural fit, growth potential, and references are equally or more important and are not captured by the scoring engine.


Guard Against Halo Effect

Be mindful that "Strong Fit" labels can create a halo effect -- an unconscious tendency to view the candidate more favourably across all dimensions because of one positive signal. Treat the label as an invitation to look closer at the skill match details, not a reason to stop evaluating.


Guard Against Anchoring Bias

Conversely, "Stretch" or "Aspirational" labels may cause unconscious negative anchoring. A low skill match score does not mean the candidate is unsuitable -- it means the extracted skills did not strongly overlap with the defined role requirements. The candidate may have relevant experience expressed differently, or may be a strong cultural or developmental fit.


Review Score Transparency

Every score includes:

  1. Matched Skills: Skills the candidate has that the role requires (with match type: exact or semantic)
  2. Missing Skills: Role-required skills not found in the candidate's profile (with importance level)
  3. Extra Skills: Candidate skills beyond what the role requires
  4. Category Breakdown: Separate scores for defining, distinguishing, and necessary skill categories
  5. Criteria Fulfillment: How many required, preferred, and nice-to-have criteria are met


Use these details to form your own assessment rather than relying solely on the headline score.
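For illustration only, a hypothetical score breakdown containing these five elements might look like the following; all field names and values are invented for this example and do not reflect the exact production output:

```python
example_breakdown = {
    "score": 58,
    "prognosis": "Good Fit",
    "matched_skills": [
        {"name": "SAP SuccessFactors", "match_type": "exact"},
        {"name": "HRIS", "match_type": "semantic"},
    ],
    "missing_skills": [
        {"name": "Stakeholder Management", "importance": "defining", "priority": "required"},
    ],
    "extra_skills": ["Public Speaking"],
    "category_breakdown": {"defining": 0.55, "distinguishing": 0.70, "necessary": 0.62},
    "criteria_fulfillment": {"required": "3/4", "preferred": "2/3", "nice_to_have": "1/2"},
}
```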


Report Suspected Issues

If you notice patterns that suggest systematic bias or technical failure (e.g., consistently low scores for candidates from specific backgrounds, skills being missed despite appearing clearly in the CV), report immediately via the process described in Section 5.


5. Maintenance and Support


Updates

  • Security updates: Applied as needed via standard deployment pipeline
  • Taxonomy updates: Market Intelligence classification releases are tracked quarterly (current: 2026.4)
  • Model updates: Bedrock model version changes are reviewed before adoption; configuration is pinned to specific model IDs
  • Scoring algorithm changes: Documented in the Technical File and communicated to deployers


Incident Reporting

If you detect systematic bias, unexpected scoring patterns, or technical failure in the scoring engine:

  1. Document the specific case(s) with candidate IDs and observed vs. expected behavior
  2. Report immediately to datenschutz@talentsconnect.com
  3. Include screenshots of the score breakdown if possible


Reports will be reviewed within 5 business days and may trigger updates to the Risk Register maintained in the EU AI Act Technical File.


Documentation


The full technical documentation for this system, including the algorithmic design, data governance, risk register, and ALTAI self-assessment, is maintained in the EU AI Act Technical File. This is available upon request with an NDA.


---


Document History


Version    Date          Changes
1.0        2026-04-28    Initial Instructions for Use
