HR & Recruitment
CV screening and automated candidate selection are explicitly classified as high-risk under the EU AI Act. Bias in hiring algorithms can lead to discrimination, fines of up to 35 million euros or 7% of global annual turnover, and lasting reputational damage. Since 2 February 2025, AI literacy has been mandatory (Article 4) for everyone working with these tools.
AI applications in this sector
CV screening
Automated assessment and ranking of CVs based on keywords, experience and competencies. Explicitly high-risk under Annex III, point 4(a). Research repeatedly shows bias related to gender, ethnicity and age.
Automated candidate selection
AI systems that select, filter or rank applicants for interviews. High-risk under Annex III. Requires transparency toward candidates about the use of AI in the selection process.
Performance monitoring
AI tools that measure, assess or predict employee performance. High-risk when used for promotion, termination or compensation decisions (Annex III, point 4(b)).
Workforce planning
Predictive models for staffing needs, turnover and capacity planning. Risk classification depends on the extent to which decisions about individual employees are influenced.
Employee surveillance
Monitoring software that tracks employee behavior (keystrokes, screen captures, location). Intersects with GDPR, works council legislation and the EU AI Act. Employees have a right to information and consultation.
High-risk classification
The EU AI Act (Regulation 2024/1689) classifies the following HR applications as high-risk in Annex III:
Recruitment and selection
Annex III, point 4(a): AI systems used for recruitment or selection of natural persons, in particular for placing targeted job advertisements, analyzing and filtering applications, and evaluating candidates. This covers all automated tools in the recruitment process.
Employment-related decisions
Annex III, point 4(b): AI systems used for making decisions affecting the employment relationship, in particular decisions on promotion, termination, task allocation based on behavior or personality traits, and performance monitoring.
Specific challenges
Bias in hiring algorithms
In 2018, Amazon scrapped an AI recruiting tool that systematically scored women lower. Bias in training data leads to discriminatory outcomes. The EU AI Act requires active testing for and mitigation of bias (Article 10), and as the user of such a system, the burden of proof lies with you.
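What "actively testing for bias" can look like in practice: a minimal sketch of the four-fifths rule, a common heuristic for adverse impact, comparing selection rates between two candidate groups. The function names, threshold choice and data below are illustrative assumptions, not part of the Act itself.

```python
# Hypothetical bias check for a CV-screening tool using the "four-fifths rule":
# if one group's selection rate is less than 80% of another's, that is a common
# red flag for adverse impact. All names and data here are illustrative.

def selection_rate(outcomes):
    """Fraction of candidates in a group who were selected (True)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (0..1)."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Illustrative screening outcomes (True = invited to interview)
group_a = [True, True, False, True, True, False, True, True, False, True]   # 7/10
group_b = [True, False, False, True, False, False, True, False, False, False]  # 3/10

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.3 / 0.7 -> 0.43
if ratio < 0.8:
    print("Below the 0.8 threshold: investigate for adverse impact")
```

A check like this is only a starting point; a full bias audit also looks at intersectional groups, proxy variables and score distributions, not just binary selection rates.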
Transparency toward candidates
Article 50 requires that persons are informed when they are subject to an AI system. Candidates must know that AI is used in their selection process. Many recruitment tools do not yet do this by default.
Works council and co-determination
In the Netherlands, the works council has consent rights for regulations regarding personnel assessment and employee surveillance (Works Councils Act Article 27). The introduction of AI in HR directly affects this. Involve the works council early.
Responsibility as deployer
Even if you use an external recruitment tool (Indeed, LinkedIn Recruiter, HireVue), you are responsible as a deployer under the EU AI Act. You must understand how it works, set up human oversight and monitor bias.
Our approach for HR & Recruitment
HR compliance requires a combination of legal, technical and organizational measures. We map all AI tools in your HR chain and ensure your organization is compliant, including alignment with the works council.
Compliance Quickscan
AI Literacy Training (Article 4)
Governance Framework
Every day you wait, a hiring algorithm runs without oversight.
CV screening and candidate selection directly affect people's livelihoods. The EU AI Act takes that seriously. In a free 30-minute intake we map out which HR tools you use and what the risk classification is.
Book your free intake. Not satisfied after the Quickscan? You pay nothing.