Understanding the risk classification of AI systems under the EU AI Act
The EU AI Act introduces a risk-based framework for regulating artificial intelligence systems. This lesson explores how AI systems are categorized by risk and what obligations apply to each category.
The Act classifies AI systems according to the level of risk they pose to health, safety, and fundamental rights, so that regulatory requirements remain proportionate to the potential harm of each system.
The higher the risk an AI system poses, the stricter the requirements and obligations it must meet. This ensures that innovation is not unduly restricted for low-risk applications, while still providing robust protections for high-risk scenarios.
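To make the tiered structure concrete, here is a minimal Python sketch of the four tiers and their regulatory consequences. The identifier names are shorthand for this lesson, not terms defined in the Act:

```python
from enum import Enum

class RiskTier(Enum):
    # The four risk tiers of the EU AI Act, ordered from strictest to lightest.
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted, subject to stringent requirements"
    LIMITED = "permitted, subject to transparency obligations"
    MINIMAL = "permitted, no specific obligations (voluntary codes encouraged)"

print(RiskTier.HIGH.value)  # -> permitted, subject to stringent requirements
```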
The EU AI Act explicitly prohibits certain AI applications that are deemed to pose an unacceptable risk to people's rights and safety.
AI systems that evaluate or classify individuals based on their social behavior or personal characteristics (social scoring), leading to detrimental or unfavorable treatment. Although often associated with public authorities, the prohibition covers private actors as well.
Example: A system that assigns citizens scores based on their behavior in public spaces, affecting their access to services.
Real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes (with limited exceptions).
Example: Facial recognition cameras in public squares that identify individuals in real-time without specific legal authorization.
AI systems that infer emotions in the workplace and educational institutions (except where the system is intended for medical or safety reasons).
Example: A system monitoring employees' facial expressions to determine their emotional state and work performance.
Systems that exploit vulnerabilities of specific groups due to age, disability, or a specific social or economic situation, in order to materially distort their behavior in a way that causes or is likely to cause significant harm.
Example: An AI system designed to target elderly people with misleading information based on their cognitive vulnerabilities.
High-risk AI systems are those that pose significant risks to health, safety, or fundamental rights. They are permitted, but must comply with stringent requirements, including risk management, data governance, technical documentation, human oversight, and appropriate accuracy, robustness, and cybersecurity. Typical examples include:
Recruitment AI: Systems that screen job applications or evaluate candidates during interviews.
Credit Scoring: AI used to evaluate creditworthiness or determine access to financial services.
Medical Devices: AI systems used for diagnosing diseases or recommending treatments.
Critical Infrastructure: AI controlling traffic systems, energy distribution, or water supply.
Limited-risk AI systems are those that present specific transparency concerns. These systems are subject to lighter obligations than high-risk systems.
The main obligation for limited-risk AI systems is to ensure that users are aware they are interacting with an AI system. This enables individuals to make informed decisions about whether to continue using the system and how to interpret its outputs.
AI-powered virtual assistants must disclose to users that they are interacting with an AI system rather than a human.
AI-generated or manipulated image, audio, or video content must be labeled to indicate it was artificially created or altered.
Systems that identify or infer emotions or intentions must inform users that such processing is occurring.
Systems that categorize individuals based on biometric data must inform users when such categorization is taking place.
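As an illustration of how a deployer might satisfy the disclosure obligation for a virtual assistant, here is a minimal Python sketch. The wording, placement, and function name are illustrative assumptions, not requirements taken from the Act:

```python
def with_ai_disclosure(reply: str, first_turn: bool) -> str:
    """Prepend an AI disclosure to a virtual assistant's reply."""
    # Disclose once, at the start of the conversation; afterwards,
    # pass replies through unchanged.
    if not first_turn:
        return reply
    return "Note: you are chatting with an AI assistant, not a human.\n\n" + reply

print(with_ai_disclosure("How can I help you today?", first_turn=True))
```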
The vast majority of AI systems fall into the minimal-risk category. These systems present little to no risk to rights or safety and are subject to the lightest regulatory touch.
Minimal-risk AI systems may operate with no specific obligations under the EU AI Act. However, developers are encouraged to follow voluntary codes of conduct and adhere to general AI best practices.
AI-enabled video games that create responsive non-player characters.
Spam filters that automatically detect and sort unwanted emails.
Smart home applications that learn user preferences for lighting or temperature.
Basic recommendation systems for content such as movies or music.
Text and grammar correction tools that improve writing quality.
Weather prediction algorithms that forecast local weather conditions.
Organizations developing or deploying AI systems need to conduct risk assessments to determine which category their systems fall into and which obligations apply, as the sketch below illustrates.
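As a rough illustration, a first-pass triage might look like the following Python sketch. The practice labels are hypothetical shorthand for the examples in this lesson; a real assessment maps the system against the Act's actual prohibitions and high-risk use cases, typically with legal review:

```python
PROHIBITED = {"social_scoring", "realtime_public_biometric_id",
              "workplace_emotion_recognition", "vulnerability_exploitation"}
HIGH_RISK = {"recruitment_screening", "credit_scoring",
             "medical_diagnosis", "critical_infrastructure_control"}

def triage(practices: set[str],
           interacts_with_humans: bool = False,
           generates_synthetic_media: bool = False) -> str:
    """First-pass risk triage; returns one of the Act's four tiers."""
    if practices & PROHIBITED:
        return "unacceptable"  # prohibited outright
    if practices & HIGH_RISK:
        return "high"          # stringent requirements apply
    if interacts_with_humans or generates_synthetic_media:
        return "limited"       # transparency obligations apply
    return "minimal"           # no specific obligations

print(triage({"credit_scoring"}))                  # -> high
print(triage(set(), interacts_with_humans=True))   # -> limited
```

The output of such a triage is only a starting point; the Act's edge cases and exemptions require human judgment.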
Risk assessment is not a one-time activity. The EU AI Act requires continuous monitoring and reassessment; high-risk systems in particular must be monitored throughout their lifecycle, including after they are placed on the market.
The EU AI Act's risk-based approach provides a framework for regulating AI systems proportionally, based on their potential impact.
Understanding these risk categories is essential for organizations to navigate compliance with the EU AI Act. By applying the risk-based framework, organizations can ensure their AI systems are developed and deployed responsibly while meeting their regulatory obligations.