AI Risk Assessment and Categories

Understanding the risk classification of AI systems according to the EU AI Act

45 minutes

Introduction to AI Risk Categories

The EU AI Act introduces a risk-based framework for regulating artificial intelligence systems. This lesson explores how AI systems are categorized based on their potential risks, and what obligations apply to each category.

The Risk-Based Approach

The EU AI Act categorizes AI systems based on the level of risk they pose to health, safety, and fundamental rights. This approach aims to ensure that regulatory requirements are proportionate to the potential harm of each system.

Core Principle

The higher the risk an AI system poses, the stricter the requirements and obligations it must meet. This ensures that innovation is not unduly restricted for low-risk applications, while still providing robust protections for high-risk scenarios.

Risk Factors

  • Intended purpose of the AI system
  • Sector of use (e.g., healthcare, law enforcement)
  • Impact on fundamental rights
  • Potential for harm to health and safety
  • Level of autonomy in decision-making

Four Risk Categories

  1. Unacceptable risk (prohibited)
  2. High risk (strict requirements)
  3. Limited risk (transparency obligations)
  4. Minimal risk (voluntary compliance)
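
Conceptually, the framework is an ordered mapping from risk tier to regulatory treatment. The short Python sketch below is purely illustrative: the tier names and one-line obligation summaries paraphrase this lesson, not the legal text.

    from enum import IntEnum

    class RiskCategory(IntEnum):
        """The four EU AI Act risk tiers, ordered from lowest to highest risk."""
        MINIMAL = 1       # voluntary codes of conduct
        LIMITED = 2       # transparency obligations
        HIGH = 3          # strict requirements and conformity assessment
        UNACCEPTABLE = 4  # prohibited outright

    # Paraphrased obligation summaries for each tier (illustrative, not legal text).
    OBLIGATIONS = {
        RiskCategory.MINIMAL: "No specific obligations; voluntary best practices encouraged",
        RiskCategory.LIMITED: "Inform people they are interacting with AI or viewing AI output",
        RiskCategory.HIGH: "Risk management, data governance, documentation, human oversight, conformity assessment",
        RiskCategory.UNACCEPTABLE: "Placing on the market or use is prohibited",
    }

Ordering the tiers numerically makes it straightforward to express comparisons such as "high risk or above" in internal compliance tooling.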

Unacceptable Risk AI Systems

The EU AI Act explicitly prohibits certain AI applications that are deemed to pose an unacceptable risk to people's rights and safety.

Social Scoring Systems

AI systems that evaluate or classify individuals or groups based on their social behavior or personal characteristics, where the resulting social score leads to detrimental or unfavorable treatment in contexts unrelated to the original data, or treatment that is disproportionate to the behavior in question.

Example: A system that assigns citizens scores based on their behavior in public spaces, affecting their access to services.

Real-Time Remote Biometric Identification

Real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes (with limited exceptions).

Example: Facial recognition cameras in public squares that identify individuals in real-time without specific legal authorization.

Emotion Recognition

AI systems that infer the emotions of individuals in the workplace or in educational institutions, except where intended for medical or safety reasons.

Example: A system monitoring employees' facial expressions to determine their emotional state and work performance.

Exploitation of Vulnerabilities

Systems that exploit the vulnerabilities of specific groups due to their age, disability, or social or economic situation in order to materially distort their behavior in a way that causes, or is likely to cause, significant harm.

Example: An AI system designed to target elderly people with misleading information based on their cognitive vulnerabilities.

High-Risk AI Systems

High-risk AI systems are those that pose significant risks to health, safety, or fundamental rights. They are permitted but must comply with stringent requirements.

Categories of High-Risk AI

  • Critical infrastructure (e.g., transport)
  • Educational or vocational training
  • Employment and worker management
  • Access to essential services
  • Law enforcement
  • Migration, asylum, and border control
  • Administration of justice
  • Democratic processes (e.g., influencing elections or voting behavior)

Key Requirements

  • Risk assessment and mitigation
  • High-quality data governance
  • Technical documentation
  • Record-keeping and traceability
  • Transparency for users
  • Human oversight
  • Accuracy, robustness, and security
  • Conformity assessment

Examples of High-Risk AI Systems:

Recruitment AI: Systems that screen job applications or evaluate candidates during interviews.

Credit Scoring: AI used to evaluate creditworthiness or determine access to financial services.

Medical Devices: AI systems used for diagnosing diseases or recommending treatments.

Critical Infrastructure: AI controlling traffic systems, energy distribution, or water supply.

Limited Risk AI Systems

Limited risk AI systems are those whose main concern is that people may not realize they are interacting with an AI system or viewing AI-generated content. They are subject to lighter, transparency-focused obligations than high-risk systems.

Key Transparency Requirements

The main obligation for limited risk AI systems is to ensure that users are aware they are interacting with an AI system. This enables individuals to make informed decisions about whether to continue using the system or how to interpret its outputs.

Examples of Limited Risk AI Systems:

Chatbots

AI-powered virtual assistants must disclose to users that they are interacting with an AI system rather than a human.

Deepfakes

AI-generated or manipulated image, audio, or video content must be labeled to indicate it was artificially created or altered.

Emotion Recognition Systems

Outside the prohibited workplace and education contexts, systems that identify or infer emotions or intentions must inform the people exposed to them that such processing is occurring.

Biometric Categorization

Systems that categorize individuals based on biometric data must inform the individuals concerned when such categorization is taking place.

Minimal Risk AI Systems

The vast majority of AI systems fall into the minimal risk category. These systems present little to no risk to rights or safety and are subject to the lightest regulatory touch.

Regulatory Approach

Minimal risk AI systems are allowed to operate with no specific legal obligations under the EU AI Act. However, developers are encouraged to follow voluntary codes of conduct and adhere to general AI best practices.

Examples of Minimal Risk AI Systems:

AI-enabled video games that create responsive non-player characters.

Spam filters that automatically detect and sort unwanted emails.

Smart home applications that learn user preferences for lighting or temperature.

Basic recommendation systems for content such as movies or music.

Text and grammar correction tools that improve writing quality.

Weather prediction algorithms that forecast local weather conditions.

Risk Assessment in Practice

Organizations developing or deploying AI systems need to conduct risk assessments to determine which category their systems fall into and what obligations apply. The steps below outline that process; a simplified code sketch of the classification step follows the list.

Key Steps in AI Risk Assessment

  1. Inventory your AI systems - Identify all AI applications in use or development within your organization.
  2. Analyze intended purpose - Define what each system is designed to do and in which sectors it will operate.
  3. Check against explicit categories - Review the EU AI Act's lists of prohibited and high-risk systems.
  4. Assess potential impacts - Evaluate effects on health, safety, fundamental rights, and other protected interests.
  5. Determine risk category - Based on the analysis, assign each system to the appropriate risk category.
  6. Implement required measures - Apply the corresponding obligations for each system based on its risk level.
  7. Document the process - Maintain detailed records of your risk assessment methodology and conclusions.
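
The classification logic in steps 3-5 can be prototyped as a simple rule-based triage. The sketch below is a deliberately simplified illustration that assumes prohibited, high-risk, and transparency-relevant uses have already been distilled into keyword lists; it is not a substitute for checking the Act's actual annexes or obtaining legal advice.

    # Hypothetical, highly simplified triage following steps 3-5 above.
    PROHIBITED_USES = {"social scoring", "real-time remote biometric identification",
                       "workplace emotion recognition"}
    HIGH_RISK_USES = {"recruitment screening", "credit scoring", "medical diagnosis",
                      "critical infrastructure control"}
    TRANSPARENCY_USES = {"chatbot", "deepfake generation", "emotion recognition",
                         "biometric categorization"}

    def classify(intended_purpose: str) -> str:
        """Assign a draft risk category based on the system's stated intended purpose."""
        purpose = intended_purpose.lower()
        if any(use in purpose for use in PROHIBITED_USES):
            return "unacceptable risk (prohibited)"
        if any(use in purpose for use in HIGH_RISK_USES):
            return "high risk (strict requirements)"
        if any(use in purpose for use in TRANSPARENCY_USES):
            return "limited risk (transparency obligations)"
        return "minimal risk (voluntary compliance)"

    print(classify("Chatbot answering customer billing questions"))
    # -> limited risk (transparency obligations)

A real triage would also weigh the deployment sector, the affected fundamental rights, and the system's level of autonomy, as listed under Risk Factors above.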

Ongoing Responsibilities

Risk assessment is not a one-time activity. The EU AI Act requires continuous monitoring and reassessment, especially for high-risk systems:

  • Regular reviews throughout the system's lifecycle
  • Updates when there are significant changes to the system
  • Reassessment when new risks are identified
  • Monitoring of real-world performance and incidents
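
One way to operationalize these triggers is to record, for each system, the date of its last review together with any events that would force a reassessment. The sketch below is a hypothetical illustration; the 12-month review interval is an assumed internal policy, not a figure from the Act.

    from datetime import date, timedelta

    # Assumed internal policy: review at least every 12 months (not a figure from the Act).
    REVIEW_INTERVAL = timedelta(days=365)

    def needs_reassessment(last_review: date,
                           system_changed: bool,
                           new_risks_identified: bool,
                           incidents_reported: int) -> bool:
        """Return True if any of the reassessment triggers listed above apply."""
        overdue = date.today() - last_review > REVIEW_INTERVAL
        return overdue or system_changed or new_risks_identified or incidents_reported > 0

    print(needs_reassessment(date(2024, 1, 15), system_changed=True,
                             new_risks_identified=False, incidents_reported=0))  # True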

Summary

The EU AI Act's risk-based approach provides a framework for regulating AI systems proportionally based on their potential impact:

  • Unacceptable Risk - Prohibited systems that threaten people's rights and safety
  • High Risk - Systems with significant impact that must meet strict requirements
  • Limited Risk - Systems with transparency obligations to inform users
  • Minimal Risk - The majority of AI applications with voluntary compliance

Understanding these risk categories is essential for organizations to navigate compliance with the EU AI Act. By applying the risk-based framework, organizations can ensure their AI systems are developed and deployed responsibly while meeting their regulatory obligations.