
Zahed Ashkara
AI Compliance Expert
Imagine a municipality deploying an AI system to assess social benefit applications. Or a health insurer considering algorithms for risk profiling on life insurance policies. Before they flip the switch, the EU AI Act demands something fundamental: a rights impact assessment. Not as a box-ticking exercise, but as a serious analysis of what could go wrong for the people affected.
Article 27 of the AI Act introduces the Fundamental Rights Impact Assessment (FRIA). It is a new instrument designed specifically for AI systems, and it goes beyond the familiar DPIA from the GDPR. In this article, we walk through all five paragraphs of Article 27, explain who is affected, and provide a practical template you can start using today.
Not every organisation using AI needs to perform a FRIA. Article 27 targets three specific categories of deployers of high-risk AI systems [1]:
Important: the obligation does not apply to AI systems used as safety components in the management of critical infrastructure, such as road traffic, water supply, gas, heating, or electricity (Annex III, point 2) [1].
The first paragraph is the foundation. Before deploying a high-risk AI system, the organisations listed above must perform an assessment of the impact on fundamental rights. The assessment must consist of six elements [1]:
(a) Process description: a description of the deployer's processes in which the high-risk AI system will be used in line with its intended purpose.
(b) Period and frequency: a description of the time period and frequency with which the high-risk AI system is intended to be used.
(c) Affected persons and groups: the categories of natural persons and groups likely to be affected by its use in the specific context.
(d) Specific risks: the specific risks of harm likely to impact the persons or groups identified under (c), taking into account the information provided by the provider pursuant to Article 13.
(e) Human oversight: a description of the implementation of human oversight measures, according to the instructions for use.
(f) Measures when risks materialise: the measures to be taken if the risks actually occur, including arrangements for internal governance and complaint mechanisms.
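For organisations that track assessments programmatically, the six elements above map naturally onto a simple record. The sketch below is purely illustrative: the field names are my own shorthand, not official AI Act terminology, and the official AI Office template may structure things differently.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class FRIARecord:
    """Illustrative record of the six Article 27(1) elements (unofficial field names)."""
    process_description: str        # (a) deployer processes using the high-risk system
    period_and_frequency: str       # (b) intended time period and frequency of use
    affected_groups: List[str]      # (c) categories of persons/groups likely affected
    specific_risks: List[str]       # (d) risks of harm, informed by Article 13 provider info
    human_oversight: str            # (e) oversight measures per the instructions for use
    mitigation_measures: List[str]  # (f) measures if risks materialise, incl. complaint mechanisms

    def is_complete(self) -> bool:
        """All six elements must be filled in before deployment."""
        return all([
            self.process_description,
            self.period_and_frequency,
            self.affected_groups,
            self.specific_risks,
            self.human_oversight,
            self.mitigation_measures,
        ])
```

A record like this can also carry version history, which fits the "living document" nature of the FRIA discussed below: each change in usage, context, or the system itself produces a new revision rather than overwriting the old one.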
The obligation applies to the first use of the AI system. In similar cases, you may rely on previously conducted FRIAs or existing impact assessments prepared by the provider. However, once you determine that any of the elements from paragraph 1 has changed or is no longer up to date, you must update the assessment [1].
In practice, this means a FRIA is not a one-off exercise. It is a living document that evolves alongside changes in usage, context, or the system itself.
After completing the FRIA, you must notify the market surveillance authority of the results. You do this by submitting the completed template (see paragraph 5) as part of the notification. Organisations falling under Article 46 paragraph 1 may be exempt from this notification obligation [1].
This paragraph is particularly relevant for organisations already conducting a Data Protection Impact Assessment (DPIA) under Article 35 GDPR or Article 27 of Directive 2016/680. If you have already completed a DPIA, you do not need to start from scratch. The FRIA complements the existing DPIA [1][3].
In practice, this means you can combine both assessments into a single document, as long as you add the AI Act-specific elements (such as fundamental rights risks beyond privacy) to what you already have. This avoids duplicate work and provides a coherent overview of all risks.
The AI Office will develop a template in the form of a questionnaire, potentially supported by an automated tool, to help deployers comply with their obligations [1]. At the time of writing, this template has not yet been published. Nevertheless, you can start preparing now. The six elements from paragraph 1 form the backbone of every FRIA.
Based on the legal text, academic research by Mantelero [2], the guide from ECNL and the Danish Institute for Human Rights [4][8], and the ALTAI checklist from the European Commission [5], you can already build a workable template. Below is a structure you can start using immediately.
Answer the following questions:
Assess the potential impact for each relevant fundamental right from the EU Charter:
Use the information that the provider is required to supply under Article 13 (transparency obligations).
Many organisations already conduct DPIAs for processing activities with high privacy risk. The FRIA and DPIA overlap partially, but the FRIA goes broader. Where a DPIA focuses on risks to personal data, a FRIA examines the full spectrum of fundamental rights: discrimination, access to justice, freedom of expression, social rights [3].
The good news: Article 27 paragraph 4 explicitly allows you to combine the FRIA with an existing DPIA. You do not need to create two completely separate documents. Add the fundamental rights analysis to your existing DPIA and you satisfy both obligations.
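One way to operationalise that combination is a simple coverage check over the sections of your existing assessment document. The sketch below assumes hypothetical section names of my own choosing (the DPIA sections paraphrase Article 35 GDPR, the FRIA sections paraphrase Article 27(1)); the official AI Office template will define the authoritative structure.

```python
# Hypothetical section checklists (names are illustrative paraphrases, not official wording)
DPIA_SECTIONS = {
    "processing_description",      # GDPR Art. 35: description of processing operations
    "necessity_proportionality",   # GDPR Art. 35: necessity and proportionality assessment
    "privacy_risks",               # GDPR Art. 35: risks to data subjects
    "privacy_mitigations",         # GDPR Art. 35: measures to address the risks
}

FRIA_SECTIONS = {
    "process_description",         # AI Act Art. 27(1)(a)
    "period_and_frequency",        # AI Act Art. 27(1)(b)
    "affected_groups",             # AI Act Art. 27(1)(c)
    "specific_risks",              # AI Act Art. 27(1)(d)
    "human_oversight",             # AI Act Art. 27(1)(e)
    "mitigation_measures",         # AI Act Art. 27(1)(f)
}


def missing_sections(document_sections: set) -> set:
    """Return the sections still needed for a combined DPIA+FRIA document."""
    return (DPIA_SECTIONS | FRIA_SECTIONS) - document_sections
```

Running this against a document that only contains the four DPIA sections would report all six FRIA sections as missing, which is exactly the gap Article 27 paragraph 4 asks you to close.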
The obligation to conduct a FRIA takes effect from 2 August 2026 for most high-risk AI systems. That may seem far away, but preparation takes time. You need to set up internal processes, assign responsibilities, and gather the right information from your AI providers.
Moreover, the ECNL/DIHR report [4] demonstrates that a FRIA is more than a compliance checkbox. Done properly, it helps you genuinely understand what your AI systems do to people's rights. That is not only legally required, it is simply good practice.
Article 27 introduces a specific fundamental rights assessment for AI systems that goes beyond existing instruments. The FRIA requires public organisations, providers of public services, and certain financial institutions to think carefully about the impact of their AI on citizens' rights before deployment. With the template in this article, you can get started today. The official template from the AI Office will follow, but the six elements from the law are already set in stone.