AI literacy roadmap 2026: how to build evidence-based AI literacy

Zahed Ashkara, AI Compliance Expert
10 minutes · AI literacy · April 25, 2026

2026 is the year AI literacy has to become evidence-based

Many organizations have now done something with AI literacy. A webinar. A prompt training. An internal awareness session. Sometimes even an e-learning module with a certificate. That is a start, but in 2026 it is no longer enough.

Article 4 of the AI Act does not ask for an inspiration session. It requires providers and deployers of AI systems to take measures, to their best extent, to ensure a sufficient level of AI literacy among staff and other people who operate or use AI systems on their behalf [1].

That wording may sound cautious, but the practical implication is clear. An organization must be able to explain why specific people need specific knowledge, how that knowledge is built, how it relates to the AI systems being used and how the level of literacy is maintained.

The question for 2026 is therefore not: have we delivered AI training? The better question is: can we show that our people can recognize, use, assess and challenge AI responsibly in their own work context?

What AI literacy is, and what it is not

The Dutch Data Protection Authority describes AI literacy as knowledge, skills and understanding of the technical functioning of AI systems, but also of their social, ethical and practical aspects [2]. That matters, because many organizations still interpret AI literacy too narrowly.

AI literacy is not only knowing how ChatGPT works. It is also knowing when an AI system is being used, what risks come with it, which data should not be entered into tools, when human review is required and when an employee should stop and escalate.

For an HR team, that means something different than for a marketing team. For a legal team, something different than for IT. For management, something different than for people who work with AI output every day. That is why the Dutch regulator advises organizations to approach AI literacy strategically and over multiple years [2].

The roadmap for 2026

A mature AI literacy roadmap has six steps. Not because every organization needs the same program, but because every organization needs the same sequence: first understand where AI is used, then define who needs to know what, then train, embed and document.

Step 1: make AI use visible

Do not start with training. Start with inventory.

Which AI systems are already used by the organization? Do not only think of large machine learning systems. Think also of Microsoft 365 Copilot, ChatGPT Enterprise, recruitment software, customer service chatbots, analytics tools, document generation, marketing automation and suppliers that have embedded AI into their products.

Record at least the following for each system; a minimal record sketch in code follows the list:

  • where the system is used
  • who works with it
  • what data goes into it
  • what output comes out
  • whether people make decisions based on that output
  • which people or groups may be affected
  • whether the system may qualify as high-risk AI

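The entry can be as simple as a structured record, as sketched below. The field names and risk classes are illustrative assumptions, not terminology prescribed by the AI Act.

```python
# Minimal sketch of an AI inventory record. Field names and risk
# classes are illustrative assumptions, not AI Act terminology.
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    UNKNOWN = "unknown"   # not yet assessed
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"         # may qualify as high-risk under the AI Act

@dataclass
class AISystemRecord:
    name: str                   # e.g. "Microsoft 365 Copilot"
    used_in: str                # where the system is used
    users: list[str]            # who works with it (roles or teams)
    input_data: list[str]       # what data goes into it
    output: str                 # what output comes out
    informs_decisions: bool     # do people decide based on the output?
    affected_groups: list[str]  # which people or groups may be affected
    risk_class: RiskClass = RiskClass.UNKNOWN

chatbot = AISystemRecord(
    name="Customer service chatbot",
    used_in="Customer service",
    users=["support agents"],
    input_data=["customer questions", "order history"],
    output="suggested replies",
    informs_decisions=True,
    affected_groups=["customers"],
)
```

Even a spreadsheet with these columns works. The point is that every system gets the same fields, so gaps become visible.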
Without this inventory, AI literacy becomes generic. And generic training is exactly what will not be convincing in 2026.

Step 2: segment employees by role and risk

Not everyone needs the same level of knowledge. Article 4 explicitly refers to technical knowledge, experience, education, training, the context of use and the persons or groups on whom AI systems are used [1]. That means organizations need a role-based approach.

A practical model works with four levels:

  • Foundation (all employees): recognize AI, use it safely, spot risks and follow internal policy.
  • Role-based (HR, marketing, legal, finance, customer service): apply AI risk thinking to their own processes, data and decisions.
  • Governance (management, compliance, privacy, security): organize AI policy, risk classification, oversight, documentation and escalation.
  • Expert (data, IT, product owners, AI teams): assess technical limitations, bias, monitoring, evaluation and lifecycle controls.

This prevents two mistakes at once. It prevents everyone from receiving training that is too shallow, and it prevents non-technical employees from being overwhelmed with details they do not need.
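As a sketch of how this segmentation can be operationalized, the role model can live in a simple lookup. The role names and level assignments below are hypothetical examples for one organization, not a prescription.

```python
# Hypothetical role-to-level mapping; roles and assignments are
# examples for one organization, not a prescribed model.
ROLE_LEVELS = {
    "hr_advisor": "role-based",
    "marketing": "role-based",
    "compliance_officer": "governance",
    "data_scientist": "expert",
}

def required_level(role: str) -> str:
    # Everyone gets at least the foundation level.
    return ROLE_LEVELS.get(role, "foundation")

assert required_level("receptionist") == "foundation"
assert required_level("compliance_officer") == "governance"
```

The default matters: any role not explicitly mapped still lands at the foundation level, so nobody falls outside the program.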

Step 3: build a curriculum with three layers

A mature AI literacy program has three layers.

The first layer is general foundation knowledge. What is AI? What is generative AI? Where are the limitations? Which data should not be entered into tools? When does a human need to review the output?

The second layer is legal and organizational context. Think of the AI Act, GDPR, information security, copyright, confidentiality, procurement rules and internal policy.

The third layer is practical application by function. An HR employee practices with job descriptions, selection criteria and bias. A lawyer practices with contract analysis, source checking and professional secrecy. A manager practices with decision making, governance and acceptance criteria for AI use cases.

The Dutch regulator emphasizes that the required knowledge depends on the context in which an AI system is used and on the risks involved [2]. A curriculum without context is mostly compliance theatre.

Step 4: make it auditable

In 2026, evidence becomes more important. Not because the law prescribes one specific certificate, but because an organization must be able to show that it has taken appropriate measures.

That evidence does not have to be complicated, but it does need to be systematic. Think of the following; a sketch of such a record follows the list:

  • an AI literacy policy or action plan
  • an overview of roles and required knowledge levels
  • training records per employee
  • test results or practical assignments
  • certificates or participation records
  • periodic repetition and updates
  • documentation of improvement actions

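A per-employee training record, for example, could look like the sketch below. The schema is an illustrative assumption, not a mandated format.

```python
# Sketch of a per-employee training record that can be exported as
# audit evidence. The fields are illustrative assumptions.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class TrainingRecord:
    employee_id: str
    literacy_level: str     # e.g. "foundation", "role-based"
    module: str             # e.g. "Generative AI in HR"
    completed_on: date
    assessment_passed: bool
    next_refresh_due: date  # supports periodic repetition

record = TrainingRecord(
    employee_id="E-0042",
    literacy_level="role-based",
    module="Generative AI in HR",
    completed_on=date(2026, 3, 12),
    assessment_passed=True,
    next_refresh_due=date(2026, 9, 12),
)

# Dates serialize as ISO strings via default=str.
print(json.dumps(asdict(record), default=str, indent=2))
```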
The guidance from the Dutch Data Protection Authority also points toward a multi-year action plan that helps organizations address AI literacy sustainably [3][4]. That is exactly the difference between a one-off training and a governance program.

Step 5: connect AI literacy to AI governance

AI literacy does not work when it sits outside the rest of the organization. Employees can only act responsibly when they know the procedure.

That is why the roadmap should be connected to:

  • the AI register
  • risk classification
  • procurement and supplier assessment
  • privacy and security reviews
  • incident reporting
  • human oversight
  • policy for generative AI
  • management reporting

An employee who recognizes AI risks but has nowhere to go becomes uncertain. An employee who knows where to report, which checklist applies and who decides becomes part of the control system.

Step 6: repeat every quarter

AI literacy is not an annual compliance exercise. Tools change, processes change, employees move roles and new guidance appears. The European Commission also emphasizes that different parts of the AI Act apply in phases, with important obligations for high-risk AI in August 2026 and August 2027 [5].

A workable rhythm for 2026:

  • quarter 1: inventory, role model and foundation training
  • quarter 2: department specific training, policy and evidence
  • quarter 3: focus on high-risk processes, suppliers and human oversight
  • quarter 4: evaluation, audit preparation and planning for 2027

Not everything needs to be perfect on day one. But it must be visible that the organization is learning structurally.

Where organizations get stuck

The biggest mistake is assigning AI literacy only to HR or Learning and Development. That seems logical, because it is about knowledge building. But AI literacy also touches compliance, privacy, security, legal, IT, procurement and business ownership.

The second mistake is starting with tooling before policy. That creates scattered training, certificates and modules, but no coherent evidence that fits the organization's own AI risks.

The third mistake is skipping management. Yet leaders are the ones who need to understand which AI risks are material, what governance is required and where the organization is vulnerable.

The practical outcome

A good AI literacy roadmap produces four outcomes.

Employees understand what AI is and where the limits are. Teams recognize risks in their own work. Management gains insight into progress and vulnerabilities. And the organization has evidence that AI literacy is not just promised, but actually organized.

That is the bar for 2026. Not because regulators will visit every organization tomorrow, but because AI is already too deeply embedded in work processes to leave it at awareness alone.

Start small, but start structured

The best first step is not a big training campaign. The best first step is a simple baseline assessment: which AI do we use, who uses it, what risks are attached and what knowledge level is required?

From there, you can build a roadmap that fits your organization. Not generic. Not only inspirational. But evidence-based, role-based and repeatable.

Want to know where your organization stands? Embed AI helps organizations with an AI literacy quickscan, role-based training and a concrete action plan for 2026.

Sources

[1] European Union (2024). Article 4: AI literacy. EU Artificial Intelligence Act.
[2] Dutch Data Protection Authority (2026). AI-geletterdheid. Autoriteit Persoonsgegevens.
[3] Dutch Data Protection Authority (2025). Aan de slag met AI-geletterdheid. Autoriteit Persoonsgegevens.
[4] Dutch Data Protection Authority (2026). Verder bouwen aan AI-geletterdheid. Autoriteit Persoonsgegevens.
[5] European Commission (2026). AI Act. Shaping Europe's digital future.