A rejection in three milliseconds
Fatima, compliance manager at NovaBank, receives an angry email from a customer: "My loan was rejected, but nobody can tell me why." The signature, "decision by IT system," reads cold. Fatima actually knows why: a machine-learning model estimates creditworthiness from payment behavior, neighborhood, and click traces in the mobile app. Until yesterday, this was mainly an IT story. Since the EU AI Act came into effect this year, responsibility has shifted to the business itself, and with it to teams like Fatima's.

What the law really says
The AI Act sorts financial use cases into three buckets. Algorithms for terrorist financing or social scoring? Prohibited. Systems that determine access to basic banking, loans, or insurance? Automatically high-risk. Chatbots that only handle general questions? Low regulatory pressure, provided they're transparent.
In practice, the AI solutions most commonly used by banks, insurers, and fintechs fall into that middle category. That means: model documentation, data governance, continuous risk analysis, human oversight, and demonstrable AI literacy for everyone who works with them.
High-risk in daily operations
The definitions seem abstract, but Fatima recognizes them everywhere on the floor:
- Credit scoring: The engine that approves or rejects a mortgage within seconds
- Fraud detection: The real-time transaction monitor that spits out AML alerts
- Robo-advisors: Systems that recommend savings portfolios
- Claims processing: The bot at insurers that analyzes photos and suggests partial payouts
- Dynamic pricing models: Car insurance based on telematics from the car's black box
Even the latter falls under the AI Act because it directly affects premiums and thus access to services.
Rediscovering the human dimension
"Human in the loop" sounded like tick-the-box at NovaBank for years. An employee clicked approve after the model flashed "green." Under the AI Act, that same employee must be able to explain why customer A gets a credit limit and customer B doesn't, including the role of postal code, device type, or timing.
This requires new skills: recognizing variables, seeing bias possibilities, and knowing when you may override a model.
Fatima starts with a simple experiment. She has the team comb through twenty rejected files for similarities. Within an hour, they see patterns that previously went unnoticed: higher rejection rates in one specific region, remarkably low scores for freelancers in the cultural sector. The penny drops: AI literacy isn't a luxury; it's necessary to safeguard the duty of care and the bank's reputation.
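A minimal sketch of that file scan in Python, assuming the rejected files sit in a CSV export with `region` and `occupation_sector` columns (file name and column names are hypothetical, not NovaBank's actual schema):

```python
import pandas as pd

# Hypothetical export of the rejected applications; file name and
# column names are assumptions, not an actual NovaBank schema.
rejected = pd.read_csv("rejected_applications.csv")

# Count rejections per region and per occupation sector: the same
# first-pass pattern hunt Fatima's team did by hand.
by_region = rejected.groupby("region").size().sort_values(ascending=False)
by_sector = rejected.groupby("occupation_sector").size().sort_values(ascending=False)

print(by_region.head(10))
print(by_sector.head(10))
```

Raw counts only flag candidates; whether a cluster of rejections reflects bias or a legitimate risk signal still takes human judgment.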

Five steps to action – without magic formulas
| Step | Action | Result |
|---|---|---|
| 1. AI mapping | Inventory all models that directly decide on loans, premiums, or transactions | Overview of name, purpose, and data sources |
| 2. Data chain | Check origin, representativeness, and recent updates of all data sources | Validation protocol for new sources |
| 3. Decision rules | Explain why certain variables count (no more black box) | Transparent explanation for customers and regulators |
| 4. Override procedures | Build procedures for manual interventions with logging | Feedback loop for model improvement |
| 5. AI literacy | Invest in continuous training for all involved teams | Competent employees who can assess models |
1. Map the AI landscape
Which models directly decide on loans, premiums, or transactions? Put name, purpose, and data sources in one overview.
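What such an overview can look like in code, as a minimal sketch: one registry entry per model with name, purpose, decision scope, and data sources. All names here are illustrative, not an existing NovaBank system.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One row in the AI inventory."""
    name: str
    purpose: str                # e.g. "approve or reject mortgage applications"
    decision_scope: str         # loans, premiums, or transactions
    data_sources: list[str] = field(default_factory=list)
    owner: str = ""             # accountable business owner

registry = [
    ModelRecord(
        name="mortgage-scoring-v4",
        purpose="approve or reject mortgage applications",
        decision_scope="loans",
        data_sources=["payment_history", "bureau_data", "app_telemetry"],
        owner="retail-credit",
    ),
]

for record in registry:
    print(f"{record.name} | {record.decision_scope} | {', '.join(record.data_sources)}")
```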
2. Check the data chain
Origin, representativeness, and recent updates. For every new source: validate again.
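A sketch of what "validate again" can mean in practice: one check for freshness and one for representativeness against reference shares. Thresholds, column names, and reference figures are assumptions to be set per source.

```python
from datetime import date, timedelta

import pandas as pd

MAX_AGE = timedelta(days=90)  # assumed freshness threshold; tune per source

def validate_source(df: pd.DataFrame, last_refresh: date,
                    segment_col: str, reference_shares: dict[str, float],
                    tolerance: float = 0.05) -> list[str]:
    """Return findings for one data source; an empty list means it passes."""
    findings = []
    if date.today() - last_refresh > MAX_AGE:
        findings.append(f"stale: last refresh on {last_refresh}")
    observed = df[segment_col].value_counts(normalize=True)
    for segment, expected in reference_shares.items():
        gap = abs(observed.get(segment, 0.0) - expected)
        if gap > tolerance:
            findings.append(f"segment '{segment}' off by {gap:.2f} vs. reference")
    return findings
```

Each finding blocks the source until someone signs off, which is exactly the validation protocol the table above describes.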
3. Expose decision rules
No black box in board presentations. In plain language: why does the mobile operating system count? Why does shopping area X get a risk uplift?
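One way to get there, sketched for a linear scorecard (deep models need a dedicated explainability tool instead): translate an applicant's largest negative score contributions into plain-language reason codes. Weights and feature names below are illustrative.

```python
# Illustrative scorecard weights; negative values push the score down.
WEIGHTS = {
    "months_since_late_payment": -0.8,
    "mobile_os_is_android": -0.2,   # exactly the variable a board should question
    "shopping_area_risk_uplift": -0.5,
    "income_stability": 1.1,
}

REASON_TEXT = {
    "months_since_late_payment": "recent late payments",
    "mobile_os_is_android": "the device type used for the application",
    "shopping_area_risk_uplift": "a risk uplift attached to the shopping area",
    "income_stability": "stability of income",
}

def top_reasons(applicant: dict[str, float], n: int = 3) -> list[str]:
    """The features that pushed this applicant's score down the most."""
    contributions = {f: w * applicant.get(f, 0.0) for f, w in WEIGHTS.items()}
    worst = sorted(contributions.items(), key=lambda kv: kv[1])[:n]
    return [REASON_TEXT[f] for f, value in worst if value < 0]

print(top_reasons({"months_since_late_payment": 2.0,
                   "mobile_os_is_android": 1.0,
                   "income_stability": 0.3}))
# -> ['recent late payments', 'the device type used for the application']
```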
4. Build override procedures
Employees log not only that they intervened manually, but also why. That feedback feeds the retraining process.
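A minimal sketch of such an override log, assuming an append-only JSON Lines file; field names and the file path are illustrative:

```python
import json
from datetime import datetime, timezone

def log_override(case_id: str, model_decision: str, human_decision: str,
                 reason: str, reviewer: str,
                 path: str = "override_log.jsonl") -> None:
    """Append one manual intervention, including the why, to an audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "model_decision": model_decision,
        "human_decision": human_decision,
        "reason": reason,
        "reviewer": reviewer,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a rejection flipped to approval, with the reason the model missed.
log_override("A-1187", "reject", "approve",
             "freelance income verified via annual accounts", "f.compliance")
```

Because every record carries a reason, the log doubles as labeled input for the next retraining round.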
5. Invest in AI literacy
Basic knowledge for customer advisors, in-depth sessions for risk & compliance. Not a one-time workshop, but a continuous learning path.
Fatima's first results
Three months later, the quick wins are visible. The share of "unexplained" rejections drops, complaint handling takes less time, and the marketing department proudly uses the new transparency in campaign material: "We explain how our digital assessment works."
Why it doesn't stop at compliance
The CFO sees something else happening: better insight into the models generates sharper questions for suppliers. NovaBank prunes unnecessary features, reduces license costs, and brings more expertise in-house. The risk budget shifts from firefighting to innovation.

Series outlook
This opening blog is the wake-up call. In the upcoming parts, we'll dive into:
- how real-time fraud detection falls under the AI Act,
- what fairness means for dynamic insurance premiums,
- and how asset managers organize human oversight for algorithmic investment strategies.
Always with the goal Fatima now sees clearly: responsible AI use as a competitive advantage, not a burdensome cost center.
Curious about what an AI literacy program looks like for financial teams? We build modularly: from basic sessions for customer advisors to deep dives for model validators. Feel free to send a message to exchange ideas.
🎯 Free EU AI Act Compliance Check
Discover in 5 minutes whether your AI systems comply with the new EU AI Act legislation. Our interactive tool gives you immediate insight into compliance risks and concrete action steps.