From smart score to duty of care – AI risks in the financial sector

Zahed Ashkara, AI & Legal Expert
7 min read · Finance & AI · May 27, 2025


A rejection in three milliseconds

Fatima, compliance manager at NovaBank, receives an angry email from a customer: "My loan was rejected, but nobody can tell me why." The signature – IT-system decision – sounds cold. Fatima actually knows why: a machine-learning model estimates creditworthiness based on payment behavior, neighborhood, and click traces from the mobile app. Until yesterday, this was mainly an IT story. Since the EU AI Act came into effect this year, responsibility has shifted to the business itself – and thus to teams like Fatima's.


What the law really says

The AI Act sorts financial use cases into three buckets [1]. Algorithms for terrorist financing or social scoring? Prohibited. Systems that determine access to basic banking, loans, or insurance? Automatically high-risk. Chatbots that only handle general questions? Low regulatory pressure, provided they're transparent.

In practice, the most commonly used AI solutions at banks, insurers, and fintechs fall into that middle category. This means: model documentation, data governance, continuous risk analyses, human oversight, and demonstrable AI literacy for everyone working with them.

High-risk in daily operations

The definitions seem abstract, but Fatima recognizes them everywhere on the floor:

  • Credit scoring: The engine that approves or rejects a mortgage within seconds
  • Fraud detection: The real-time transaction monitor that spits out AML alerts
  • Robo-advisors: Systems that recommend savings portfolios
  • Claims processing: The bot at insurers that analyzes photos and suggests partial payouts
  • Dynamic pricing models: Car insurance based on telematics from the car's black box

Even the latter falls under the AI Act because it directly affects premiums and thus access to services.

Rediscovering the human dimension

"Human in the loop" sounded like tick-the-box at NovaBank for years. An employee clicked approve after the model flashed "green." Under the AI Act, that same employee must be able to explain why customer A gets a credit limit and customer B doesn't, including the role of postal code, device type, or timing.

This requires new skills: recognizing variables, seeing bias possibilities, and knowing when you may override a model.

Fatima starts with a simple experiment. She has the team search through twenty rejected files for similarities. Within an hour, they see patterns that previously went unnoticed – higher rejection rates in one specific region, remarkably low scores for freelancers in the cultural sector. The penny drops: AI literacy isn't a luxury; it's necessary to protect duty of care and reputation.
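Fatima's experiment can be sketched in a few lines of code. The records and field names below are purely illustrative, not NovaBank's actual data; the point is that a first bias check is just a grouped rejection rate:

```python
from collections import defaultdict

# Hypothetical mini-sample of decision records; all values are made up.
applications = [
    {"region": "North", "segment": "freelancer", "decision": "rejected"},
    {"region": "North", "segment": "salaried",   "decision": "approved"},
    {"region": "South", "segment": "freelancer", "decision": "rejected"},
    {"region": "South", "segment": "salaried",   "decision": "approved"},
    {"region": "North", "segment": "freelancer", "decision": "rejected"},
]

def rejection_rate_by(records, key):
    """Share of rejected applications per value of `key`."""
    totals, rejected = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec[key]] += 1
        if rec["decision"] == "rejected":
            rejected[rec[key]] += 1
    return {k: rejected[k] / totals[k] for k in totals}

by_region = rejection_rate_by(applications, "region")
by_segment = rejection_rate_by(applications, "segment")
```

In this toy sample, every freelancer is rejected and no salaried applicant is, which is exactly the kind of pattern that warrants a closer look before it becomes a duty-of-care problem.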

Team analyzes patterns in rejected credit applications and discovers bias in AI models

Five steps to action – without magic formulas

Step | Action | Result
1. AI mapping | Inventory all models that directly decide on loans, premiums, or transactions | Overview of name, purpose, and data sources
2. Data chain | Check origin, representativeness, and recent updates of all data sources | Validation protocol for new sources
3. Decision rules | Explain why certain variables count (no more black box) | Transparent explanation for customers and regulators
4. Override procedures | Build procedures for manual interventions with logging | Feedback loop for model improvement
5. AI literacy | Invest in continuous training for all involved teams | Competent employees who can assess models

1. Map the AI landscape

Which models directly decide on loans, premiums, or transactions? Put name, purpose, and data sources in one overview.
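Such an overview can start as something very simple. A minimal sketch, assuming a flat inventory record per model (the model names and fields are hypothetical, not AI Act terminology):

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One row in the AI inventory: name, purpose, and data sources."""
    name: str
    purpose: str
    data_sources: list = field(default_factory=list)
    decides_directly: bool = False  # decides on loans, premiums, or transactions?

inventory = [
    ModelRecord("credit-score-v3", "mortgage approval",
                ["payment history", "app click traces"], decides_directly=True),
    ModelRecord("faq-bot", "general customer questions", ["chat logs"]),
]

# Shortlist the models that directly determine access to services,
# i.e. the likely high-risk candidates under the AI Act.
high_risk = [m.name for m in inventory if m.decides_directly]
```

Even a spreadsheet works; what matters is that every model that touches loans, premiums, or transactions shows up in one place.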

2. Check the data chain

Origin, representativeness, and recent updates. For every new source: validate again.

3. Expose decision rules

No black box in board presentations. In plain language: why does the mobile operating system count? Why does shopping area X get a risk uplift?

4. Build override procedures

Employees log not only that they manually intervened, but also why. That feedback feeds the retrain process.
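A minimal sketch of what such an override log entry could look like, assuming a simple JSON line format (the schema and field names are an illustration, not a prescribed standard):

```python
import json
from datetime import datetime, timezone

def log_override(model: str, application_id: str, model_decision: str,
                 human_decision: str, reason: str) -> str:
    """Record not only *that* someone intervened, but also *why*."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "application_id": application_id,
        "model_decision": model_decision,
        "human_decision": human_decision,
        "reason": reason,  # this free-text field feeds the retraining loop
    }
    return json.dumps(entry)

line = log_override("credit-score-v3", "A-1024", "rejected", "approved",
                    "stable freelance income documented over three years")
```

The "reason" field is the valuable part: aggregated over time, it tells the model team which variables humans keep correcting.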

5. Invest in AI literacy

Basic knowledge for customer advisors, in-depth sessions for risk & compliance. Not as a one-time workshop, but as a continuous learning path [2].

Fatima's first results

Three months later, the quick wins are visible. The percentage of "unexplained" rejections drops, complaint handling takes less time, and the marketing department proudly uses the new transparency in campaign material: "We explain how our digital assessment works."

Why it doesn't stop at compliance

The CFO sees something else happening: better insight into the models generates sharper questions for suppliers. NovaBank prunes unnecessary features, reduces license costs, and brings more expertise in-house. The risk budget shifts from firefighting to innovation.


Series outlook

This opening blog is the wake-up call. In the upcoming parts, we'll dive into:

  • how real-time fraud detection falls under the AI Act,
  • what fairness means for dynamic insurance premiums,
  • and how asset managers organize human oversight for algorithmic investment strategies.

Always with the goal that Fatima now sees clearly: responsible AI use as a competitive advantage, not as a burdensome cost center.


Curious about what an AI literacy program looks like for financial teams? We build modularly: from basic sessions for customer advisors to deep dives for model validators. Feel free to send a message to exchange ideas.


Sources

[1] European Commission (2024). Artificial Intelligence Act – Regulation (EU) 2024/1689. Official Journal of the European Union.
