Privacy & Security by Design for AI Teams

Practical training for IT, Security and Data teams. Learn technical security of AI systems, data protection in the AI lifecycle, conducting Privacy Impact Assessments and implementing secure AI development practices.

2 half-days
Max. 10 participants

What You Will Learn

AI Security Fundamentals

  • Identify and mitigate AI-specific security risks
  • Recognize and prevent prompt injection and jailbreaking
  • Data leakage prevention in AI systems
  • Perform security testing of AI models

Privacy by Design

  • Conduct Privacy Impact Assessments (PIA) for AI
  • Apply data minimization and purpose limitation
  • Implement privacy-enhancing technologies
  • Design GDPR-compliant AI systems

Secure Development

  • Implement security frameworks for AI
  • Vendor evaluation and third-party risk assessment
  • Secure model deployment and monitoring
  • Incident response procedures for AI systems

What It Gives You

Technical
Security Expertise
In-depth knowledge of AI security
GDPR & NIS2
Compliance Ready
Comply with legislation and regulations
Hands-on
Practical Skills
Directly applicable in your work

What You Can Expect

AI security fundamentals: prompt injection, jailbreaking, data leakage
Privacy by Design principles in AI systems
Data protection in practice: minimization, purpose limitation, logging
Secure AI development: model security, vendor evaluation
Conducting Privacy Impact Assessments (PIA)
Implementing security frameworks for AI systems
Technical security measures: encryption, access control
GDPR and NIS2 compliance for AI
Hands-on workshops with practical cases
Best practices and common pitfalls
Certificate of participation

Preview of the Training

View some slides from the training to get an impression of what you can expect.

What will we cover?

In this practical training for IT, Security and Data teams we learn:

  • AI security fundamentals: prompt injection, jailbreaking, data leakage
  • Implementing Privacy by Design principles in AI systems
  • Data protection in the AI lifecycle: minimization, purpose limitation
  • Conducting Privacy Impact Assessments (PIA) for AI projects
  • Secure AI development: model security, vendor evaluation
  • Implementing security frameworks and technical measures
  • Ensuring GDPR and NIS2 compliance for AI systems

Your Trainer

Zahed Ashkara - AI Trainer

Zahed Ashkara

AI Enablement Specialist & Trainer

As the founder of Embed AI, Zahed combines his expertise in AI implementation with a passion for organizational transformation. He helps organizations strategically and responsibly integrate AI through a holistic enablement approach.

With his background in strategy, governance, and practical AI implementation, he guides organizations in building AI competencies, creating the right culture, and implementing governance structures that enable innovation.

Zahed's unique combination of strategic insight, technical AI knowledge, and practical implementation experience makes him the ideal guide for your AI transformation. His pragmatic approach ensures you not only gain knowledge, but actually integrate it into your organization for lasting success.

Investment

On request,-excl. VAT per group

Included:

  • Lunch & refreshments
  • Extensive documentation
  • Digital workbook
  • Certificate of participation
  • Access to online learning environment

Available Dates

May

No longer available

June

No longer available

July

July 4, 2025
July 11, 2025
July 18, 2025
July 25, 2025

August

August 1, 2025
August 8, 2025
August 15, 2025
August 22, 2025
August 29, 2025

What Participants Say

mr. Amo Edmond Amoh
Lawyer
This training provided an in-depth and practice-oriented look at privacy and security aspects of AI implementation in the legal field. We discussed privacy by design principles, GDPR compliance for AI tools, and how to work safely with sensitive data. The focus on data protection and secure AI development helped me understand how we can deploy AI responsibly without privacy risks. A valuable training for anyone working with confidential information.