CONTACT ONS

Cursusaanbod

Session 1 — 09:30 to 10:50 · The AI Security Landscape for Developers
  • What's different about AI security: the gap between traditional AppSec and AI-era threats
  • A quick tour of the AI stack: foundation models, fine-tuning, RAG, agents, and where risk sits in each
  • Why government services face unique exposure: data sensitivity, public accountability, and regulatory context (UK AI Principles, DPA 2018, GDPR)
  • The OWASP Top 10 for LLM Applications (2025) at a glance
  • Interactive Slido poll: "What's the single AI feature your team is most worried about securing?"

Break — 10:50 to 11:10

Session 2 — 11:10 to 12:30 · OWASP Top 10 for LLM Applications (2025) — Part 1
  • LLM01: Prompt Injection — direct and indirect attacks, real-world examples
  • LLM02: Sensitive Information Disclosure — what models leak and why
  • LLM03: Supply Chain — model provenance, third-party plugins, Hugging Face risks
  • LLM04: Data and Model Poisoning — training data integrity basics
  • LLM05: Improper Output Handling — why treating LLM output as user input matters
  • Short live demo: Prompt injection against a sample chatbot

Lunch — 12:30 to 13:20

Session 3 — 13:20 to 14:40 · OWASP Top 10 for LLM Applications (2025) — Part 2+ Hands-On Lab
  • LLM06: Excessive Agency — tool use, permissions, and the principle of least privilege
  • LLM07: System Prompt Leakage — what happens when your system prompt escapes
  • LLM08: Vector and Embedding Weaknesses — RAG-specific pitfalls
  • LLM09: Misinformation and Over-reliance — hallucinations as a security issue
  • LLM10: Unbounded Consumption — denial-of-wallet and resource exhaustion
  • Hands-on lab (~30 minutes): Delegates attack and then harden a small LLM-backed service. Each delegate tests at least one prompt-injection pattern and implements one control. Group debrief at the end.

Break — 14:40 to 15:00

Session 4 — 15:00 to 16:30 · Building Secure AI Services for Government + Close
  • Secure-by-design patterns: input validation, output filtering, guardrails
  • Authentication, authorisation, and session management for AI-exposing endpoints
  • Logging, monitoring, and incident response considerations for AI features
  • Data protection touchpoints: DPIA triggers for AI features, UK GDPR Article 22 in practice
  • Putting it together: a lightweight threat-modelling walkthrough for an AI feature
  • Implementation planning: each delegate drafts their three quick wins for the week ahead
  • Q&A, resources, and next steps

Vereisten

Prerequisites
  • Working knowledge of at least one modern programming language (Python, JavaScript/TypeScript, Go, Java, or C#)
  • Basic familiarity with web applications and APIs
  • No prior AI or security background is required — the course is designed as a levelling-up foundation
Audience
  • Software developers and engineers building or integrating AI features
  • Platform engineers and DevOps engineers supporting AI workloads
  • Technical leads and architects responsible for AI-powered services
  • Security champions embedded in engineering teams
  • QA and test engineers testing AI-enabled systems
 7 Uren

Aangepaste bedrijfsopleiding

Opleidingsoplossingen ontworpen exclusief voor bedrijven.

  • Aangepaste inhoud: We passen de syllabus en praktijkopdrachten aan naar de echte doelen en behoeften van uw project.
  • Voor flexibel schema: Datums en tijden aangepast aan het rooster van uw team.
  • Formaat: Online (live), In-company (bij uw kantoren) of Hybride.
Investering

Prijs per privégroep, online live training, startend vanaf 1600 € + BTW*

Neem contact met ons op voor een exacte offerte en om onze laatste promoties te horen

Voorlopige Aankomende Cursussen

Gerelateerde categorieën