GCP100MODELARMOR

Model Armor: Securing AI Deployments

This course explains how to use Model Armor to protect AI applications, specifically large language models (LLMs). The curriculum covers Model Armor's architecture and its role in mitigating threats like malicious URLs, prompt injection, jailbreaking, sensitive data leaks, and improper output handling. Practical skills include defining floor settings, configuring templates, and enabling various detection types. You'll also explore sample audit logs to find details about flagged violations.

Google Cloud
✓ Official training Google CloudLevel Fundamentals⏱️ 1 day (7h)

What you will learn

  • Explain the purpose of Model Armor in a company's security portfolio.
  • Define the protections that Model Armor applies to all interactions with the LLM.
  • Set up the Model Armor API and find flagged violations.
  • Identify how Model Armor manages prompts and responses.

Prerequisites

  • Working knowledge of APIs
  • Working knowledge of Google Cloud CLI
  • Working knowledge of cloud security foundational principles
  • Familiarity with the Google Cloud console

Target audience

  • Security engineers, AI/ML developers, cloud architects

Training Program

6 modules to master the fundamentals

Objectives

  • Recall the course learning objectives.

Topics covered

  • →What's in it for me?

Objectives

  • Explain the purpose of Model Armor in a company's security portfolio.
  • Identify the subset of top 10 OWASP LLM vulnerabilities that Model Armor addresses.
  • Identify Model Armor key concepts and architecture.
  • Map Model Armor features to the security risks they mitigate.

Topics covered

  • →About Model Armor
  • →LLM security risks

Activities

Knowledge check

Quiz

Objectives

  • Define the protections that Model Armor applies to all interactions with the LLM.
  • Describe floor settings and explain how they work.
  • Explain the purpose of a template and how it works with the API.
  • Configure the four types of detections in the template.

Topics covered

  • →About customization
  • →Floor settings
  • →Guard rails and confidence levels
  • →Templates

Activities

Knowledge check

Quiz

Objectives

  • Set up the Model Armor API and find flagged violations.
  • Explain the prerequisites that are required to work with the API.
  • Describe how to enable the API.
  • Set up logging in the template, explore types of audit logs, and find them in SCC.
  • Explain how to find floor setting violations in SCC and resolve them.

Topics covered

  • →About setup
  • →API setup
  • →Flagged violations

Activities

Quiz

Objectives

  • Identify how Model Armor intercepts and manages prompts and responses.
  • Explain how Model Armor reviews prompts and reports findings based on content safety flags.
  • Explain how Model Armor reviews LLM responses and updates them according to template settings.
  • Execute various commands for sanitizing user prompts against different security features.

Topics covered

  • →Prompts and responses
  • →Application code

Activities

Quiz

Objectives

  • Summarize the course learning objectives.

Topics covered

  • →What did I learn?

Quality Process

SFEIR Institute's commitment: an excellence approach to ensure the quality and success of all our training programs. Learn more about our quality approach

Teaching Methods Used
  • Lectures / Theoretical Slides — Presentation of concepts using visual aids (PowerPoint, PDF).
  • Technical Demonstration (Demos) — The instructor performs a task or procedure while students observe.
  • Guided Labs — Guided practical exercises on software, hardware, or technical environments.
  • Quiz / MCQ — Quick knowledge check (paper-based or digital via tools like Kahoot/Klaxoon).
Evaluation and Monitoring System

The achievement of training objectives is evaluated at multiple levels to ensure quality:

  • Continuous Knowledge Assessment : Verification of knowledge throughout the training via participatory methods (quizzes, practical exercises, case studies) under instructor supervision.
  • Progress Measurement : Comparative self-assessment system including an initial diagnostic to determine the starting level, followed by a final evaluation to validate skills development.
  • Quality Evaluation : End-of-session satisfaction questionnaire to measure the relevance and effectiveness of the training as perceived by participants.

Upcoming sessions

No date suits you?

We regularly organize new sessions. Contact us to find out about upcoming dates or to schedule a session at a date of your choice.

Register for a custom date

700€ excl. VAT

per learner