GCP200VERTEXAISEC

Vertex AI and Generative AI Security

This course is designed to empower your organization to fully harness the transformative potential of Google's Vertex AI and generative AI (gen AI) technologies, with a strong emphasis on security. Tailored for AI practitioners and security engineers, it provides targeted knowledge and hands-on skills to navigate and adopt AI safely and effectively. Participants will gain practical insights and develop a security-conscious approach, ensuring a secure and responsible integration of gen AI within their organization.

Google Cloud
✓ Official training Google CloudLevel Intermediate⏱️ 2 days (14h)

What you will learn

  • Establish foundational knowledge of Vertex AI and its security challenges.
  • Implement identity and access control measures to restrict access to Vertex AI resources.
  • Configure encryption strategies and protect sensitive information.
  • Enable logging, monitoring, and alerting for real-time security oversight of Vertex AI operations.
  • Identify and mitigate unique security threats associated with generative AI.
  • Apply testing techniques to validate and secure generative AI model responses.
  • Implement best practices for securing data sources and responses within Retrieval-Augmented Generation (RAG) systems.
  • Establish foundational knowledge of AI Safety.

Prerequisites

  • Fundamental knowledge of machine learning, in particular generative AI, and basic understanding of security on Google Cloud.

Target audience

  • AI practitioners, security professionals, cloud architects

Training Program

8 modules to master the fundamentals

Objectives

  • Review Google Cloud Security fundamentals.
  • Establish a foundational understanding of Vertex AI.
  • Enumerate the security concerns related to Vertex AI features and components.

Topics covered

  • →Google Cloud Security
  • →Vertex AI components
  • →Vertex AI Security concerns

Activities

Lab: Vertex AI: Training and Serving a Custom Model

Objectives

  • Control access with Identity Access Management.
  • Simplify permission using organization hierarchies and policies.
  • Use service accounts for least privileged access.

Topics covered

  • →Overview of IAM in Google Cloud

Activities

Lab: Service Accounts and Roles: Fundamentals

Objectives

  • Configure encryption at rest and in-transit.
  • Encrypt data using customer-managed encryption keys.
  • Protect sensitive data using the Data Loss Prevention service.
  • Prevent exfiltration of data using VPC Service Controls.
  • Architect systems with disaster recovery in mind.

Topics covered

  • →Data encryption
  • →Protecting Sensitive Data
  • →VPC Service Controls
  • →Disaster recovery planning

Activities

Lab: Getting Started with Cloud KMS

Lab: Creating a De-identified Copy of Data in Cloud Storage

Objectives

  • Deploy ML models using model endpoints.
  • Secure model endpoints.

Topics covered

  • →Network security
  • →Securing model endpoints

Activities

Lab: Configuring Private Google Access and Cloud NAT

Objectives

  • Write to and analyze logs.
  • Set up monitoring and alerting.

Topics covered

  • →Logging
  • →Monitoring

Objectives

  • Identify security risks specific to LLMs and gen AI applications.
  • Understand methods for mitigating prompt hacking and injection attacks.
  • Explore the fundamentals of securing generative AI models and applications.
  • Introduce fundamentals of AI Safety.

Topics covered

  • →Overview of gen AI security risks
  • →Overview of AI Safety
  • →Prompt security
  • →LLM safeguards

Activities

Lab: Safeguarding with Vertex AI Gemini API

Lab: Gen AI & LLM Security for Developers

Objectives

  • Implement best practices for testing model responses.
  • Apply techniques for improving response security in gen AI applications.

Topics covered

  • →Testing generative AI model responses.
  • →Evaluating model responses.
  • →Fine-tuning LLMs.

Activities

Lab: Measure Gen AI Performance with the Generative AI Evaluation Service

Lab: Unit Testing Generative AI Applications

Objectives

  • Understand RAG architecture and security implications.
  • Implement best practices for grounding and securing data sources in RAG systems.

Topics covered

  • →Fundamentals of Retrieval-Augmented Generation
  • →Security in RAG systems

Activities

Lab: Multimodal Retrieval Augmented Generation (RAG) Using the Vertex AI Gemini API

Lab: Introduction to Function Calling with Gemini

Quality Process

SFEIR Institute's commitment: an excellence approach to ensure the quality and success of all our training programs. Learn more about our quality approach

Teaching Methods Used
  • Lectures / Theoretical Slides — Presentation of concepts using visual aids (PowerPoint, PDF).
  • Technical Demonstration (Demos) — The instructor performs a task or procedure while students observe.
  • Guided Labs — Guided practical exercises on software, hardware, or technical environments.
  • Quiz / MCQ — Quick knowledge check (paper-based or digital via tools like Kahoot/Klaxoon).
Evaluation and Monitoring System

The achievement of training objectives is evaluated at multiple levels to ensure quality:

  • Continuous Knowledge Assessment : Verification of knowledge throughout the training via participatory methods (quizzes, practical exercises, case studies) under instructor supervision.
  • Progress Measurement : Comparative self-assessment system including an initial diagnostic to determine the starting level, followed by a final evaluation to validate skills development.
  • Quality Evaluation : End-of-session satisfaction questionnaire to measure the relevance and effectiveness of the training as perceived by participants.

Upcoming sessions

March 10, 2026
Distanciel • Français
Register
June 25, 2026
Distanciel • Français
Register
September 17, 2026
Distanciel • Français
Register
December 17, 2026
Distanciel • Français
Register

1,400€ excl. VAT

per learner