GCP300ADKAE

Deploy multi-agent systems with Agent Development Kit and Agent Engine

In this course, you'll learn to use the Google Agent Development Kit to build complex, multi-agent systems. You will build agents equipped with tools, and connect them with parent-child relationships and flows to define how they interact. You'll run your agents locally and deploy them to Vertex AI Agent Engine to run as a managed agentic flow, with infrastructure decisions and resource scaling handled by Agent Engine.

Google Cloud
✓ Official Google Cloud training • Level: Advanced • ⏱️ Duration: 1 day (7h)

What you will learn

  • Build an agent with tools using the Google Agent Development Kit.
  • Establish interaction patterns between multiple agents with parent-child relationships and flows.
  • Utilize features such as session memory, artifact storage, and callbacks.
  • Deploy a multi-agent app to Agent Engine.
  • Query an agent app running on Agent Engine.
  • Evaluate agents within the Agent Development Kit.

Prerequisites

  • Python
  • Gen AI prompt engineering
  • Gen AI tool use

Target audience

  • Machine learning engineers, Gen AI engineers

Training Program

5 modules to master the fundamentals

Module 1

Objectives

  • Explain how the Agent Development Kit compares to other tools such as the Google Gen AI SDK or LangChain.
  • Describe the parameters used to build an agent in Agent Development Kit.

Topics covered

  • Basics of building an agent in the Agent Development Kit.
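
To make the module concrete, here is a minimal sketch of defining a single agent with the Python ADK. The agent name, model, and instruction are illustrative assumptions, not values taken from the course materials.

    # Minimal single-agent sketch with the Google Agent Development Kit (Python).
    from google.adk.agents import Agent

    root_agent = Agent(
        name="helpdesk_agent",        # hypothetical agent name
        model="gemini-2.0-flash",     # assumed model; any supported Gemini model can be used
        description="Answers general product questions.",
        instruction="You are a concise, friendly product helpdesk assistant.",
    )

During development, an agent defined like this can be exercised locally, for example through the ADK's developer web UI, before any deployment.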

Module 2

Objectives

  • Discuss the importance of structured docstrings and typing when writing tool functions for agents.
  • Demonstrate the ability to provide tools to an agent.
  • List common and useful tools available for Agent Development Kit agents, including LangChain tools.

Topics covered

  • Enhance agents with tools and survey the growing breadth of available tools.
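
As a rough illustration of the tool pattern this module covers, the sketch below defines a typed, docstring-documented Python function and hands it to an agent; the function name, its stubbed logic, and the model are assumptions made for the example.

    # Sketch of a tool function with typing and a structured docstring,
    # registered on an agent. The tool's logic is a stub for illustration.
    from google.adk.agents import Agent

    def get_exchange_rate(currency_from: str, currency_to: str) -> dict:
        """Returns a (stubbed) exchange rate between two currencies.

        Args:
            currency_from: ISO 4217 code of the source currency, e.g. "USD".
            currency_to: ISO 4217 code of the target currency, e.g. "EUR".

        Returns:
            A dict the agent can reason over, containing the requested rate.
        """
        # A real tool would call an external API here; a fixed value keeps the sketch self-contained.
        return {"currency_from": currency_from, "currency_to": currency_to, "rate": 0.92}

    currency_agent = Agent(
        name="currency_agent",
        model="gemini-2.0-flash",
        instruction="Answer currency questions using the get_exchange_rate tool.",
        tools=[get_exchange_rate],   # plain Python functions are exposed to the model as tools
    )

The type hints and structured docstring matter because the ADK derives the tool's schema and description from them, and that is what the model sees when deciding whether to call the tool.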

Activities

Lab: Get started with Agent Development Kit (ADK)

Lab: Empower ADK agents with tools

Module 3

Objectives

  • Describe the directory structure and naming conventions encouraged by the Agent Development Kit.
  • Demonstrate the ability to create multiple agents and relate them to one another with parent-child relationships.
  • Describe the different flow options and when you might use them.
  • Get responses that have passed through multiple agents.
  • Control content at different points with callbacks.

Topics covered

  • Manage communication and task-sharing between agents through parent-child relationships and flows to enable coordinated responses to queries.
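
A hedged sketch of the parent-child and flow ideas from this module, assuming hypothetical agent names and a Gemini model:

    # Sketch of a small agent hierarchy (parent-child) and a fixed sequential flow.
    from google.adk.agents import Agent, SequentialAgent

    researcher = Agent(
        name="researcher",
        model="gemini-2.0-flash",
        instruction="Gather the facts needed to answer the user's question.",
    )

    writer = Agent(
        name="writer",
        model="gemini-2.0-flash",
        instruction="Turn the gathered facts into a short, clear answer.",
    )

    # Parent-child: a coordinator LLM agent that can delegate to its sub-agents.
    coordinator = Agent(
        name="coordinator",
        model="gemini-2.0-flash",
        instruction="Delegate research to the researcher and drafting to the writer.",
        sub_agents=[researcher, writer],
    )

    # Alternative flow: a workflow agent that always runs its children in a fixed order,
    # instead of letting the model decide who handles the request.
    # pipeline = SequentialAgent(name="pipeline", sub_agents=[researcher, writer])

Callbacks slot into the same agent definitions as hooks that run around model or tool calls, which is how content can be inspected or adjusted at specific points in the flow.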

Activities

Lab: Build multi-agent systems with ADK

Module 4

Objectives

  • Describe the benefits of deploying agents, especially multi-agent systems, to Agent Engine rather than self-hosting them, for example with Vertex AI online prediction.
  • Demonstrate deploying to Agent Engine.
  • Demonstrate querying a deployed agent app.

Topics covered

  • Deploying agent apps to Agent Engine and querying them for responses.
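
For context, deploying and querying via the Vertex AI SDK typically follows the pattern sketched below; the project, region, bucket, agent module, and user IDs are placeholders, and the exact API surface may evolve across SDK versions.

    # Sketch: deploy an ADK agent to Vertex AI Agent Engine, then query it.
    import vertexai
    from vertexai import agent_engines
    from my_agent_package.agent import root_agent   # hypothetical package holding the agent

    vertexai.init(
        project="my-gcp-project",                   # placeholder project ID
        location="us-central1",                     # placeholder region
        staging_bucket="gs://my-staging-bucket",    # placeholder bucket for deployment artifacts
    )

    # Agent Engine packages the agent and takes over scaling and infrastructure decisions.
    remote_app = agent_engines.create(
        agent_engine=root_agent,
        requirements=["google-cloud-aiplatform[adk,agent_engines]"],
    )

    # Query the deployed agent app: create a session, then stream response events.
    session = remote_app.create_session(user_id="user_123")
    for event in remote_app.stream_query(
        user_id="user_123",
        session_id=session["id"],
        message="What can you help me with?",
    ):
        print(event)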

Activities

Lab: Deploy ADK agents to Agent Engine

Module 5

Objectives

  • Evaluate agents within the Agent Development Kit.
  • Use the web interface to view evaluations.

Topics covered

  • Evaluate agents within the Agent Development Kit.
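
Evaluation can be driven from the ADK web interface or programmatically. The sketch below assumes a hypothetical agent package and eval file, and the AgentEvaluator call shown (including whether it must be awaited) varies between ADK versions, so treat it as an outline rather than a definitive recipe.

    # Hedged sketch of running an ADK evaluation programmatically.
    # Paths and the evaluate(...) parameters are assumptions; ADK versions differ.
    import asyncio

    from google.adk.evaluation.agent_evaluator import AgentEvaluator

    async def main() -> None:
        # Compares the agent's behaviour against expectations recorded in an eval file
        # (queries plus expected tool use and reference responses).
        await AgentEvaluator.evaluate(
            agent_module="my_agent_package",                        # hypothetical agent package
            eval_dataset_file_path_or_dir="evals/basic.test.json",  # hypothetical eval file
        )

    if __name__ == "__main__":
        asyncio.run(main())

The ADK developer web UI exposes the same evaluations interactively, which is how results are typically inspected.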

Quality Process

SFEIR Institute's commitment: an approach built on excellence to ensure the quality and success of all our training programs.

Teaching Methods Used
  • Lectures / Theoretical Slides — Presentation of concepts using visual aids (PowerPoint, PDF).
  • Technical Demonstration (Demos) — The instructor performs a task or procedure while students observe.
  • Guided Labs — Guided practical exercises on software, hardware, or technical environments.
  • Quiz / MCQ — Quick knowledge check (paper-based or digital via tools like Kahoot/Klaxoon).
Evaluation and Monitoring System

The achievement of training objectives is evaluated at multiple levels to ensure quality:

  • Continuous Knowledge Assessment: Verification of knowledge throughout the training via participatory methods (quizzes, practical exercises, case studies) under instructor supervision.
  • Progress Measurement: Comparative self-assessment system including an initial diagnostic to determine the starting level, followed by a final evaluation to validate skills development.
  • Quality Evaluation: End-of-session satisfaction questionnaire to measure the relevance and effectiveness of the training as perceived by participants.

Upcoming sessions

April 17, 2026
Remote • French
July 10, 2026
Remote • French
October 8, 2026
Remote • French

700€ excl. VAT per learner