GCP200DEVLLM

Application Development with LLMs on Google Cloud

In this course, you'll dive into the details of using Large Language Models (LLMs) in your applications. You'll start by exploring the core principles that underpin prompting LLMs. Next, you will focus on Google's latest family of models, Gemini. You'll explore the various Gemini models and their multimodal capabilities, including a deep dive into effective prompt design and engineering within the Vertex AI Studio environment. Finally, the course moves on to application development frameworks and shows how to implement these concepts in your applications.

Google Cloud
✓ Official Google Cloud training • Level: Intermediate • ⏱️ 1 day (7h)

What you will learn

  • Explore the different options available for using generative AI on Google Cloud.
  • Use Vertex AI Studio to test prompts for large language models.
  • Develop LLM-powered applications using generative AI.
  • Apply advanced prompt engineering techniques to improve the output from LLMs.
  • Build a multi-turn chat application using the Gemini API and LangChain.

Prerequisites

  • Completion of "Introduction to Developer Efficiency on Google Cloud" or equivalent knowledge.

Target audience

  • Customers, Partners

Training Program

5 modules to master the fundamentals

Module 1
Topics covered
  • What is generative AI
  • Vertex AI on Google Cloud
  • Generative AI options on Google Cloud
  • Introduction to the course use case

Module 2
Topics covered
  • Introduction to Vertex AI Studio
  • Designing and testing prompts
  • Data governance in Vertex AI Studio
Activities

Lab: Getting Started with the Vertex AI Studio User Interface
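
To give a flavor of where this lab leads, below is a minimal sketch of calling a prompt, once designed and tested in Vertex AI Studio, programmatically with the Vertex AI Python SDK. It assumes the `google-cloud-aiplatform` package is installed and Application Default Credentials are configured; the project ID, model name, and prompt are illustrative placeholders, not lab content.

```python
# Minimal sketch: calling a prompt tested in Vertex AI Studio from Python.
# Assumes `google-cloud-aiplatform` is installed and you are authenticated
# (e.g. via `gcloud auth application-default login`). The project ID, model
# name, and prompt below are placeholders.
import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Summarize the key benefits of serverless architectures in three bullet points.",
    generation_config=GenerationConfig(temperature=0.2, max_output_tokens=256),
)
print(response.text)
```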

Module 3
Topics covered
  • Introduction to grounding
  • Integrating the Vertex AI Gemini APIs
  • Chat, memory and grounding
  • Search principles
Activities

Lab: Getting Started with LangChain + Vertex AI Gemini API
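
As an illustration of the chat and memory topics in this module, here is a minimal multi-turn chat sketch using LangChain with the Vertex AI Gemini API; it is not the lab solution. It assumes the `langchain-google-vertexai` package is installed and the project is authenticated; the model name and messages are placeholders.

```python
# Minimal multi-turn chat sketch: LangChain + Vertex AI Gemini API.
# Assumes `langchain-google-vertexai` is installed and credentials are set up.
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-1.5-flash", temperature=0.2)

# Simplest possible "memory": keep the full message history and resend it on
# every turn so the model sees the earlier exchanges.
history = [SystemMessage(content="You are a concise Google Cloud assistant.")]

for user_turn in ["What is Vertex AI Studio?", "How does it relate to Gemini?"]:
    history.append(HumanMessage(content=user_turn))
    reply = llm.invoke(history)  # returns an AIMessage
    history.append(AIMessage(content=reply.content))
    print(f"User: {user_turn}\nAssistant: {reply.content}\n")
```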

Module 4
Topics covered
  • Review of few-shot prompting
  • Chain-of-thought prompting and thinking budgets
  • Meta prompting, multi-step, and panel prompts
  • RAG and ReAct
Activities

Lab: Advanced Prompt Architectures
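
The sketch below shows what few-shot prompting combined with a chain-of-thought instruction can look like with the Vertex AI SDK; the reviews, labels, and model name are invented placeholders, not material from the lab.

```python
# Minimal sketch: few-shot examples plus a chain-of-thought instruction.
# Assumes `google-cloud-aiplatform` is installed and credentials are set up;
# the task and examples are illustrative placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")
model = GenerativeModel("gemini-1.5-flash")

prompt = """Classify the sentiment of each review. Reason step by step before answering.

Review: "Setup took five minutes and everything just worked."
Reasoning: The reviewer highlights speed and reliability, both positive signals.
Sentiment: positive

Review: "The console kept timing out and support never replied."
Reasoning: The reviewer reports repeated failures and missing support, both negative.
Sentiment: negative

Review: "The deployment docs were clear, but billing surprised us at month end."
Reasoning:"""

print(model.generate_content(prompt).text)
```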

Module 5
Topics covered
  • LangChain for chatbots
  • ADK for chatbots
  • Chat retrieval
Activities

Lab: Implementing RAG Using LangChain
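
As a pointer to what this lab covers, here is a minimal retrieval-augmented generation (RAG) sketch built with LangChain, Vertex AI embeddings, and a Gemini chat model. It assumes `langchain-google-vertexai`, `langchain-community`, and `faiss-cpu` are installed; the documents, model names, and question are placeholders rather than lab content.

```python
# Minimal RAG sketch: index a few texts, retrieve, and answer with Gemini.
# Assumes `langchain-google-vertexai`, `langchain-community`, and `faiss-cpu`
# are installed and Google Cloud credentials are configured.
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_google_vertexai import ChatVertexAI, VertexAIEmbeddings

# 1. Index a handful of texts (in practice: chunks of your own corpus).
docs = [
    "Vertex AI Studio is a console tool for designing and testing prompts.",
    "Grounding lets Gemini responses cite an external data source.",
]
embeddings = VertexAIEmbeddings(model_name="text-embedding-004")
retriever = FAISS.from_texts(docs, embedding=embeddings).as_retriever(search_kwargs={"k": 2})

def format_docs(retrieved):
    # Join the retrieved chunks into a single context string.
    return "\n".join(doc.page_content for doc in retrieved)

# 2. Prompt that places the retrieved context next to the question.
prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)

# 3. Chain: retrieve -> format -> fill prompt -> call Gemini -> plain string.
chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | ChatVertexAI(model_name="gemini-1.5-flash")
    | StrOutputParser()
)
print(chain.invoke("What is Vertex AI Studio used for?"))
```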

Quality Process

SFEIR Institute's commitment: a focus on excellence that ensures the quality and success of all our training programs.

Teaching Methods Used
  • Lectures / Theoretical Slides — Presentation of concepts using visual aids (PowerPoint, PDF).
  • Technical Demonstration (Demos) — The instructor performs a task or procedure while students observe.
  • Guided Labs — Guided practical exercises on software, hardware, or technical environments.
  • Quiz / MCQ — Quick knowledge check (paper-based or digital via tools like Kahoot/Klaxoon).
Evaluation and Monitoring System

The achievement of training objectives is evaluated at multiple levels to ensure quality:

  • Continuous Knowledge Assessment: Verification of knowledge throughout the training via participatory methods (quizzes, practical exercises, case studies) under instructor supervision.
  • Progress Measurement: Comparative self-assessment system including an initial diagnostic to determine the starting level, followed by a final evaluation to validate skills development.
  • Quality Evaluation: End-of-session satisfaction questionnaire to measure the relevance and effectiveness of the training as perceived by participants.

Upcoming sessions

January 14, 2026
Remote • French
Register
April 10, 2026
Remote • French
Register
June 12, 2026
Remote • French
Register
September 11, 2026
Remote • French
Register
December 5, 2026
Remote • French
Register

790€ excl. VAT

per learner