Generative AI in Production
In this course, you learn about the challenges that arise when productionizing generative AI-powered applications compared with traditional ML. You learn how to manage experimentation and tuning of your LLMs, then how to deploy, test, and maintain your LLM-powered applications. Finally, you learn best practices for logging and monitoring your LLM-powered applications in production.

What you will learn
- Describe the challenges in productionizing applications using generative AI.
- Manage experimentation and evaluation for LLM-powered applications.
- Productionize LLM-powered applications.
- Implement logging and monitoring for LLM-powered applications.
Prerequisites
- Completion of "Introduction to Developer Efficiency on Google Cloud" or equivalent knowledge.
Target audience
- Developers and machine learning engineers who wish to operationalize generative AI-based applications
Training Program
4 modules to master the fundamentals
Module 1
Objectives
- Understand generative AI operations
- Compare traditional MLOps and GenAIOps
- Analyze the components of an LLM system
Topics covered
- AI System Demo: Coffee on Wheels
- Traditional MLOps vs. GenAIOps
- Generative AI Operations
- Components of an LLM System (see the sketch after this list)
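To make the last topic concrete, here is a minimal sketch (in Python, purely illustrative and not taken from the course labs) of the typical components of an LLM system: input handling, grounding via retrieval, model invocation, and output handling. All names, including the toy retriever and the stubbed model call, are hypothetical.

```python
# Hypothetical sketch of an LLM system's components; names and logic are
# illustrative only and do not come from the course materials.
from dataclasses import dataclass


@dataclass
class Document:
    text: str


def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Toy retriever: rank documents by keyword overlap with the query (grounding)."""
    query_words = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(query_words & set(d.text.lower().split())))[:k]


def build_prompt(query: str, context: list[Document]) -> str:
    """Prompt template: pre-processing that grounds the model on retrieved context."""
    context_block = "\n".join(f"- {d.text}" for d in context)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context_block}\n"
        f"Question: {query}\nAnswer:"
    )


def call_model(prompt: str) -> str:
    """Model invocation: a stub standing in for a hosted LLM endpoint."""
    return "  Coffee on Wheels operates mobile coffee trucks.  "


def postprocess(raw: str) -> str:
    """Output handling: trim whitespace and cap the response length."""
    return raw.strip()[:500]


if __name__ == "__main__":
    corpus = [
        Document("Coffee on Wheels operates mobile coffee trucks."),
        Document("The trucks report sales data every evening."),
    ]
    question = "What does Coffee on Wheels operate?"
    prompt = build_prompt(question, retrieve(question, corpus))
    print(postprocess(call_model(prompt)))
```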
Module 2
Objectives
- Experiment with datasets and prompt engineering.
- Utilize RAG and ReAct architectures.
- Evaluate LLM models.
- Track experiments.
Topics covered
- Datasets and Prompt Engineering
- RAG and ReAct Architecture
- LLM Model Evaluation (metrics and framework)
- Tracking Experiments (see the sketch after this list)
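As a hedged illustration of the evaluation and experiment-tracking topics above, the sketch below scores model outputs against reference answers with a simple keyword-recall metric and appends each run to a JSONL file. The metric, file format, and names are assumptions for illustration; they are not the evaluation framework used in the course.

```python
# Illustrative evaluation and experiment-tracking sketch; the metric and the
# JSONL "tracking store" are simplifications, not the course's framework.
import json
import re
import time
from pathlib import Path


def keyword_recall(prediction: str, reference: str) -> float:
    """Fraction of reference words that appear in the prediction."""
    ref = set(re.findall(r"\w+", reference.lower()))
    pred = set(re.findall(r"\w+", prediction.lower()))
    return len(ref & pred) / len(ref) if ref else 0.0


def evaluate(predict, eval_set: list[dict]) -> float:
    """Average the metric over a dataset of {"prompt": ..., "reference": ...} examples."""
    scores = [keyword_recall(predict(ex["prompt"]), ex["reference"]) for ex in eval_set]
    return sum(scores) / len(scores)


def track_run(name: str, params: dict, score: float, path: str = "experiments.jsonl") -> None:
    """Append one experiment record so different prompts and settings can be compared."""
    record = {"run": name, "params": params, "score": score, "timestamp": time.time()}
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    eval_set = [{"prompt": "What is the capital of France?", "reference": "Paris"}]
    score = evaluate(lambda prompt: "The capital of France is Paris.", eval_set)
    track_run("prompt-v2-temp-0.2", {"temperature": 0.2}, score)
    print(f"keyword recall: {score:.2f}")
```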
Activities
Lab: Unit Testing Generative AI Applications (see the example sketch below)
Optional Lab: Generative AI with Vertex AI: Prompt Design
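In the spirit of the unit-testing lab above (the lab uses its own codebase; the function and test names here are hypothetical), this sketch shows one common pattern: inject a fake model call so the test is deterministic and never hits a live endpoint.

```python
# Hypothetical example of unit testing an LLM-backed function by injecting a
# fake model, so the test is deterministic and needs no network access.
# Run with pytest (e.g. `pytest test_summarize.py`); the file name is illustrative.


def summarize(text: str, model_call) -> str:
    """Application code under test: builds a prompt and post-processes the reply."""
    prompt = f"Summarize in one sentence:\n{text}"
    return model_call(prompt).strip()


def test_summarize_builds_prompt_and_trims_output():
    captured = {}

    def fake_model(prompt: str) -> str:
        captured["prompt"] = prompt          # record the prompt for assertions
        return "  A short summary.  "        # canned, deterministic response

    result = summarize("Long article text...", fake_model)

    assert result == "A short summary."                       # post-processing applied
    assert "Summarize in one sentence" in captured["prompt"]  # template actually used
```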
Module 3
Objectives
- Deploy, package, and version models
- Test LLM systems
- Maintain and update LLM models
- Manage prompt security and migration
Topics covered
- Deployment, packaging, and versioning (GenAIOps; see the versioning sketch after this list)
- Testing LLM systems (unit and integration)
- Maintenance and updates (operations)
- Prompt security and migration
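As a hedged sketch of the versioning theme in this module (the registry, names, and storage choice are assumptions, not the tooling used in the labs), the snippet below pins each prompt template to an explicit version and content hash so deployments and logs can reference exactly which prompt produced a response.

```python
# Illustrative prompt-versioning sketch: each prompt template carries a semantic
# version and a content hash. The in-memory registry stands in for durable storage.
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class PromptVersion:
    name: str
    version: str
    template: str

    @property
    def content_hash(self) -> str:
        """Short fingerprint of the template text, useful in logs and rollbacks."""
        return hashlib.sha256(self.template.encode("utf-8")).hexdigest()[:12]


REGISTRY: dict[tuple[str, str], PromptVersion] = {}


def register(prompt: PromptVersion) -> None:
    REGISTRY[(prompt.name, prompt.version)] = prompt


def get(name: str, version: str) -> PromptVersion:
    return REGISTRY[(name, version)]


if __name__ == "__main__":
    register(PromptVersion(
        name="order-summary",
        version="1.1.0",
        template="Summarize the customer's order in one sentence:\n{order}",
    ))
    prompt = get("order-summary", "1.1.0")
    # The version and hash would be logged alongside every model response.
    print(prompt.version, prompt.content_hash)
```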
Activities
Lab: Vertex AI Pipelines: Qwik Start
Lab: Safeguarding with Vertex AI Gemini API (see the safety-settings sketch below)
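Ahead of the safeguarding lab, here is a hedged sketch of configuring safety thresholds with the Vertex AI Python SDK. The project ID, location, model name, and chosen categories are placeholders, and the exact SDK surface may differ from the version used in the lab.

```python
# Hedged sketch of applying safety thresholds with the Vertex AI Python SDK;
# project, location, and model name are placeholders, not values from the lab.
import vertexai
from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
    SafetySetting,
)

vertexai.init(project="your-project-id", location="us-central1")  # placeholders

model = GenerativeModel("gemini-1.5-flash")  # model name is an assumption

# Block more aggressively for selected harm categories.
safety_settings = [
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HARASSMENT,
        threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    ),
]

response = model.generate_content(
    "Tell me about the Coffee on Wheels trucks.",
    safety_settings=safety_settings,
)
print(response.text)
```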
Module 4
Objectives
- Utilize Cloud Logging
- Version, evaluate, and generalize prompts
- Monitor for evaluation-serving skew
- Utilize continuous validation
Topics covered
- Cloud Logging
- Prompt versioning, evaluation, and generalization
- Monitoring for evaluation-serving skew (see the logging and skew-check sketch after this list)
- Continuous validation
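As a hedged illustration of the logging and skew-monitoring topics above, the sketch below emits one structured record per model interaction using the Python standard library (on Google Cloud the same payloads would typically be sent to Cloud Logging) and flags a drop in live quality relative to an offline evaluation baseline. The field names and the threshold are assumptions.

```python
# Illustrative sketch of structured logging and a basic skew check; fields and
# thresholds are made up for the example, not prescribed by the course.
import json
import logging
import statistics

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("llm-app")


def log_interaction(prompt_version: str, prompt: str, response: str, score: float) -> None:
    """Emit one structured record per model interaction."""
    logger.info(json.dumps({
        "prompt_version": prompt_version,
        "prompt_chars": len(prompt),
        "response_chars": len(response),
        "eval_score": score,
    }))


def skew_alert(offline_scores: list[float], production_scores: list[float],
               max_drop: float = 0.1) -> bool:
    """Flag evaluation-serving skew when live quality drops below the offline baseline."""
    return statistics.mean(offline_scores) - statistics.mean(production_scores) > max_drop


if __name__ == "__main__":
    log_interaction("order-summary@1.1.0", "Summarize order 42", "One espresso, to go.", 0.92)
    if skew_alert([0.90, 0.92, 0.91], [0.70, 0.75, 0.72]):
        print("ALERT: production quality has drifted from offline evaluation")
```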
Activities
Lab: Vertex AI: Gemini Evaluations Playbook
Optional Lab: Supervised Fine Tuning with Gemini for Question and Answering
Quality Process
SFEIR Institute's commitment: an approach built on excellence to ensure the quality and success of all our training programs.
- Lectures / Theoretical Slides — Presentation of concepts using visual aids (PowerPoint, PDF).
- Technical Demonstration (Demos) — The instructor performs a task or procedure while students observe.
- Guided Labs — Guided practical exercises on software, hardware, or technical environments.
- Quiz / MCQ — Quick knowledge check (paper-based or digital via tools like Kahoot/Klaxoon).
The achievement of training objectives is evaluated at multiple levels to ensure quality:
- Continuous Knowledge Assessment: Verification of knowledge throughout the training via participatory methods (quizzes, practical exercises, case studies) under instructor supervision.
- Progress Measurement: Comparative self-assessment system including an initial diagnostic to determine the starting level, followed by a final evaluation to validate skills development.
- Quality Evaluation: End-of-session satisfaction questionnaire to measure the relevance and effectiveness of the training as perceived by participants.
Train multiple employees
- Volume discounts (multiple seats)
- Private or custom session
- On-site or remote