AIENGINEER

AI Engineer Training

Move from simple model querying to building complex autonomous systems. The Agentic AI revolution is here, and this intensive 3-day training gives you the keys to build, orchestrate and deploy production-ready generative AI applications.

From mastering LLM fundamentals to deploying intelligent autonomous agents, you'll learn to leverage the best Cloud services (Vertex AI, Amazon Bedrock, Azure OpenAI). You'll discover how to design high-performance RAG pipelines, and dive deep into Agentic AI by orchestrating multi-agent systems capable of planning, interacting with tools (Function Calling, MCP) and collaborating through LangChain, LangGraph and Google ADK.

Beyond building, this program addresses critical enterprise challenges: model evaluation (G-Eval, DeepEval), security (Guardrails, prompt injection) and scaling strategies to control your production costs.

✓ Official SFEIR Institute training · Level: Intermediate · ⏱️ 3 days (21h)

What you will learn

  • Select and master LLMs: understand architectures, fine-tune parameters (system prompt, context window, temperature, context caching) and choose the right model for each use case
  • Master Agentic AI and Orchestration: model querying technologies (API streaming, SDK), agent orchestration with LangChain, LangGraph, Google ADK, and understand agentic patterns (ReAct, plan-and-execute, multi-agents)
  • Create Interoperable Tools: develop and integrate tools via Function Calling, design and deploy an MCP server (Model Context Protocol)
  • Develop complete RAG Pipelines: manage embeddings, master vector databases, optimize chunking and implement complex retrieval strategies
  • Evaluate: implement automated evaluation protocols (LLM-as-a-judge, DeepEval, Pytest) to rigorously measure reliability, relevance and performance of generative AI applications
  • Industrialize and Secure (LLMOps): evaluate performance via OpenTelemetry, secure applications (Guardrails) and deploy to production while optimizing costs

Prerequisites

  • Python: proficiency in the language (functions, classes, async/await)
  • REST APIs: understanding of HTTP calls, JSON, authentication
  • Git: basic usage (clone, commit, push)
  • Cloud: basic knowledge (GCP, AWS or Azure account)
  • No prior generative AI knowledge is required; Module 1 covers all fundamentals
  • A laptop (Windows, macOS or Linux) with minimum 6 GB RAM
  • Python 3.10+ installed with pip
  • An IDE (VS Code recommended) with Python extension

Target audience

  • Fullstack Developers, Software Engineers, Software Architects, Data Engineers, Tech Leads, MLOps and Data Scientists who want to master building applications with generative AI and Agentic AI

Training Program

15 modules, from LLM fundamentals to production deployment

Topics covered
  • What is generative AI? Traditional AI vs generative AI
  • How an LLM works: the pipeline from prompt to response
  • Transformer Architecture: attention mechanism, encoder vs decoder
  • Tokenization & embeddings: BPE, vector spaces, impact on costs
  • 2026 model landscape: Claude 4.6, Gemini 3.1, GPT-5.x, Llama 4, Mistral 3
  • Generation parameters: temperature, context window, context caching, system prompt
  • Limitations and hallucinations: types, causes, mitigation strategies
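To make the cost impact of tokenization concrete, here is a back-of-the-envelope estimate of the kind practiced in the module. The per-million-token prices are illustrative placeholders, not any vendor's actual rates:

```python
# Rough cost estimate for a single LLM call. Prices are illustrative
# placeholders (USD per 1M tokens), not any specific vendor's rates.
PRICE_PER_M_INPUT = 3.00
PRICE_PER_M_OUTPUT = 15.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (input_tokens / 1_000_000) * PRICE_PER_M_INPUT \
         + (output_tokens / 1_000_000) * PRICE_PER_M_OUTPUT

# A 2,000-token prompt producing a 500-token answer:
cost = estimate_cost(2_000, 500)
print(f"${cost:.4f}")  # → $0.0135
```

Because input and output tokens are priced differently, techniques such as context caching and prompt trimming attack the two terms separately.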
Topics covered
  • Vertex AI: Model Garden, Gemini API, managed endpoints, fine-tuning, pricing
  • Amazon Bedrock: unified API, multi-vendor models, Guardrails, Provisioned Throughput
  • Azure OpenAI Service: deployment, content filtering, TPM quotas, Azure ecosystem integration
  • Provider comparison: latency, costs, GDPR compliance, decision tree
  • Multi-provider strategy: abstraction, failover, load balancing, LiteLLM
  • Self-hosting & open-source: vLLM, Ollama, TGI, GPU costs
Topics covered
  • Advanced patterns: ReAct, Tree-of-Thought, Self-Consistency
  • Structured outputs: JSON mode, Pydantic, constrained grammars
  • System prompts: roles, personas, complex instructions
  • Prompt templating & versioning
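The ReAct pattern listed above is, at its core, a loop of thought, action and observation. A minimal sketch with a stubbed model, so the control flow is visible (a real agent would call an LLM API where `stub_llm` stands):

```python
# Minimal ReAct-style loop. The model and the tool are stubs for
# illustration; a real agent would call an LLM API instead.
def calculator(expression: str) -> str:
    return str(eval(expression))  # demo only; never eval untrusted input

TOOLS = {"calculator": calculator}

def stub_llm(history: list[str]) -> str:
    # Pretend model: first requests a calculation, then answers.
    if not any("Observation" in h for h in history):
        return "Action: calculator[2 + 3]"
    return "Final Answer: 5"

def react(question: str, max_steps: int = 5) -> str:
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        step = stub_llm(history)
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        # Parse "Action: tool[input]" and run the tool.
        tool_name, tool_input = step.removeprefix("Action: ").split("[", 1)
        observation = TOOLS[tool_name](tool_input.rstrip("]"))
        history += [step, f"Observation: {observation}"]
    return "No answer within step budget"

print(react("What is 2 + 3?"))  # → 5
```

The `max_steps` budget is what keeps a misbehaving agent from looping forever, a debugging theme picked up again in the observability module.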
Activities

Advanced Prompt Engineering Lab: Chain-of-Thought, Few-Shot Learning, ReAct Pattern, Structured Outputs with Pydantic

Topics covered
  • Unified function calling: OpenAI, Anthropic, Gemini - common patterns
  • Tool design: schemas, validation, error handling, idempotence
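Tool design boils down to a schema the model sees plus a dispatcher that validates arguments before execution. A minimal sketch in the JSON-Schema style shared by the major providers; the `get_weather` tool itself is a hypothetical example:

```python
# A tool exposed to an LLM: JSON-Schema description + validating
# dispatcher. The get_weather tool is hypothetical and stubbed.
def get_weather(city: str, unit: str = "celsius") -> dict:
    # Stubbed response; a real tool would call a weather API.
    return {"city": city, "temp": 21, "unit": unit}

WEATHER_SCHEMA = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

def dispatch(name: str, args: dict) -> dict:
    """Validate required arguments against the schema, then call the tool."""
    if name != WEATHER_SCHEMA["name"]:
        raise ValueError(f"unknown tool: {name}")
    missing = [k for k in WEATHER_SCHEMA["parameters"]["required"]
               if k not in args]
    if missing:
        raise ValueError(f"missing arguments: {missing}")
    return get_weather(**args)

print(dispatch("get_weather", {"city": "Paris"}))
```

Validating before calling keeps hallucinated or incomplete arguments from reaching the tool, which is also why idempotent tools are easier to retry safely.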
Topics covered
  • Common patterns: streaming, error handling, retry, rate limiting
  • Cost optimization: caching, batching, contextual model selection
  • Multi-model routing: complexity-based routing strategies
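Complexity-based routing can be as simple as a heuristic gate in front of the model call. A toy sketch; the model names and the length/keyword heuristics are illustrative (production routers often use a classifier or a cheap LLM call to score complexity):

```python
# Toy complexity-based model router. Model names and heuristics are
# illustrative, not a specific provider's catalog.
def route(prompt: str) -> str:
    word_count = len(prompt.split())
    needs_reasoning = any(k in prompt.lower()
                          for k in ("prove", "step by step", "analyze"))
    if needs_reasoning or word_count > 200:
        return "large-model"    # expensive, strong reasoning
    if word_count > 30:
        return "medium-model"
    return "small-model"        # cheap, fast

print(route("Translate 'hello' to French"))                   # → small-model
print(route("Analyze the trade-offs of RAG vs fine-tuning"))  # → large-model
```

Routing most traffic to a cheap model while reserving the large one for hard prompts is the mechanism behind the cost savings discussed in the scaling module.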
Activities

LLM APIs & SDKs Lab: Chat completion + streaming, Cross-provider function calling

Topics covered
  • Embeddings: models (OpenAI, Cohere, BGE), vector spaces
  • Vector databases: Pinecone, ChromaDB, Weaviate, Vertex AI Vector Search
  • Chunking strategies: fixed, semantic, recursive, document-aware
  • Advanced retrieval: hybrid search, re-ranking, query expansion
  • RAG patterns: naive, parent-child, corrective RAG
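Of the chunking strategies listed, fixed-size with overlap is the simplest and a useful baseline. A sketch measured in characters for clarity (real pipelines usually count tokens):

```python
# Fixed-size chunking with overlap, the simplest strategy listed
# above. Sizes are in characters here; real pipelines count tokens.
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

doc = "word " * 100          # a 500-character toy document
pieces = chunk_text(doc, size=200, overlap=50)
print(len(pieces), len(pieces[0]))  # → 4 200
```

The overlap ensures a sentence split across a chunk boundary still appears intact in at least one chunk, at the cost of indexing some text twice.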
Topics covered
  • LangChain Architecture: LCEL (LangChain Expression Language), Runnables
  • Key components: ChatModels, Prompts, OutputParsers, Retrievers
  • Advanced chains: sequential, parallel, branching, fallbacks
  • Vector store integration: FAISS, Chroma, Pinecone
  • Memory & state management: ConversationBuffer, migration to LangGraph
Activities

Complete RAG Lab: Indexing & Chunking, Advanced retrieval (similarity, MMR, hybrid search, reranking), RAG LCEL Chain

Topics covered
  • Why LangGraph? Limitations of linear chains, need for graphs
  • Fundamental concepts: StateGraph, nodes, edges, conditional edges
  • State management: typed state, reducers, checkpointing
  • Agentic patterns: ReAct agent, plan-and-execute, reflection
  • Human-in-the-loop: interrupts, approval workflows, breakpoints
  • Subgraphs & composition: modularity, nested agents
  • Persistence & streaming: checkpointers, event streaming
Activities

LangGraph Agent Lab: Multi-step agent with state, tools and conditional routing (5 nodes), Human-in-the-Loop

Topics covered
  • MCP Architecture: client/server, transports (stdio, Streamable HTTP)
  • The 3 primitives: Tools, Resources, Prompts
  • Building an MCP server in Python (FastMCP)
  • Building an MCP server in TypeScript
  • MCP Ecosystem: community servers, enterprise integrations
  • Difference between a tool and an MCP server: when to use each
Activities

MCP Server Lab: 2 MCP tools (calculate, search_knowledge), 2 resources (config, file)

Topics covered
  • ADK Introduction: Google philosophy, positioning vs LangGraph
  • ADK Architecture: Agent, Tool, Session, Runner
  • Agent types: LlmAgent, SequentialAgent, LoopAgent, ParallelAgent
  • Tool ecosystem: function tools, built-in tools, MCP tools
  • Multi-agent orchestration: sub_agents, AgentTool, delegation, hierarchy
  • Multi-agent architectures: supervisor, hierarchical, consensus, swarm
  • ADK vs LangGraph vs CrewAI comparison: selection criteria
Activities

ADK Multi-agents Lab: 3 function tools, 3 specialized agents, sequential + parallel + coordinator orchestration

Topics covered
  • Why evaluate: non-determinism, evaluation pyramid
  • Automated benchmarks: BLEU, ROUGE, MMLU, GSM8K, HumanEval
  • LLM-as-judge: custom criteria, bias, calibration
  • Quality metrics: accuracy, faithfulness, hallucination
  • Performance metrics: TTFT, P95/P99 latency, throughput
  • G-Eval: LLM evaluation with auto-generated criteria
  • DeepEval: Faithfulness, Answer Relevancy, Hallucination metrics
  • Pytest integration: assert_test with DeepEval, golden datasets
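The golden-dataset pattern mentioned above looks roughly like this. The keyword-coverage score is a deliberately simple stand-in for an LLM-as-judge metric, and `fake_app` stands in for the application under test:

```python
# Regression testing against a golden dataset. The keyword-coverage
# score is a simple stand-in for an LLM-as-judge metric; fake_app
# stands in for the application under test.
GOLDEN = [
    {"question": "What does RAG stand for?",
     "keywords": ["retrieval", "augmented", "generation"]},
]

def fake_app(question: str) -> str:
    # Stand-in for the generative AI application being tested.
    return "RAG means Retrieval-Augmented Generation."

def coverage(answer: str, keywords: list[str]) -> float:
    hits = sum(1 for k in keywords if k in answer.lower())
    return hits / len(keywords)

def test_golden_dataset(threshold: float = 0.8) -> None:
    for case in GOLDEN:
        score = coverage(fake_app(case["question"]), case["keywords"])
        assert score >= threshold, f"regression on: {case['question']}"

test_golden_dataset()
print("all golden cases passed")
```

Run from pytest, such assertions act as a quality gate: a prompt or model change that drops a golden case below the threshold fails the build.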
Activities

Evaluation & Tests Lab: Custom relevance and factuality metrics (LLM-as-judge), configurable criteria, batch evaluation, regression tests with golden datasets and version comparison

Topics covered
  • LLM Tracing: making every step observable
  • LangSmith: native LangChain tracing, @traceable decorator
  • Langfuse: open source alternative (MIT), free self-hosting
  • Phoenix (Arize): OpenTelemetry-native, zero vendor lock-in
  • Regression tests with pytest and DeepEval
  • CI/CD pipeline with GitHub Actions: automated quality gates
  • Agent debugging: common problems, loops, wrong tool selection
  • LangGraph Studio: visualization, debug mode, fork & edit
Topics covered
  • LLM security threats: prompt injection, exfiltration, jailbreak
  • Multi-layer defense: input validation, prompt hardening, output validation
  • NeMo Guardrails (NVIDIA): Colang, topic control, jailbreak detection
  • Guardrails AI: composable validators, Hub, PII detection
  • Content filtering in production: multi-layer pipeline
  • Rights propagation and access security
Topics covered
  • The 4 layers: entry, application, inference, observability
  • API Gateway: authentication, rate limiting, routing
  • Semantic cache: embeddings, Redis, production gains
  • Google Cloud Run: containerization, auto-scaling, SSE streaming
  • AWS Lambda: serverless functions, constraints and cold start
  • Vertex AI Endpoints: GPU/TPU model deployment
  • AWS Bedrock AgentCore: agent runtime, framework-agnostic
Topics covered
  • LLM-specific scaling challenges
  • Auto-scaling: horizontal, vertical, predictive
  • LLM cost structure: tokens, infrastructure, observability
  • Model Router: dynamic complexity-based routing (30-85% savings)
  • Optimization techniques: semantic cache, prompt optimization, batch API
  • LangGraph Platform: infrastructure for stateful agents
  • Deployment options: Cloud, Hybrid, Self-Hosted
  • Production checklist: the 6 pillars
Activities

Production & Deployment Lab: API Gateway (rate limiting, authentication, FastAPI), Semantic cache (embeddings, cosine similarity), Monitoring (metrics, costs, alerting), Guardrails (anti-prompt-injection, PII detection)

Related Trainings

SFEIR Institute
Featured training

AI-Augmented Developer

Accelerate the productivity of your development teams with Agentic Coding. In a market where development speed and code quality make the difference, this training transforms your developers into "augmented developers," capable of leveraging the most advanced AI agents. Focused on real-world use cases, the training emphasizes immediate value creation and concrete improvement in developer productivity. Participants leave with methods, workflows, and assets directly applicable to their corporate projects.

2 days
Intermediate
SFEIR Institute
New

AI-Augmented Developer – Advanced

The 'AI-Augmented Developer – Advanced' course is the follow-up to the 'AI-Augmented Developer' course, intended for those who wish to deepen their collaboration with AI agents and the automation of complex processes. Participants learn to orchestrate several specialized agents capable of coordinating, delegating tasks, and continuously improving code quality. They also discover how to turn functional specifications into automated implementations through a specs-oriented approach that ensures consistency and reliability. The training covers the creation of MCP servers from design to production, including configuration and security aspects. This program marks a key step towards complete expertise in augmented development, where humans and AI collaborate fluidly and efficiently to create the software of tomorrow.

1 day
Intermediate
SFEIR Institute

Claude Code Training

Boost your productivity with Claude Code, Anthropic's CLI tool for AI-assisted development. After installing and configuring Claude Code on your workstation, you'll learn basic interactions: creating effective CLAUDE.md files, mastering Plan Mode to review refactorings before execution, and generating unit and integration tests. You'll discover how to organize your documentation and manage prompts with modular rules in .claude/rules/. You'll explore sub-agents and skills: creating autonomous agents to parallelize tasks, orchestrating sequential and parallel patterns, and developing reusable skills to automate your workflows. Finally, you'll master essential commands and tips for maximum daily productivity. Hands-on training with 60% labs on real-world scenarios.

1 day
Fundamental

Upcoming sessions

No date suits you?

We regularly organize new sessions. Contact us to find out about upcoming dates or to schedule a session at a date of your choice.

Register for a custom date

Quality Process

SFEIR Institute's commitment: an excellence approach to ensure the quality and success of all our training programs.

Teaching Methods Used
  • Lectures / Theoretical Slides — Presentation of concepts using visual aids (PowerPoint, PDF).
  • Technical Demonstration (Demos) — The instructor performs a task or procedure while students observe.
  • Guided Labs — Guided practical exercises on software, hardware, or technical environments.
  • Case Study — Analysis of a real or fictional business scenario to derive solutions.
Evaluation and Monitoring System

The achievement of training objectives is evaluated at multiple levels to ensure quality:

  • Continuous Knowledge Assessment: verification of knowledge throughout the training via participatory methods (quizzes, practical exercises, case studies) under instructor supervision.
  • Progress Measurement: comparative self-assessment system including an initial diagnostic to determine the starting level, followed by a final evaluation to validate skills development.
  • Quality Evaluation: end-of-session satisfaction questionnaire to measure the relevance and effectiveness of the training as perceived by participants.

Frequently Asked Questions

What are the prerequisites?
You must be proficient in Python (functions, classes, async/await), understand REST APIs (HTTP, JSON, authentication) and have basic Cloud knowledge (GCP, AWS or Azure). No prior generative AI knowledge is required; Module 1 covers all fundamentals.

Who is this training for?
This training is designed for Fullstack Developers, Software Engineers, Software Architects, Data Engineers, Tech Leads, MLOps and Data Scientists who want to master building applications with generative AI and Agentic AI.

Which tools and frameworks are covered?
The training covers LangChain (LCEL, Runnables, Chains), LangGraph (StateGraph, agents, workflows), Google ADK (multi-agents), MCP (Model Context Protocol), as well as Cloud providers (Vertex AI, Amazon Bedrock, Azure OpenAI). You'll also learn DeepEval, Guardrails and observability tools (LangSmith, Langfuse, Phoenix).

What do you take away from the labs?
In addition to the training materials, you'll leave with code from several hands-on labs: a complete RAG assistant, a multi-step LangGraph agent, an MCP server, and an ADK multi-agent system. A Git repository containing all workshop code will be provided.

Does the training cover production deployment?
Yes, Day 3 is entirely dedicated to production: evaluation and metrics (DeepEval, LLM-as-judge), tracing and debugging (LangSmith, Langfuse), security (Guardrails, prompt injection), Cloud architecture (Cloud Run, Lambda) and cost optimization (semantic cache, model routing).

Can this training be funded?
Our training organizations SFEIR SAS and SFEIR-Est are Qualiopi certified for training activities, which allows you to request funding from your OPCO (in France). Funding acceptance remains at your OPCO's discretion. Contact us for a quote.

2,370€ excl. VAT

per learner