WE_N2_PRD_GENAI

Product Management & GenAI: Strategic Fundamentals

The integration of Generative AI is radically transforming the role of the Product Manager, shifting it from "Feature Orchestrator" to "Architect of Uncertainty". Unlike traditional deterministic projects, GenAI projects introduce a probabilistic element that must be properly framed. This intensive session equips Product Managers with a critical framework to validate, design, and manage a product that integrates Generative AI. It provides an understanding of the ecosystem and a working command of the new technological building blocks (LLM, RAG, Agents), and helps avoid common "Hype" pitfalls by focusing on creating real, secure value.

WEnvision
✓ Official WEnvision training · Level: Intermediate · ⏱️ 0.5 day (3.5h)

What you will learn

  • Distinguish the specificities of a Generative AI project compared to traditional software development (uncertainty management, prior validation).
  • Identify relevant use cases and disqualify "false good ideas" (Anti-patterns) where AI is not necessary.
  • Understand key technical components (LLM, RAG, Fine-tuning, Agents) to communicate effectively with technical teams.
  • Master strategic decision criteria: compliance (GDPR, AI Act), data security, and cost management (FinOps/Tokens).
  • Drive AI product quality and performance through adapted metrics (evaluation, hallucination management).

Prerequisites

  • Significant experience in product management.
  • Basic digital and data literacy.
  • No coding skills required.
  • A computer with internet connection to access materials and collaboration tools.

Target audience

  • Product Managers (Confirmed/Senior)
  • Heads of Product
  • Experienced Product Owners
  • Anyone in charge of product strategy looking to integrate generative AI

Training Program

4 modules to master the fundamentals

Module 1: Topics covered
  • The "Reality Check" in Discovery: from determinism (code) to probabilism (model). Understanding why technical feasibility must precede need validation (Proof of Value).
  • Data as a product: the critical importance of "Data Readiness" (quality, structure, accessibility).
  • Anti-patterns & the eligibility matrix: knowing when to say "No" (the Calculator, Workflow, and Ground Truth traps). The golden rule: use AI to create and transform, not to execute strict rules.
  • The Run: introduction to qualitative monitoring and drift management.
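The anti-pattern filter described above can be sketched as a toy decision helper. This is purely illustrative and not part of the course materials: every field name and rule below is an invented stand-in for the judgment a Product Manager applies.

```python
# Illustrative sketch of the "eligibility matrix" idea: screen out the
# classic anti-patterns before greenlighting a GenAI use case.
# All field names are hypothetical, chosen for this example only.

def is_genai_eligible(task: dict) -> bool:
    """Return False for the classic anti-patterns; True for candidate use cases."""
    # The "Calculator" trap: exact computation belongs in deterministic code.
    if task.get("needs_exact_computation"):
        return False
    # The "Workflow" trap: strict rule execution belongs in classic automation.
    if task.get("is_strict_rule_execution"):
        return False
    # The "Ground Truth" trap: without reference data, output cannot be evaluated.
    if not task.get("has_ground_truth_data"):
        return False
    # Remaining tasks that create or transform content are candidates.
    return task.get("creates_or_transforms_content", False)

print(is_genai_eligible({
    "needs_exact_computation": False,
    "is_strict_rule_execution": False,
    "has_ground_truth_data": True,
    "creates_or_transforms_content": True,
}))  # True: e.g. summarizing support tickets
```

The point of such a filter is the order of the checks: the anti-patterns disqualify a use case before any notion of AI capability is even discussed.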
Module 2: Topics covered
  • The model landscape: choosing the right tool. LLM (powerful generalists) vs SLM (fast specialists). Introduction to multimodality (text, image, audio).
  • RAG (Retrieval Augmented Generation): connecting AI to enterprise knowledge to limit hallucinations and cite sources.
  • Fine-tuning vs prompting: distinguishing instruction (prompt) from education (fine-tuning). Knowing when to invest in training.
  • Assistant vs Agent: understanding the autonomy scale, from the human-validated "Copilot" to the action-executing "Agent".
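The RAG pattern mentioned above can be illustrated with a minimal, dependency-free sketch: retrieve the most relevant internal document, then ground the prompt in it so the model can cite its source. Real systems use vector embeddings for retrieval; here, plain word overlap stands in for semantic search, and all document ids and texts are invented.

```python
# Minimal sketch of the RAG pattern: retrieval, then a grounded prompt.
# Word overlap is a crude stand-in for embedding-based semantic search.

def retrieve(query: str, documents: dict) -> tuple:
    """Return the (doc_id, text) pair sharing the most words with the query."""
    q_words = set(query.lower().split())
    return max(documents.items(),
               key=lambda kv: len(q_words & set(kv[1].lower().split())))

def build_grounded_prompt(query: str, documents: dict) -> str:
    doc_id, text = retrieve(query, documents)
    # Inject the retrieved passage and its source id into the prompt, so the
    # model answers from enterprise knowledge and can cite where it came from.
    return (f"Answer using ONLY this source [{doc_id}]:\n{text}\n\n"
            f"Question: {query}")

docs = {
    "HR-042": "Employees accrue 25 vacation days per year.",
    "IT-007": "Password resets are handled by the service desk.",
}
print(build_grounded_prompt("How many vacation days per year?", docs))
```

For a Product Manager, the key takeaway is architectural: the knowledge lives outside the model, so it can be updated, access-controlled, and audited without retraining anything.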
Module 3: Topics covered
  • Regulation and data (the legal filter): GDPR (anonymization and least privilege), AI Act (transparency obligations and high-risk system documentation).
  • Infrastructure and security (the technical filter): decision matrix of public SaaS (speed/performance) vs private cloud/on-premise (sovereignty/control).
  • FinOps and performance (the economic filter): token economics (estimating and controlling variable costs), managing latency (impact on user experience and streaming).
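The token economics mentioned in the economic filter reduce to simple arithmetic once usage is estimated. The sketch below is a back-of-the-envelope estimator; the request volumes, token counts, and per-million-token prices are placeholders, not current vendor rates.

```python
# Back-of-the-envelope token cost estimator for an LLM-backed feature.
# All figures in the example are hypothetical, not real vendor pricing.

def monthly_cost(requests_per_day: int,
                 input_tokens: int, output_tokens: int,
                 price_in_per_m: float, price_out_per_m: float,
                 days: int = 30) -> float:
    """Estimate the monthly variable cost (same currency as the prices)."""
    total_in = requests_per_day * input_tokens * days
    total_out = requests_per_day * output_tokens * days
    return (total_in / 1e6) * price_in_per_m + (total_out / 1e6) * price_out_per_m

# Example: 10,000 requests/day, 1,500 input + 500 output tokens per request,
# at hypothetical rates of 2 EUR / 1M input and 8 EUR / 1M output tokens.
cost = monthly_cost(10_000, 1_500, 500, 2.0, 8.0)
print(f"{cost:.0f} EUR / month")  # 2100 EUR / month
```

The structure of the formula is the strategic point: cost is variable and scales linearly with usage, and output tokens are typically priced several times higher than input tokens, so verbose responses are a direct cost driver.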
Module 4: Topics covered
  • Summary of key points.
  • Questions and answers.

Related Trainings

SFEIR Institute

AI Engineer

Move from simple model querying to building complex autonomous systems. The Agentic AI revolution is here, and this intensive 3-day training gives you the keys to build, orchestrate and deploy production-ready generative AI applications. From mastering LLM fundamentals to deploying intelligent autonomous agents, you'll learn to leverage the best Cloud services (Vertex AI, Amazon Bedrock, Azure OpenAI). You'll discover how to design high-performance RAG pipelines, and dive deep into Agentic AI by orchestrating multi-agent systems capable of planning, interacting with tools (Function Calling, MCP) and collaborating through LangChain, LangGraph and Google ADK. Beyond building, this program addresses critical enterprise challenges: model evaluation (G-Eval, DeepEval), security (Guardrails, prompt injection) and scaling strategies to control your production costs.

3 days · Intermediate
SFEIR Institute

Claude Code Training

Boost your productivity with Claude Code, Anthropic's CLI tool for AI-assisted development. After installing and configuring Claude Code on your workstation, you'll learn basic interactions: creating effective CLAUDE.md files, mastering Plan Mode to review refactorings before execution, and generating unit and integration tests. You'll discover how to organize your documentation and manage prompts with modular rules in .claude/rules/. You'll explore sub-agents and skills: creating autonomous agents to parallelize tasks, orchestrating sequential and parallel patterns, and developing reusable skills to automate your workflows. Finally, you'll master essential commands and tips for maximum daily productivity. Hands-on training with 60% labs on real-world scenarios.

1 day · Fundamental

Upcoming sessions

No date suits you?

We regularly organize new sessions. Contact us to find out about upcoming dates or to schedule a session at a date of your choice.

Register for a custom date

Quality Process

SFEIR Institute's commitment: an excellence approach to ensure the quality and success of all our training programs. Learn more about our quality approach.

Teaching Methods Used
  • Lectures / Theoretical Slides — Presentation of concepts using visual aids (PowerPoint, PDF).
  • Technical Demonstration (Demos) — The instructor performs a task or procedure while students observe.
  • Quiz / MCQ — Quick knowledge check (paper-based or digital via tools like Kahoot/Klaxoon).
Evaluation and Monitoring System

The achievement of training objectives is evaluated at multiple levels to ensure quality:

  • Continuous Knowledge Assessment: verification of knowledge throughout the training via participatory methods (quizzes, practical exercises, case studies) under instructor supervision.
  • Progress Measurement: comparative self-assessment system including an initial diagnostic to determine the starting level, followed by a final evaluation to validate skills development.
  • Quality Evaluation: end-of-session satisfaction questionnaire to measure the relevance and effectiveness of the training as perceived by participants.

395€ excl. VAT

per learner