dbt

dbt Training - 2 Days to Learn Data Transformation

Learn to transform your data with dbt, the leading tool in the Modern Data Stack.

You'll start by understanding the evolution of data architectures and the difference between ETL and ELT. You'll install dbt, create your first project and connect it to your data sources.

Then you'll learn to build structured data models, choose the right materialization options (table, view, incremental) and organize your metadata with tags. You'll discover how to reference your sources and manage dependencies between models.

You'll explore advanced features: seeds to initialize reference data, snapshots to track history and manage slowly changing dimensions, Jinja macros and variables to automate your transformations.

Finally, you'll implement automated tests to ensure data quality, document your models with lineage, and discover packages from the dbt community.

Hands-on training with 60% labs.

✓ Official SFEIR Institute training • Level: Fundamentals • ⏱️ 2 days (14h)

What you will learn

  • Understand the key concepts, advantages, and architecture of dbt as a data transformation and modeling tool.
  • Create structured data models with dbt, and perform transformations to process and prepare data for analysis.
  • Master advanced features such as macros, Jinja templating, variables, and flow control.
  • Use dbt snapshots to track changes over time and manage historical data, facilitating the analysis of historical trends and slowly changing dimensions.
  • Implement tests to ensure data quality and integrity, allowing for the validation of transformation results and detection of anomalies.

Prerequisites

  • Practical SQL knowledge equivalent to the SQL fundamentals course

Target audience

  • Data Analysts
  • Data Engineers
  • Anyone interested in data transformation

Training Program

13 modules to master the fundamentals

Topics covered
  • Evolution of the data stack
  • Understanding the differences between Extract-Transform-Load (ETL) and Extract-Load-Transform (ELT) data integration approaches
  • Introduction to the modern data stack
Topics covered
  • Overview of dbt
  • Installing dbt and configuring the development environment
  • Creating a dbt project
  • Connecting to data sources
Activities

Setting up a dbt project

Topics covered
  • Understanding dbt models
  • How do dbt models work?
  • Materialization options
  • Configuring materialization
  • Introduction to the tagging feature for metadata organization
Activities

Creating data models with dbt
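To give a concrete flavor of what the labs cover: a dbt model is a SELECT statement, and its materialization is set in a config block. The following is a minimal sketch — the model, schema, and column names are illustrative, not part of the course material.

```sql
-- models/staging/stg_orders.sql (illustrative names)
-- Models are materialized as views by default; this one is incremental.
{{ config(
    materialized='incremental',
    unique_key='order_id',
    tags=['staging']
) }}

select
    order_id,
    customer_id,
    order_date,
    amount
from raw_data.orders

{% if is_incremental() %}
  -- On incremental runs, only process rows newer than what is already built.
  where order_date > (select max(order_date) from {{ this }})
{% endif %}
```

Switching `materialized` to `table` or `view` changes only how the result is persisted in the warehouse, not the SQL itself.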

Topics covered
  • Introduction to dbt sources
  • Configuring dbt sources
  • Working with dbt references
Activities

Configuring dbt sources, referencing external data, and managing model dependencies
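As an illustration of this module, sources are declared once in YAML and then referenced from models; all names below are hypothetical.

```yaml
# models/staging/sources.yml (illustrative names)
version: 2

sources:
  - name: shop          # logical name used in source()
    schema: raw_data    # schema where the loader lands the tables
    tables:
      - name: orders
      - name: customers
```

```sql
-- Using source() and ref() is what lets dbt infer the dependency graph:
select o.order_id, c.customer_name
from {{ source('shop', 'orders') }} as o
join {{ ref('stg_customers') }} as c
  on o.customer_id = c.customer_id
```

dbt resolves these Jinja calls at compile time and orders model builds accordingly.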

Topics covered
  • Introduction to dbt seeds
  • Creating and populating seed data
  • Advantages of using seeds for data initialization
  • Integrating seeds into your dbt models
Activities

Creating and integrating seeds into your dbt projects
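For context, a seed is simply a CSV file versioned with the project (under `seeds/`) and loaded into the warehouse with `dbt seed`. A hypothetical reference file might look like:

```csv
country_code,country_name
FR,France
BE,Belgium
CH,Switzerland
```

Once loaded, the seed table is referenced like any model: `{{ ref('country_codes') }}` (assuming the file is named `country_codes.csv`).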

Topics covered
  • Understanding snapshots in dbt
  • Configuring and defining snapshots
  • Running and managing snapshots
Activities

Implementing a snapshot strategy
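A minimal sketch of what a snapshot definition looks like (table and column names are illustrative):

```sql
-- snapshots/orders_snapshot.sql (illustrative)
{% snapshot orders_snapshot %}

{{ config(
    target_schema='snapshots',
    unique_key='order_id',
    strategy='timestamp',
    updated_at='updated_at'
) }}

select * from raw_data.orders

{% endsnapshot %}
```

Running `dbt snapshot` records changed rows and maintains `dbt_valid_from` / `dbt_valid_to` columns — effectively a type-2 slowly changing dimension.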

Topics covered
  • Understanding macros
  • Jinja, a templating language
  • Using variables to manage data pipeline configuration
Activities

Advanced data transformation and control
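To illustrate, a macro is a reusable Jinja function that expands into SQL; the helper below is hypothetical:

```sql
-- macros/cents_to_euros.sql (hypothetical helper)
{% macro cents_to_euros(column_name) %}
    round({{ column_name }} / 100.0, 2)
{% endmacro %}
```

In a model it would be called as `select {{ cents_to_euros('amount_cents') }} as amount_eur`, and a project variable with a default is read with `{{ var('country', 'FR') }}`.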

Topics covered
  • Introduction to dbt packages
  • Exploring the dbt hub
  • Installing and using a dbt package
Activities

Exploring dbt packages
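For reference, packages are declared in a `packages.yml` file at the project root; the version range below is illustrative — check the dbt Hub for current releases.

```yaml
packages:
  - package: dbt-labs/dbt_utils
    version: [">=1.0.0", "<2.0.0"]
```

`dbt deps` installs the package, after which its macros can be called, e.g. `{{ dbt_utils.generate_surrogate_key(['order_id', 'order_date']) }}`.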

Topics covered
  • Highlighting potential risks in the code
  • Setting up automated tests
  • Choosing the appropriate test
  • Implementing data tests
Activities

Implementing data tests
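As a sketch of what this module produces, generic tests are declared on columns in a YAML file (the model and column names below are illustrative):

```yaml
# models/staging/schema.yml
version: 2

models:
  - name: stg_orders
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
      - name: status
        tests:
          - accepted_values:
              values: ['placed', 'shipped', 'returned']
```

`dbt test` compiles each entry into a query that returns the rows violating the rule, so a passing test returns zero rows.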

Topics covered
  • Documenting data models
  • Using dbt's built-in documentation features to generate and maintain accessible and up-to-date model documentation
  • The importance of lineage
Activities

Documenting dbt models
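For illustration, descriptions live in the same YAML files as tests (names below are hypothetical):

```yaml
version: 2

models:
  - name: stg_orders
    description: "One row per order, cleaned from the raw loader output."
    columns:
      - name: order_id
        description: "Primary key of the order."
```

`dbt docs generate` followed by `dbt docs serve` renders a browsable documentation site, including the lineage graph built from the project's `ref()` and `source()` calls.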

Topics covered
  • Performing data analysis
  • Running custom code before and after dbt execution
  • Creating shareable and accessible data assets
Activities

Creating an exposure
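A minimal sketch of an exposure definition — every value below is illustrative:

```yaml
# models/exposures.yml (illustrative)
version: 2

exposures:
  - name: weekly_revenue_dashboard
    type: dashboard
    owner:
      name: Data Team
      email: data@example.com
    depends_on:
      - ref('fct_orders')
```

The exposure then appears in the docs site, and its upstream lineage shows exactly which models feed the dashboard.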

Topics covered
  • Understanding manifest.json
  • Introduction to run_results.json
Topics covered
  • Resources on dbt best practices
  • About the dbt certification exam

Related Trainings

SFEIR Institute

SQL Fundamentals

Master SQL fundamentals in 2 days. You'll start by understanding the history of data management systems and the role of relational databases today. Then you'll learn to manipulate your data: insert, update, delete, and most importantly query with SELECT. Progressively, you'll master WHERE, ORDER BY, LIMIT clauses, then aggregations with GROUP BY and HAVING. You'll discover the power of joins to combine multiple tables, subqueries and Common Table Expressions (CTEs) to structure complex queries. Finally, you'll cover database design: entity-relationship modeling, normalization, primary and foreign keys, indexes and views. Hands-on training with 60% labs on real-world scenarios.

2 days • Fundamental
Google Cloud

Data Warehousing with BigQuery: Storage Design, Query Optimization, and Administration

In this course, you learn about the internals of BigQuery and best practices for designing, optimizing, and administering your data warehouse. Through a combination of lectures, demos, and labs, you learn about BigQuery architecture and how to design optimal storage and schemas for data ingestion and changes. Next, you learn techniques to improve read performance, optimize queries, manage workloads, and use logging and monitoring tools. You also learn about the different pricing models. Finally, you learn various methods to secure data, automate workloads, and build machine learning models with BigQuery ML.

3 days • Intermediate

Upcoming sessions

March 12, 2026 — Remote • French • ✓ Guaranteed session
April 23, 2026 — Remote • French
June 25, 2026 — Remote • French
August 27, 2026 — Remote • French
October 22, 2026 — Remote • French
December 17, 2026 — Remote • French

Quality Process

SFEIR Institute's commitment: an excellence approach to ensure the quality and success of all our training programs. Learn more about our quality approach

Teaching Methods Used
  • Lectures / Theoretical Slides — Presentation of concepts using visual aids (PowerPoint, PDF).
  • Technical Demonstration (Demos) — The instructor performs a task or procedure while students observe.
  • Guided Labs — Guided practical exercises on software, hardware, or technical environments.
Evaluation and Monitoring System

The achievement of training objectives is evaluated at multiple levels to ensure quality:

  • Continuous Knowledge Assessment: Verification of knowledge throughout the training via participatory methods (quizzes, practical exercises, case studies) under instructor supervision.
  • Progress Measurement: Comparative self-assessment system including an initial diagnostic to determine the starting level, followed by a final evaluation to validate skills development.
  • Quality Evaluation: End-of-session satisfaction questionnaire to measure the relevance and effectiveness of the training as perceived by participants.

Frequently Asked Questions

Does dbt do ETL or ELT?
dbt follows the ELT (Extract-Load-Transform) approach rather than ETL. Data is first loaded into the data warehouse, then transformed using dbt with SQL. This approach leverages the compute power of modern data warehouses and enables Data Analysts to manage their own transformations.

Who is this training for?
This training is designed for Data Analysts, Data Engineers and Analytics Engineers who want to structure their data transformations. Practical SQL knowledge is required (equivalent to our SQL fundamentals training). No prior dbt experience is necessary.

What SQL level is required?
You should be comfortable with SQL basics: SELECT, JOIN, GROUP BY, aggregate functions and subqueries. If you're not familiar with these concepts, we recommend taking our SQL fundamentals training first.

Does this training prepare for the dbt certification?
The training covers dbt core concepts and mentions the dbt Analytics Engineering certification in the conclusion module. To specifically prepare for the certification exam, additional practice on real projects is recommended after the training.

How is the training structured?
The training alternates between theoretical presentations and hands-on labs. Each module is followed by practical exercises to immediately apply concepts: creating models, configuring sources, implementing tests and documentation. About 60% of the time is dedicated to labs.

Can this training be funded?
Our training organizations SFEIR SAS and SFEIR-Est are Qualiopi certified for training activities, enabling funding through various European professional development schemes. Funding acceptance depends on your organization's policies. Contact us for a quote.

1,580€ excl. VAT

per learner