Confluent Developer Skills for Apache Kafka®
The lessons and activities in this course build participants' skills in writing Producers and Consumers, integrating Kafka with external systems using Kafka Connect, writing streaming applications with Kafka Streams & ksqlDB, and integrating a Kafka client application with Confluent Cloud. Hands-on lab exercises follow the story of building and upgrading a driver location app, applying concepts directly to a working application. Exercises are available in Java, C#, and Python.

What you will learn
- Write Producers and Consumers to send data to and read data from Kafka
- Integrate Kafka with external systems using Kafka Connect
- Write streaming applications with Kafka Streams & ksqlDB
- Integrate a Kafka client application with Confluent Cloud
Prerequisites
- Familiarity with developing professional apps in Java (preferred), C#, or Python.
- A working knowledge of the Apache Kafka architecture, either through prior experience or by taking the Confluent Fundamentals for Apache Kafka course.
Target audience
- Application developers and architects who want to write applications that interact with Apache Kafka.
Training Program
9 modules to master the fundamentals
Module 1: Topics covered
- Write code to connect to a Kafka cluster (see the sketch after this list)
- Distinguish between leaders and followers and work with replicas
- Explain what a segment is and explore retention
- Use the CLI to work with topics, producers, and consumers
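As a quick illustration of the first topic, here is a minimal Java sketch (one of the course's three supported languages) that connects to a cluster with the AdminClient and lists topic names. The broker address localhost:9092 is a placeholder assumption, not course lab code:

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ClusterCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder broker address; substitute your own cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Listing topic names doubles as a basic connectivity test.
            admin.listTopics().names().get().forEach(System.out::println);
        }
    }
}
```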
Module 2: Topics covered
- Describe the work a producer performs, and the core components needed to produce messages
- Create producers and specify configuration properties
- Explain how to configure producers so they know that Kafka has received their messages
- Delve into how batching works and explore batching configurations
- Explore reacting to failed delivery and tuning producers with timeouts
- Use the APIs for Java, C#/.NET, or Python to create a Producer (a minimal sketch follows this list)
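A minimal Java producer sketch touching the delivery-confirmation, batching, and timeout topics above; the broker address, topic name, and tuning values are illustrative assumptions, not the course's exact lab code:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DriverLocationProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // Delivery-confirmation, batching, and timeout knobs from this module:
        props.put(ProducerConfig.ACKS_CONFIG, "all");            // wait for all in-sync replicas
        props.put(ProducerConfig.LINGER_MS_CONFIG, 20);          // allow batching for up to 20 ms
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120000);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(
                new ProducerRecord<>("driver-positions", "driver-1", "47.6,-122.3"),
                (metadata, exception) -> {
                    // Failed deliveries surface here, in the send callback.
                    if (exception != null) exception.printStackTrace();
                });
        }
    }
}
```

Setting acks=all trades a little latency for the strongest confirmation that Kafka has received the message; the callback is where a failed delivery would be handled.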
Module 3: Topics covered
- Create and manage consumers and their property files
- Illustrate how consumer groups and partitions provide scalability and fault tolerance
- Explore managing consumer offsets
- Tune fetch requests
- Explain how consumer groups are managed and their benefits
- Compare and contrast group management strategies and when you might use each
- Use the API for Java, C#/.NET, or Python to create a Consumer (a minimal sketch follows this list)
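A minimal Java consumer sketch showing a consumer group and manual offset management; the group id, topic, and broker address are illustrative assumptions:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class DriverLocationConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "location-dashboard"); // consumer group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");    // manage offsets manually

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("driver-positions"));
            while (true) { // poll forever; Ctrl-C to stop
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    System.out.printf("%s -> %s%n", record.key(), record.value());
                }
                consumer.commitSync(); // explicit offset commit after processing
            }
        }
    }
}
```

Running several copies of this program with the same group.id spreads the topic's partitions across them, which is how consumer groups provide scalability and fault tolerance.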
Module 4: Topics covered
- Describe Kafka schemas and how they work
- Write an Avro-compatible schema and explore using Protobuf and JSON schemas
- Write schemas that can evolve
- Write and read messages using schema-enabled Kafka client applications
- Using Avro and the API for Java, C#/.NET, or Python, write a schema-enabled producer or consumer that leverages the Confluent Schema Registry (sketched after this list)
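A sketch of a schema-enabled Java producer using Avro and Confluent's KafkaAvroSerializer. The schema, topic, and both URLs are placeholder assumptions; the serializer registers and looks up schemas in the Schema Registry automatically:

```java
import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroProducer {
    // Placeholder Avro schema for a driver position event.
    private static final String SCHEMA = """
        {"type":"record","name":"Position","fields":[
          {"name":"driverId","type":"string"},
          {"name":"latitude","type":"double"},
          {"name":"longitude","type":"double"}]}""";

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Confluent's Avro serializer talks to the Schema Registry for us.
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");

        GenericRecord position = new GenericData.Record(new Schema.Parser().parse(SCHEMA));
        position.put("driverId", "driver-1");
        position.put("latitude", 47.6);
        position.put("longitude", -122.3);

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("driver-positions-avro", "driver-1", position));
        }
    }
}
```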
Module 5: Topics covered
- Develop an appreciation for what streaming applications can do for you back on the job
- Describe Kafka Streams and explore streams properties and topologies
- Compare and contrast streams and tables, and relate events in streams to records/messages in topics
- Write an application using the Streams DSL (Domain-Specific Language) (a minimal sketch follows this list)
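A minimal Streams DSL sketch in Java that filters a stream and counts events per key into a table; the topic names and application id are placeholder assumptions:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class PositionCounts {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "position-counts");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Stream: every record is an event; table: the current count per driver.
        KStream<String, String> positions = builder.stream("driver-positions");
        KTable<String, Long> counts = positions
            .filter((driverId, position) -> position != null)
            .groupByKey()
            .count();
        counts.toStream().to("driver-position-counts",
            Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

The KStream/KTable pair makes the stream-versus-table contrast concrete: the stream replays every position event, while the table holds only the latest count per key.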
Module 6: Topics covered
- Describe how Kafka Streams and ksqlDB relate
- Explore the ksqlDB CLI
- Use ksqlDB to filter and transform data
- Compare and contrast types of ksqlDB queries
- Leverage ksqlDB to perform time-based stream operations
- Write a ksqlDB query that relates data between two streams or a stream and a table (a sketch follows this list)
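One possible shape of such a query, submitted here through the ksqlDB Java client rather than the CLI. The host, port, stream and table names, and join condition are all illustrative assumptions:

```java
import io.confluent.ksql.api.client.Client;
import io.confluent.ksql.api.client.ClientOptions;

public class KsqlJoinExample {
    public static void main(String[] args) throws Exception {
        // Placeholder ksqlDB server address.
        ClientOptions options = ClientOptions.create()
            .setHost("localhost")
            .setPort(8088);
        Client client = Client.create(options);

        // A persistent query joining a stream of positions to a table of drivers.
        String sql = """
            CREATE STREAM enriched_positions AS
              SELECT p.driverId, d.name, p.latitude, p.longitude
              FROM positions p
              JOIN drivers d ON p.driverId = d.driverId
              EMIT CHANGES;""";
        client.executeStatement(sql).get();
        client.close();
    }
}
```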
Module 7: Topics covered
- List some of the components of Kafka Connect and describe how they relate
- Set configurations for components of Kafka Connect
- Describe Kafka Connect integration and how data flows between applications and Kafka
- Explore some use cases where Kafka Connect makes development efficient
- Use Kafka Connect in conjunction with other tools to process data in motion efficiently
- Create a Connector and import data from a database to a Kafka cluster (a sketch follows this list)
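A sketch of registering a JDBC source connector by POSTing its configuration to the Kafka Connect REST API. The connector class is Confluent's JDBC source; the database URL, credentials, column name, and connector name are placeholder assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterConnector {
    public static void main(String[] args) throws Exception {
        // Placeholder connector config: a JDBC source reading new rows by id.
        String config = """
            {
              "name": "drivers-jdbc-source",
              "config": {
                "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
                "connection.url": "jdbc:postgresql://localhost:5432/fleet",
                "connection.user": "kafka",
                "connection.password": "secret",
                "mode": "incrementing",
                "incrementing.column.name": "id",
                "topic.prefix": "db-"
              }
            }""";

        // POST the config to the Connect worker's REST endpoint.
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8083/connectors"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(config))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

Once registered, the connector streams each table it reads into a topic named with the configured prefix, with no producer code to write or maintain.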
Module 8: Topics covered
- Delve into how compaction affects consumer offsets
- Explore how consumers work with offsets in scenarios outside of normal processing behavior, and understand how to manipulate offsets to deal with anomalies
- Evaluate decisions about consumer and partition counts and how they relate
- Address decisions that arise from default key-based partitioning and consider alternative partitioning strategies
- Configure producers to deliver messages without duplicates and with ordering guarantees
- List ways to manage large message sizes
- Describe how to work with messages in transactions and how Kafka enables transactions (a sketch follows this list)
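A Java sketch combining the last two producer-side topics: idempotence for duplicate-free, ordered delivery, and a simple transaction that commits or aborts two sends atomically. Broker address, topic, and transactional id are placeholder assumptions:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);   // no duplicates, ordered
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "driver-app-tx-1");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            try {
                producer.send(new ProducerRecord<>("driver-positions", "driver-1", "47.6,-122.3"));
                producer.send(new ProducerRecord<>("driver-positions", "driver-1", "47.7,-122.4"));
                producer.commitTransaction(); // both records become visible atomically
            } catch (Exception e) {
                producer.abortTransaction();  // neither record reaches read_committed consumers
                throw e;
            }
        }
    }
}
```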
Module 9: Topics covered
- Compare and contrast error handling options with Kafka Connect, including the dead letter queue
- Distinguish between various categories of testing
- List considerations for stress and load testing a Kafka system (a testing sketch follows this list)
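On the dead letter queue point, Kafka Connect sinks enable one through configuration such as errors.tolerance=all and errors.deadletterqueue.topic.name. For the testing categories, below is a sketch at the unit-test level using Kafka's MockProducer, which captures records without a broker; the topic and values are placeholders:

```java
import java.util.List;
import org.apache.kafka.clients.producer.MockProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerLogicTest {
    public static void main(String[] args) {
        // autoComplete=true makes every send() succeed immediately.
        MockProducer<String, String> producer =
            new MockProducer<>(true, new StringSerializer(), new StringSerializer());

        // Code under test would normally receive the producer via injection.
        producer.send(new ProducerRecord<>("driver-positions", "driver-1", "47.6,-122.3"));

        // Inspect what was "sent" without any broker involved.
        List<ProducerRecord<String, String>> sent = producer.history();
        if (sent.size() != 1 || !"driver-1".equals(sent.get(0).key())) {
            throw new AssertionError("unexpected records: " + sent);
        }
        System.out.println("records captured: " + sent.size());
    }
}
```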
Quality Process
SFEIR Institute's commitment: an approach built on excellence to ensure the quality and success of all our training programs.
- Lectures / Theoretical Slides — Presentation of concepts using visual aids (PowerPoint, PDF).
- Technical Demonstration (Demos) — The instructor performs a task or procedure while students observe.
- Guided Labs — Guided practical exercises on software, hardware, or technical environments.
The achievement of training objectives is evaluated at multiple levels to ensure quality:
- Continuous Knowledge Assessment: Verification of knowledge throughout the training via participatory methods (quizzes, practical exercises, case studies) under instructor supervision.
- Progress Measurement: Comparative self-assessment system including an initial diagnostic to determine the starting level, followed by a final evaluation to validate skills development.
- Quality Evaluation: End-of-session satisfaction questionnaire to measure the relevance and effectiveness of the training as perceived by participants.
Train multiple employees
- Volume discounts (multiple seats)
- Private or custom session
- On-site or remote