
Getting Started with Apache Kafka

  • Post category: Java
  • Post last modified: May 3, 2024


Apache Kafka is a distributed event streaming platform for building real-time data pipelines and streaming applications. In this tutorial, we’ll guide you through the basics of Apache Kafka, including installation, key concepts, and practical examples.


Prerequisites:

  1. Java: Kafka runs on the JVM, so make sure you have Java installed on your machine.
  2. Apache ZooKeeper: Kafka relies on ZooKeeper for distributed coordination. You can either run an external ZooKeeper installation or use the ZooKeeper scripts bundled with Kafka for development purposes.

Step 1: Install and Start Kafka:

  1. Download Apache Kafka from the official website (https://kafka.apache.org/downloads).
  2. Extract the downloaded archive to your desired location.
  3. Navigate to the Kafka directory and start ZooKeeper (if not using an external one): bin/zookeeper-server-start.sh config/zookeeper.properties
  4. In a new terminal, start the Kafka server: bin/kafka-server-start.sh config/server.properties

Step 2: Create a Topic:

A Kafka topic is a category or feed name to which records are published. Let’s create a topic named “tutorial_topic.”

bin/kafka-topics.sh --create --topic tutorial_topic --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1

Step 3: Produce Messages:

Use the Kafka producer to publish messages to the topic:

bin/kafka-console-producer.sh --topic tutorial_topic --bootstrap-server localhost:9092

Type a message and press Enter to send it; each line you enter is published to the topic as a separate record.

Step 4: Consume Messages:

Open a new terminal and use the Kafka consumer to read messages from the topic:

bin/kafka-console-consumer.sh --topic tutorial_topic --from-beginning --bootstrap-server localhost:9092

You should see the messages you produced in Step 3.

Step 5: Java Producer and Consumer Example:

Let’s create a simple Java program to produce and consume messages.
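The examples below assume the official kafka-clients library is on your classpath. With Maven, for example, you might declare it like this (the version shown is an assumption; use whichever release matches your broker):

```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>3.7.0</version>
</dependency>
```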

Java Producer:
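A minimal producer sketch, assuming a broker is running at localhost:9092 and the tutorial_topic created in Step 2 exists. The class name TutorialProducer and the sample key/value strings are illustrative, not part of the Kafka API:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TutorialProducer {
    public static void main(String[] args) {
        // Minimal configuration: where the broker is and how to serialize keys/values.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // try-with-resources closes the producer (flushing buffered records) on exit.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 5; i++) {
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("tutorial_topic", "key-" + i, "Hello Kafka " + i);
                // send() is asynchronous; the callback fires when the broker acknowledges.
                producer.send(record, (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();
                    } else {
                        System.out.printf("Sent to partition %d at offset %d%n",
                                metadata.partition(), metadata.offset());
                    }
                });
            }
            producer.flush();
        }
    }
}
```

Run it while the console consumer from Step 4 is open and the five messages should appear there.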

Java Consumer:
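A matching consumer sketch under the same assumptions (broker at localhost:9092, topic tutorial_topic); the class name TutorialConsumer and the group id tutorial-group are arbitrary choices for this example:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TutorialConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Consumers with the same group.id share the topic's partitions between them.
        props.put("group.id", "tutorial-group");
        // Start from the earliest offset when the group has no committed position yet.
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("tutorial_topic"));
            // Poll in a loop; each poll returns whatever records arrived since the last one.
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```

Note the infinite poll loop: a real application would install a shutdown hook that calls consumer.wakeup() to exit cleanly.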


Congratulations! You’ve successfully set up Apache Kafka, created a topic, and produced and consumed messages using both the command line and a simple Java program. This tutorial provides a foundation for exploring more advanced Kafka features, such as partitions, replication, and stream processing. As you delve deeper into Kafka, you’ll discover its versatility in building scalable and resilient data streaming applications.
