Have you considered the simplest way to write and read messages from Kafka? If not, in this article you will discover how you can use the Kafka Console Producer to write records to a Kafka topic straight from the command line. When your applications aren’t yet generating data to your topics, producing messages from the command line is a terrific way to quickly test new consumer applications.

So what are you waiting for? Read along to learn how the Kafka Console Producer works, explore its key features, and follow some easy steps to get started with it. By the end of this article, you will also know the best strategies to keep in mind while working with Kafka Console Producers and Consumers.

What is Kafka Console Producer?

The Kafka Console Producer CLI reads data from standard input and publishes it to a Kafka topic.


In a Kafka system, a producer sends data to the Kafka cluster, targeting a specific topic. Consumers, on the other hand, subscribe to a topic of interest and process the messages independently from other consumers. While producers are typically applications, the Kafka Console Producer utility enables users to manually send data to a topic through the command-line interface.

Easy Steps to Get Started with Kafka Console Producer Platform


In this section, you will learn how to send and receive messages from the command line. Follow the steps below to work with Kafka Console Producer and produce messages:

Step 1: Set Up your Project

First, create a new directory at your desired location using the following command:

mkdir console-consumer-producer-basic && cd console-consumer-producer-basic
  • mkdir console-consumer-producer-basic: Creates a new directory named console-consumer-producer-basic.
  • cd console-consumer-producer-basic: Changes the current directory to console-consumer-producer-basic, navigating into the newly created folder.
  • These commands set up a folder where you can work on a console consumer-producer project.

Now, you need to set up a docker-compose.yml file to run the Confluent Platform. You can simply copy-paste the configuration below into docker-compose.yml:

---
version: '2'

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-kafka:6.2.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
  • version: Specifies version: '2', the Docker Compose file format version.
  • Services: Defines two services: zookeeper and broker.
  • Zookeeper Service:
    • Uses the image confluentinc/cp-zookeeper:6.2.1.
    • Sets the hostname and container name to zookeeper.
    • Exposes port 2181, which is the default Zookeeper port.
    • Sets environment variables:
      • ZOOKEEPER_CLIENT_PORT: Specifies the port as 2181.
      • ZOOKEEPER_TICK_TIME: Sets the ticking time to 2000 ms.
  • Broker Service:
    • Uses the image confluentinc/cp-kafka:6.2.1.
    • Sets the hostname and container name to broker.
    • Depends on the zookeeper service, ensuring Zookeeper starts first.
    • Exposes port 29092 (mapped to localhost).
    • Configures environment variables for Kafka:
      • KAFKA_BROKER_ID: Sets the broker ID to 1.
      • KAFKA_ZOOKEEPER_CONNECT: Connects to zookeeper:2181.
      • KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: Defines PLAINTEXT protocols.
      • KAFKA_ADVERTISED_LISTENERS: Advertises broker addresses (internal at 9092 and external at localhost:29092).
      • KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: Sets replication factor to 1.
      • KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: Sets initial rebalance delay to 0 ms.

Next, launch your Confluent platform using the following command:

docker-compose up -d
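Optionally, you can run a quick sanity check that both containers came up before moving on. The service names below match the docker-compose.yml above:

docker-compose ps

Both the zookeeper and broker services should be listed with a state of Up.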



Step 2: Create the Kafka Topic

After starting the Kafka and Zookeeper services on the Confluent platform, let’s create a Kafka Topic. Enter the following command:

docker-compose exec broker kafka-topics --create --topic orders --bootstrap-server broker:9092
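If you want to double-check that the topic was created, you can describe it. This is an optional verification step:

docker-compose exec broker kafka-topics --describe --topic orders --bootstrap-server broker:9092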

Read more in detail about creating Kafka Topics and Kafka Compacted topics here.

Step 3: Start a Kafka Console Consumer

Now, you need to set up the Kafka Console Consumer to read the records sent to the topic created above. So, continue in the same terminal and enter the following command to open a shell on the broker container:

docker-compose exec broker bash

Now, enter the following command within this new terminal to start the Kafka Console Consumer:

kafka-console-consumer \
  --topic orders \
  --bootstrap-server broker:9092

Step 4: Produce your Records using Kafka Console Producer

Now that your Kafka Console Consumer is running, let’s publish some records using the Kafka Console Producer. So, open a new terminal and enter the following command to open another shell on the broker container:

docker-compose exec broker bash

In the new terminal that opens, enter the following command to run your Kafka Console Producer:

kafka-console-producer \
  --topic orders \
  --bootstrap-server broker:9092

Wait a few seconds for the Kafka Console Producer to start, then enter a few strings; each line you type is sent as a record, as shown below:

hevo
is
a
no code
data pipeline

Send all the records and check the consumer window; the same records will appear there, as shown below. Once you have received all the records from the Kafka Console Producer, press Ctrl+C to stop your consumer.
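The consumer window should display:

hevo
is
a
no code
data pipeline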


Step 5: Send New Records from Kafka Console Producer

You can observe that because the Kafka Console Consumer was already running, it displayed the incoming records right away. You have since stopped that consumer, so the records you produce next will arrive while no consumer is running. To read those records, along with everything published earlier, you can start a new consumer with the --from-beginning flag.

So, go back to your Kafka Console Producer and send a few records as shown below:

you are learning
to stream
all records
using Kafka Console Producer

Step 6: Start a New Consumer

After sending these records, start a new Kafka Console Consumer by entering the following command:

kafka-console-consumer \
  --topic orders \
  --bootstrap-server broker:9092 \
  --from-beginning

Wait for a few seconds for Kafka Consumer to start. The following output will be displayed:

hevo
is
a
no code
data pipeline
you are learning
to stream
all records
using Kafka Console Producer

Once you have received all the records, you can close this consumer terminal using the Ctrl+C keys.

Step 7: Produce Records with Key-Value Pairs

If you have been working with Kafka, you might know that Kafka works with key-value pairs. In the previous steps, you sent records containing only values, so the keys of all those records are null. Let’s see how you can enter some valid keys. Before you begin, make sure you stop the previously running Kafka Console Producer using Ctrl+C.

Now, start a new Kafka Console Producer using the following command:

kafka-console-producer \
  --topic orders \
  --bootstrap-server broker:9092 \
  --property parse.key=true \
  --property key.separator=":"

After the Kafka Console Producer starts, enter the following Key-Value pairs:

key1:programming languages
k1:python
k2:java

Step 8: Start a Consumer to display Key-Value Pairs

After sending the above records, start a new Kafka Console Consumer using the following command:

kafka-console-consumer \
  --topic orders \
  --bootstrap-server broker:9092 \
  --from-beginning \
  --property print.key=true \
  --property key.separator=":"

Wait for a few seconds, your Consumer will start and display the following output:

null:hevo
null:is
null:a
null:no code
null:data pipeline
null:you are learning
null:to stream
null:all records
null:using Kafka Console Producer
key1:programming languages
k1:python
k2:java

You can observe that the records entered without keys have their keys set to null. You can also see that all the records from the beginning are displayed; this was achieved using the --from-beginning flag.

Great work! You have gained a basic understanding of how to use the Kafka Console Producer and Consumer to your advantage. To shut down the containers, run the docker-compose down command.

Create a Kafka Console Producer and Consumer

In this section, you will learn in detail how to create a Kafka Console Producer and Consumer using Kafka’s command-line interface. To do so, you use the bin/kafka-console-producer.sh and bin/kafka-console-consumer.sh scripts found in the Kafka directory. Follow the steps below:

Step 1: Let’s Begin With Starting Up Zookeeper and Kafka Cluster.

First, navigate to the root of the Kafka directory and run the following commands, each in a separate terminal, to start Zookeeper and the Kafka broker, respectively.

$ bin/zookeeper-server-start.sh config/zookeeper.properties
$ bin/kafka-server-start.sh config/server.properties

Alternatively, you can also start Apache Kafka using KRaft. Below are the steps to start Kafka using KRaft:

Begin by generating a Cluster UUID

$ KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"

Next, format the Log Directories

$ bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties

And start the Kafka Server

$ bin/kafka-server-start.sh config/kraft/server.properties

You will now be ready to use the Kafka environment once the server launches successfully.

If you wish to learn more on how to run Kafka without Zookeeper, read: Installing Apache Kafka without Zookeeper: Easy Steps 101

Step 2: Now Create a Kafka Topic.

Run the below-given command to create a topic named “sampleTopic.”

$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sampleTopic
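Note that the --zookeeper option of kafka-topics.sh was deprecated in Kafka 2.2 and removed in Kafka 3.0, and it does not apply at all if you started Kafka with KRaft as described above. On those versions, create the topic by pointing the script at the broker instead:

$ bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic sampleTopic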

Step 3: Next, It’s Time To Create Kafka Console Producer.

Run the command below to start a Kafka Console Producer that writes to sampleTopic (on newer Kafka versions, --broker-list is deprecated in favor of --bootstrap-server):

$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic sampleTopic

Step 4: Create a Kafka Console Consumer.

Run the command below to start a Kafka Console Consumer that is subscribed to sampleTopic.

$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic sampleTopic --from-beginning

Step 5: At Last, Send Messages.

Now, you can start sending messages from the producer. As soon as you send a message, the consumer starts receiving it via the Kafka topic.
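For example, typing a couple of lines at the producer prompt started in Step 3 (the text here is only illustrative) should echo them in the consumer terminal from Step 4:

>Hello from the console producer
>This is my second message

The consumer terminal prints both lines as they arrive.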


Some behaviors to keep in mind while using the kafka-console-producer.sh command:

  • By default, messages are sent with a null key (see Step 7 above for how to set keys).
  • If the targeted topic does not already exist, Kafka can automatically create it under certain conditions.
  • Specifically, if you specify a non-existent topic name and automatic topic creation is enabled on the broker, a new topic with that name will be created using the default number of partitions and replication factor configured in the Kafka cluster, as the sketch below illustrates.
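Here is a minimal sketch of the auto-creation behavior, assuming auto.create.topics.enable is left at its default of true on the broker; the topic name brand-new-topic is just a placeholder:

$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic brand-new-topic
>first message
>^C
$ bin/kafka-topics.sh --list --bootstrap-server localhost:9092

The list should now include brand-new-topic, created with the cluster’s default number of partitions and replication factor.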

How can messages be generated from a file using the Kafka Console Producer CLI?

Let’s take an example file, kafka-console.txt (ensure that every message is on its own line).

This is a test
This is a test

Produce messages to the topic from the file (note the input redirection at the end of the command):

kafka-console-producer.sh --bootstrap-server localhost:9092 --topic first_topic < kafka-console.txt
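To confirm that the file’s contents reached the topic, you can optionally read first_topic back from the beginning:

kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic first_topic --from-beginning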

Use Cases for Kafka Console Producer

1. Testing

For this example, let us assume that you have deployed an Apache Kafka cluster (along with Confluent Schema Registry, which the Avro console clients rely on) and want to use Avro for serializing data. How will you make sure that your Avro setup works?

Let’s begin by sending a message using the Kafka Avro console producer.

$ kafka-avro-console-producer \
  --bootstrap-server localhost:9092 \
  --topic example-topic \
  --property value.schema='{"type":"record","name":"random_record","fields":[{"name":"hello","type":"string"}]}'
>{"hello": "world"}
>^C

Now, using the Kafka Avro console consumer, we can verify that the message made it through.

$ kafka-avro-console-consumer \
  --bootstrap-server localhost:9092 \
  --topic example-topic \
  --property value.schema='{"type":"record","name":"random_record","fields":[{"name":"hello","type":"string"}]}' \
  --from-beginning
{"hello": "world"}
Processed a total of 1 messages

2. Backfill Data

For this example, let’s consider that you have a backlog of orders for a particular product that was supposed to go to a topic called orders. However, that batch of orders never made it into the topic. You now have a CSV file named backorder.csv that looks like this:

order_id,product_id,product_name,client_id,client_name
201,1242,54” TV,540,Michael Cassio
202,3245,Vacuum,354,Jan Zizka
203,3245,Vacuum,203,Carlos Castillo Armas
204,9812,Remote control,540,Michael Cassio
......

This file goes on for thousands of lines, so entering these records into the console producer manually is impractical. Instead, the Kafka Console Producer can load data directly from a file. First, prepare the data by removing the header:

$ tail -n +2 backorder.csv > prepared_backorder.csv

Load the data into the console producer using the following command:

$ kafka-console-producer \
    --topic orders \
    --bootstrap-server localhost:9092 \
    < prepared_backorder.csv

Your backlog is now loaded into the orders topic.
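If you want to verify the backfill, one simple check is to consume the orders topic from the beginning and confirm the CSV rows are present:

$ kafka-console-consumer \
    --topic orders \
    --bootstrap-server localhost:9092 \
    --from-beginning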

Top Strategies Kafka Developers must know when Processing Data on the Kafka Console Producer Platform

Many Fortune 500 firms use Apache Kafka as an Event Streaming platform. Kafka has many features that make it the de facto standard for Event Streaming platforms. In this part, you’ll learn about some of the most important strategies to remember when dealing with Kafka Console Producer.

1) Understand Message Delivery, Acknowledgment & Durability


The Kafka Producer has the acks configuration parameter for data durability. The acks parameter determines how many acknowledgments the producer must get before a record is considered delivered to the broker. The following are the possibilities offered:

  • acks=0 (none): The producer considers the records successfully delivered as soon as it transmits them to the broker. This is a “fire and forget” strategy.
  • acks=1 (one): The producer waits for the lead broker (the partition leader) to confirm that the record has been written to its log.
  • acks=all: The producer waits until the lead broker confirms that the record has been written to the logs of all in-sync replicas.

When acks is set to all, the lead broker will not try to append the record to its log if the number of in-sync replicas is below the configured min.insync.replicas. In that case the leader raises a NotEnoughReplicasException or a NotEnoughReplicasAfterAppendException, forcing the producer to retry the write. Since having replicas out of sync with the leader is undesirable, the producer will keep retrying until the delivery timeout is reached. You can strengthen the durability of your data by configuring min.insync.replicas and the producer acks to operate together in this fashion.
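As a rough illustration of these settings working together (the topic name orders and the broker address are placeholders for your own cluster), you can raise min.insync.replicas on a topic with kafka-configs and then make the console producer wait for all in-sync replicas through a producer property:

kafka-configs --alter --entity-type topics --entity-name orders --add-config min.insync.replicas=2 --bootstrap-server localhost:9092
kafka-console-producer --topic orders --bootstrap-server localhost:9092 --producer-property acks=all

With this combination, a write is only acknowledged once it has been replicated to at least two in-sync replicas; on a single-broker setup like the Docker example above, such a write would fail with a NotEnoughReplicas error.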

2) Explore the Sticky Partitioner in the Producer API


The Kafka Producer and Consumer APIs have introduced several new capabilities in recent years that every Kafka developer should be aware of. Instead of employing a round-robin technique per record, the Sticky Partitioner assigns records that have no key to the same partition until the current batch is dispatched. After delivering a batch, the Sticky Partitioner moves on to a different partition for the following batch.

You will submit fewer produce requests if you use the same partition until a batch is full or otherwise completed. This helps to decrease the load on the request queue and reduces system latency. It’s worth noting that the Sticky Partitioner still ensures that records are distributed evenly. As the Partitioner distributes a batch to each partition, the even distribution happens over time. It’s similar to a “per-batch” round-robin or “eventually even” strategy.
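A simple way to observe this from the command line (the topic orders-multi is hypothetical and not part of the steps above, and print.partition requires a reasonably recent Kafka release): create a topic with several partitions, produce a handful of keyless records, and print the partition of each record on the consumer side. Records produced close together should land in the same partition, with the partition changing from batch to batch:

kafka-topics --create --topic orders-multi --partitions 3 --replication-factor 1 --bootstrap-server localhost:9092
kafka-console-producer --topic orders-multi --bootstrap-server localhost:9092
kafka-console-consumer --topic orders-multi --from-beginning --property print.partition=true --bootstrap-server localhost:9092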

3) Master the Command Line Tools

The bin directory in the Apache Kafka binary installation contains various utilities. Apart from the Kafka Console Producer covered in this article, you should be familiar with console-consumer, dump-log, and other commands in that directory.

  • Kafka Console Consumer: You can consume records from a Kafka Topic straight from the command line using the Kafka Console Consumer. When developing or debugging, starting a consumer immediately can be quite useful. Simply execute the following command to verify that your producer application is delivering messages to Kafka Console Consumer:
kafka-console-consumer --topic <topic-name> --bootstrap-server <broker-host:port>
  • Dump Log: When working with Kafka, you may need to manually analyze the underlying logs of a Topic from time to time. The kafka-dump-log command is your buddy whether you’re merely interested in Kafka internals or you need to troubleshoot a problem and validate the content. Here’s a command for viewing the log of an example Topic:
kafka-dump-log \
  --print-data-log \
  --files ./var/lib/kafka/data/example-0/00000000000000000000.log

Kafka provides various other features and capabilities for the Kafka Console Producer and Consumer. To explore more strategies, read Top Things Every Apache Kafka Developer Should Know when Processing Data on the Kafka Console Producer and other Kafka Platforms.

Conclusion

This article helped you understand the Kafka Console Producer. Along the way, you discovered some good strategies and capabilities that Kafka offers its developers for the Kafka Console Producer and Consumer.


However, when streaming data from various sources to Apache Kafka or vice versa, you might face various challenges. With increased data volume, Kafka configuration and maintenance may become complex. If you are facing these challenges and are looking for some solutions, then check out a simpler alternative like Hevo. Sign up for Hevo’s 14-day free trial and experience seamless data migration.

Feel free to share your working experience with Kafka Console Producer with us in the comments section below!

FAQs

What is the use of Kafka console producer?

The Kafka console producer is a command-line tool used to send messages to a Kafka topic. It allows users to quickly publish data to Kafka for testing or debugging purposes without writing custom code.

Does Kafka have a console?

Yes, Kafka has a console interface that includes various command-line tools, such as the Kafka console producer and Kafka console consumer.

What is a Kafka producer?

A Kafka producer is a client application that sends data (messages) to Kafka topics. Producers are responsible for publishing records to Kafka, and they can choose which topic to send the data to, as well as manage message serialization and partitioning.

Shubhnoor Gill
Research Analyst, Hevo Data

Shubhnoor is a data analyst with a proven track record of translating data insights into actionable marketing strategies. She leverages her expertise in market research and product development, honed through experience across diverse industries and at Hevo Data. Currently pursuing a Master of Management in Artificial Intelligence, Shubhnoor is a dedicated learner who stays at the forefront of data-driven marketing trends. Her data-backed content empowers readers to make informed decisions and achieve real-world results.