How to Use the Kafka Console Producer: 8 Easy Steps

• January 21st, 2022


Have you been looking for the simplest way to write and read messages from Kafka? In this article, you will discover how to use the Kafka Console Producer to write records to a Kafka Topic straight from the command line. When your applications aren’t yet producing data to a Topic, sending messages from the command line is a quick way to test new applications.

So what are you waiting for? Read along to learn how the Kafka Console Producer works, its key features, and the easy steps to get started with it. Toward the end of this article, you will also explore the best strategies to keep in mind while working with Kafka Console Producers and Consumers.


What is Apache Kafka?


Apache Kafka is a Distributed Event Streaming solution that enables applications to efficiently manage billions of events. The Java and Scala-based framework supports a Publish-Subscribe Messaging system that accepts Data Streams from several sources and allows real-time analysis of Big Data streams. It can quickly scale up with minimal downtime. 

Kafka’s global appeal has grown as a result of its low data redundancy and high fault tolerance. More than 60% of the Fortune 100 organizations use Kafka. Kafka was created by LinkedIn, which utilizes it to monitor activity data and operational analytics. It’s used by Twitter as part of Storm’s stream processing architecture. Moreover, Square employs Kafka as a bus to transport all system events to multiple Square data centers as well as outputs to Splunk and Graphite. Uber, Goldman Sachs, Netflix, PayPal, Box, Spotify, Cisco, and a plethora of other organizations use Kafka.

Key Features of Apache Kafka

Kafka is a trusted platform for enabling and developing businesses. Let’s have a look at some of the powerful features which make Kafka so popular:

  • High Scalability: Kafka’s partitioned log model distributes data over several servers, allowing it to scale beyond the capabilities of a single server. Because data streams are split across partitions and handled in parallel, Kafka delivers low latency and high throughput.
  • Fault-Tolerant & Durable: By distributing partitions and replicating data over several servers, Kafka protects data from server failure and makes it fault-tolerant. If a server fails, Kafka can recover automatically from the replicated data.
  • Robust Integrations: Kafka supports various third-party connectors. It also offers many APIs. Hence, you can add more features in a matter of seconds. Take a look at how you can use Kafka with Elasticsearch, Cassandra, and Snowflake.
  • Comprehensive Analysis: Kafka is a popular solution for tracking operational data. It enables you to collect data from several platforms in real-time and organize it into consolidated feeds, along with metrics for monitoring. Refer to the Real-time Reporting with Kafka Analytics article for further information on how to analyze your data in Kafka.

Want to explore more about Apache Kafka? You can visit the Kafka website or refer to the Kafka documentation.

What is Kafka Console Producer?


Kafka Console Producer (kafka-console-producer) is one of the utilities that comes with Kafka packages. It is used to write data to a Kafka Topic using standard input or the command line. When you type anything into the console, kafka-console-producer writes it to the cluster. Topics are made up of Partitions, and Producers write their data to these Partitions. You can run the Kafka Console Producer by using the following command:

kafka-console-producer --topic <topic-name> \
                       --broker-list <broker-host:port>

Kafka supports 2 types of Producers:

  • Sync Producers: These send each message directly and synchronously, waiting for the broker’s response before sending the next one.
  • Async Producers: These collect messages in the background and send them in batches, which achieves greater throughput at the cost of slightly higher latency (see the sketch after this list).
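As a rough illustration, the console producer sends messages asynchronously by default, while passing the --sync flag makes it wait for each send to complete before accepting the next one. The broker address and topic name below are only placeholders for this sketch:

kafka-console-producer --topic test-topic \
  --bootstrap-server localhost:9092 \
  --sync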

Key Features of Kafka Console Producer

The idempotency of the Kafka Console Producer improves delivery semantics from at-least-once to exactly-once delivery. It also supports a transactional mode, which lets an application send messages to multiple Partitions of a Kafka Topic atomically. Let’s explore other powerful features of Kafka Console Producer:

  • Thread-Safe: Each Kafka Console Producer has a buffer space pool where records that haven’t yet been transferred to the server are stored. A background I/O thread delivers these records to the cluster as requests.
  • Durable: The acknowledgments (acks) define the conditions under which a request is regarded as complete. Kafka Console Producers offer 3 types of acknowledgments (acks), which are discussed further in this article (see also the configuration example after this list).
  • Scalable: For each Kafka Partition, the Producer keeps a buffer of unsent records. These buffers are sent according to the configured batch size, which lets the producer handle a high number of messages at once. To read more about partitions, refer to Kafka Partitions Guide.
  • Fault-Tolerant: When a broker node fails, the producer can automatically retry sending its buffered records, so delivery recovers as soon as the cluster is healthy again.
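Settings such as acks, batch.size, and linger.ms can be passed to the console producer through the --producer-property flag. The following is only an illustrative sketch; the broker address and the values chosen here are placeholders, not recommendations:

kafka-console-producer --topic <topic-name> \
  --bootstrap-server localhost:9092 \
  --producer-property acks=all \
  --producer-property batch.size=32768 \
  --producer-property linger.ms=50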

Read along to learn how you can leverage Kafka Console Producer to send messages to the Kafka Console Consumer.

Simplify Kafka ETL and Data Analysis with Hevo’s No-code Data Pipeline

Hevo Data, a No-code Data Pipeline, helps load data from any data source such as Databases, SaaS applications, Cloud Storage, SDKs, and Streaming Services and simplifies the ETL process. It supports 100+ Data Sources including Apache Kafka, Kafka Confluent Cloud, and other 40+ Free Sources. You can use Hevo Pipelines to replicate the data from your Apache Kafka Source or Kafka Confluent Cloud to the Destination system. It loads the data onto the desired Data Warehouse/destination and transforms it into an analysis-ready form without having to write a single line of code.

Hevo’s fault-tolerant and scalable architecture ensures that the data is handled in a secure, consistent manner with zero data loss and supports different forms of data. Hevo supports two variations of Kafka as a Source. Both these variants offer the same functionality, with Confluent Cloud being the fully-managed version of Apache Kafka.

GET STARTED WITH HEVO FOR FREE

Check out why Hevo is the Best:

  • Secure: Hevo has a fault-tolerant architecture that ensures that the data is handled securely and consistently with zero data loss.
  • Schema Management: Hevo takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the destination schema.
  • Minimal Learning: Hevo, with its simple and interactive UI, is extremely simple for new customers to work on and perform operations.
  • Hevo Is Built To Scale: As the number of sources and the volume of your data grows, Hevo scales horizontally, handling millions of records per minute with very little latency.
  • Incremental Data Load: Hevo allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
  • Live Monitoring: Hevo allows you to monitor the data flow and check where your data is at a particular point in time.

Simplify your ETL & Data Analysis with Hevo today! 

SIGN UP HERE FOR A 14-DAY FREE TRIAL!

Easy Steps to Get Started with Kafka Console Producer Platform


So are you eager to get started with Kafka and want to rapidly create and consume some simple messages? In this section, you will learn how to send and receive messages from the command line. Follow the steps below to work with Kafka Console Producer and produce messages:

Step 1: Set Up your Project

First, create a new directory at your desired location using the following command:

mkdir console-consumer-producer-basic && cd console-consumer-producer-basic

Now, you need to set up a docker-compose.yml file to get the Confluent platform running. You can simply copy-paste the script below into docker-compose.yml:

---
version: '2'

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-kafka:6.2.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0

Next, launch your Confluent platform using the following command:

docker-compose up -d
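Optionally, you can confirm that both the zookeeper and broker containers are up before moving on. This is just a quick sanity check:

docker-compose ps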

Step 2: Create the Kafka Topic

After starting the Kafka and Zookeeper services on the Confluent platform, let’s create a Kafka Topic. Enter the following command:

docker-compose exec broker kafka-topics --create --topic orders --bootstrap-server broker:9092
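If you want to verify that the topic was created, you can describe it (an optional check):

docker-compose exec broker kafka-topics --describe --topic orders --bootstrap-server broker:9092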

Step 3: Start a Kafka Console Consumer

Now, you need to set up the consumer to read the records sent to the topic created above. So, continue in the same terminal and enter the following command to open a terminal on the broker container:

docker-compose exec broker bash

Now, enter the following command, within this new terminal to start the Kafka Console Consumer:

kafka-console-consumer \
  --topic orders \
  --bootstrap-server broker:9092

Step 4: Produce your Records using Kafka Console Producer

Now that your Kafka Console Consumer is running, let’s publish some records using the Kafka Console Producer. So, open a new terminal and enter the following command to open another shell on the broker container:

docker-compose exec broker bash

In the new terminal that opens, enter the following command to run your Kafka Console Producer:

kafka-console-producer \
  --topic orders \
  --bootstrap-server broker:9092

Wait a few seconds for your Kafka Console Producer to start. Then enter some strings, which will be treated as records, as shown below:

hevo
is
a
no code
data pipeline

Send all the records and check the consumer window. You will see the same output. Once you have received all the records from Kafka Console Producer, you can press Ctrl+C keys to stop your consumer.

Step 5: Send New Records from Kafka Console Producer

You can observe that, since the Kafka Consumer was already running, you received the incoming records easily. However, what if there were some records published before the Kafka Consumer started? To consume all of those records as well, you can use the --from-beginning flag.

So, go back to your Kafka Console Producer and send a few records as shown below:

you are learning
to stream
all records
using Kafka Console Producer

Step 6: Start a New Consumer

After sending these records, start your Kafka Consumer again and enter the following command:

kafka-console-consumer \
  --topic orders \
  --bootstrap-server broker:9092 \
  --from-beginning

Wait for a few seconds for Kafka Consumer to start. The following output will be displayed:

hevo
is
a
no code
data pipeline
you are learning
to stream
all records
using Kafka Console Producer

Once you have received all the records, you can close this consumer terminal using the Ctrl+C keys.
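If you only want to read a fixed number of records and have the consumer exit on its own, the console consumer also accepts a --max-messages flag. The count below is purely illustrative:

kafka-console-consumer \
  --topic orders \
  --bootstrap-server broker:9092 \
  --from-beginning \
  --max-messages 5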

Step 7: Produce Records with Key-Value Pairs

If you have been working with Kafka, you might know that Kafka works with Key-Value pairs. In the previous steps, you have only sent records containing values, so the keys for all of those records will be null. Let’s see how you can enter some valid keys. Before you begin, make sure you close the previously running Kafka Console Producer using the Ctrl+C keys.

Now, start a new Kafka Console Producer using the following command:

kafka-console-producer \
  --topic orders \
  --bootstrap-server broker:9092 \
  --property parse.key=true \
  --property key.separator=":"

After the Kafka Console Producer starts, enter the following Key-Value pairs:

key1:programming languages
k1:python
k2:java

Step 8: Start a Consumer to display Key-Value Pairs

After sending the above records, start a new Kafka Console Consumer using the following command:

kafka-console-consumer \
  --topic orders \
  --bootstrap-server broker:9092 \
  --from-beginning \
  --property print.key=true \
  --property key.separator=":"

Wait for a few seconds, your Consumer will start and display the following output:

null:hevo
null:is
null:a
null:no code
null:data pipeline
null:you are learning
null:to stream
null:all records
null:using Kafka Console Producer
key1:programming languages
k1:python
k2:java

You can observe that the records that were entered without keys have their keys set to null. You can also observe that all the records from the beginning are being displayed. This was achieved using the --from-beginning flag.

Great Work! You have gained a basic understanding of how you can use Kafka Console Producer and Consumer to your advantage. To shut down the Docker containers, you can use the docker-compose down command, as shown below.
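Run this command from the directory that contains your docker-compose.yml file:

docker-compose down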

Create a Kafka Console Consumer


In this section, you will learn how to create a Kafka Consumer using Kafka’s console interface. To create a Kafka Producer and Consumer this way, you use the “bin/kafka-console-producer.sh” and “bin/kafka-console-consumer.sh” scripts present in the Kafka directory. Follow the steps below:

Step 1: Start Up Zookeeper and the Kafka Cluster.

First, navigate to the root of the Kafka directory and run the following commands, each in a separate terminal, to start Zookeeper and the Kafka broker respectively.

$ bin/zookeeper-server-start.sh config/zookeeper.properties
$ bin/kafka-server-start.sh config/server.properties

Step 2: Now Create a Kafka Topic.

Run the below-given command to create a topic named “sampleTopic.”

$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sampleTopic
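Note that the --zookeeper flag belongs to older Kafka releases; on recent versions (2.2 and later) the topic is created against the broker instead. The equivalent command, assuming a broker on localhost:9092, would be:

$ bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic sampleTopic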

Step 3: Next, It’s Time To Create Kafka Console Producer.

Run the below-given command. It will start a Kafka Console Producer that writes to sampleTopic.

$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic sampleTopic

Step 4: Create a Kafka Console Consumer.

Run the below-given command. It will start a Kafka Console Consumer subscribed to sampleTopic.

$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic sampleTopic --from-beginning

Step 5: At Last, Send Messages.

Now, you can start sending messages from the producer terminal. As soon as you start sending out the messages, the consumer will start receiving them via the Kafka Topic, as illustrated below.
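For illustration, typing the following hypothetical messages at the producer prompt (the > character is printed by the console producer) makes them appear in the consumer terminal as they are sent:

> hello from the console producer
> this message arrives at the consumer
> so does this one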


Top Strategies Kafka Developers must know when Processing Data on the Kafka Console Producer Platform

Many Fortune 500 firms use Apache Kafka as an Event Streaming platform. Kafka has many features that make it the de-facto standard for Event Streaming platforms. In this part, you’ll learn about some of the most important strategies to keep in mind when dealing with Kafka Console Producer.

1) Understand Message Delivery, Acknowledgment & Durability


The Kafka Producer has the acks configuration parameter for data durability. The acks parameter determines how many acknowledgments the producer must get before a record is considered delivered to the broker. The following are the possibilities offered:

  • none (acks=0): When the producer transmits the records to the broker, it deems them effectively delivered without waiting for any response. This can be represented as a “fire and forget” strategy.
  • one (acks=1): The producer waits for the lead broker to confirm that the record has been written to its log.
  • all (acks=all): The producer waits for the lead broker and all in-sync follower brokers to confirm that the record has been successfully written to their logs.

With acks=all, the lead broker will not try to add the record to its log if the number of in-sync replicas is less than the configured minimum. In that case the leader raises a NotEnoughReplicasException or a NotEnoughReplicasAfterAppendException, forcing the producer to retry the write. Since having replicas out of sync with the leader is not good, the producer will continue to retry sending the records until the delivery timeout is reached. You can extend the durability of your data by configuring min.insync.replicas and the producer acks to work together in this fashion, as shown below.
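As a sketch of how the two settings combine, min.insync.replicas can be set per topic with the kafka-configs tool and acks=all passed to the console producer. The broker address and the topic name orders are placeholders here, and this only has an effect on topics whose replication factor is at least 2 (the single-broker tutorial cluster above would not qualify):

# Require at least 2 in-sync replicas before a write is acknowledged
kafka-configs --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name orders \
  --alter --add-config min.insync.replicas=2

# Produce with acks=all so each write waits for all in-sync replicas
kafka-console-producer --topic orders \
  --bootstrap-server localhost:9092 \
  --producer-property acks=all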

2) Explore the Sticky Partitioner in the Producer API


The Kafka Producer and Consumer APIs have introduced several new functionalities in recent years that every Kafka developer should be aware of. Instead of employing a round-robin technique per record, the Sticky Partitioner assigns records (those without keys) to the same partition until the current batch is dispatched. After delivering a batch, the Sticky Partitioner then switches to another partition for the following batch.

You will submit fewer produce requests if you use the same partition until a batch is full or otherwise completed. This helps to decrease the load on the request queue and reduces system latency. It’s worth noting that the Sticky Partitioner still ensures that records are distributed evenly. As the Partitioner distributes a batch to each partition, the even distribution happens over time. It’s similar to a “per-batch” round-robin or “eventually even” strategy.
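One way to see this behavior, sketched here with placeholder names and a local broker address, is to create a topic with several partitions and then consume from it while printing the partition each record landed on (the print.partition property is available in recent Kafka versions):

kafka-topics --create --topic partition-demo --partitions 3 --replication-factor 1 \
  --bootstrap-server localhost:9092

kafka-console-consumer --topic partition-demo \
  --bootstrap-server localhost:9092 \
  --from-beginning \
  --property print.partition=true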

3) Master the Command Line Tools

The bin directory in the Apache Kafka binary installation contains various utilities. Apart from the Kafka Console Producer covered in this article, you should be familiar with console-consumer, dump-log, and other commands in that directory.

  • Kafka Console Consumer: You can consume records from a Kafka Topic straight from the command line using the Kafka Console Consumer. When developing or debugging, being able to immediately start a consumer can be quite useful. Simply execute the following command to verify that your producer application is delivering messages to Kafka Console Consumer:
kafka-console-consumer --topic <topic-name> \
                       --bootstrap-server <broker-host:port>
  • Dump Log: When working with Kafka, you may need to manually analyze the underlying logs of a Topic from time to time. The kafka-dump-log command is your buddy whether you’re merely interested in Kafka internals or you need to troubleshoot a problem and validate the content. Here’s a command for viewing the log of an example Topic:
kafka-dump-log \
  --print-data-log \
  --files ./var/lib/kafka/data/example-0/00000000000000000000.log

Kafka provides various other features and capabilities for its Kafka Console Producer and Consumer. To explore other strategies, read the Top Things Every Apache Kafka Developer Should Know article, which covers processing data on the Kafka Console Producer and other Kafka platforms.

Conclusion

This article helped you understand Kafka Console Producer. You started by learning about Apache Kafka and its features. You also understood the key features of Kafka Console Producer and how you can leverage it to send messages easily with just a few lines of commands. At the end of this article, you discovered some good strategies and capabilities that Kafka offers to its Developers for Kafka Console Producer and Consumer.

However, when streaming data from various sources to Apache Kafka or vice versa, you might face various challenges. With the increase in data volume, Kafka configuration and maintenance may become complex. If you are facing these challenges and are looking for some solutions, then check out a simpler alternative like Hevo.

Hevo Data is a No-Code Data Pipeline that offers a faster way to move data from 100+ Data Sources including Apache Kafka, Kafka Confluent Cloud, and other 40+ Free Sources, into your Data Warehouse to be visualized in a BI tool. You can use Hevo Pipelines to replicate the data from your Apache Kafka Source or Kafka Confluent Cloud to the Destination system. Hevo is fully automated and hence does not require you to code. 

VISIT OUR WEBSITE TO EXPLORE HEVO

Want to take Hevo for a spin?

SIGN UP and experience the feature-rich Hevo suite first hand. You can also have a look at the unbeatable pricing that will help you choose the right plan for your business needs.

Feel free to share your experience of working with Kafka Console Producer with us in the comments section below!
