Building the Kafka Python Client: Easy Steps & Working 101


Kafka is a Distributed Event Streaming platform capable of storing and processing trillions of messages or Data Streams per day. Users can tap into this instantaneous, real-time data to build event-driven applications. This article walks you through setting up a Kafka environment and using Python as the interface to it.

The Kafka Ecosystem consists of various Clusters that run a set of Kafka Servers or Brokers to store and manage Real-time Event Data. Usually, users separately create Producer and Consumer consoles using command-line prompts to publish and consume messages, respectively.

However, thanks to Kafka's extensive worldwide user community, you can use the Python programming language to build end-to-end Kafka Python Client Applications. To build a Kafka Python Client, you use one of the many available libraries that simplify producing and consuming messages.

In this article, you will learn about Apache Kafka and how to install and use a Kafka Python client library to build a simple end-to-end Kafka application.

Prerequisites

  • Understanding of Streaming Data.

What is Apache Kafka?

Originally developed by LinkedIn, Apache Kafka is an Open-source and Distributed Stream Processing platform that collects, stores, organizes, and manages real-time data. In other words, Apache Kafka is a Message Streaming Platform that allows you to develop event-driven applications using real-time streaming information.

It can stream trillions of events per day, which can be further processed and analyzed to develop Data-driven or Event-driven Applications. Apache Kafka is also known as a Publish-Subscribe Messaging Service because it allows Producers (Data Publishers) and Consumers (Data Subscribers) to write and read messages (data) to and from the Apache Kafka Servers based on their use cases or requirements.

The Kafka Ecosystem comprises a Distributed and Dedicated set of Servers working together to effectively store and manage the incoming real-time data. Because its distributed nature allows it to achieve very high throughput, Kafka is used by some of the most prominent companies in the world.

According to the Enlyft report, more than 20,500 organizations, including Fortune 500 companies like Uber, Airbnb, and Netflix use Apache Kafka for event streaming activities.

How to Set up the Apache Kafka Environment?

Before working with Kafka-Python, ensure that you have installed Kafka and Zookeeper to start Kafka’s End-to-End Streaming Environment. 

A) Understand the Prerequisites

As Apache Kafka is built on Java Virtual Machine (JVM) languages like Scala and Java, you should confirm that Java 8+ is installed and running on your computer. Further, set the PATH and JAVA_HOME environment variables so that your operating system can locate the Java utilities, making Apache Kafka work with the Java Runtime Environment (JRE).

After these prerequisites are satisfied, you have to set up and run the Kafka Server to act as a Standalone broker for storing and managing messages collected from Producers. 

B) Start your Zookeeper Instance

You have to start a Zookeeper Instance along with the Apache Kafka Server because Kafka depends on Zookeeper for storing metadata about Brokers, Topics, and Partitions. Zookeeper also acts as an orchestration or coordination service for administering the status of each Broker within the Kafka distributed system.

For these reasons, starting and running Kafka and Zookeeper Instances on your local machine is mandatory before proceeding with the further steps to build the end-to-end Kafka Python client.

Open a new command prompt terminal and run the following command to start the Zookeeper instance:

bin/zookeeper-server-start.sh config/zookeeper.properties

After executing the above command, the terminal prints the Zookeeper startup logs, indicating that the instance is running.

C) Start your Apache Kafka Server

The next step is to start and run the Kafka Server. Open another command prompt terminal and execute the following command:

bin/kafka-server-start.sh config/server.properties

After executing the above command, the command terminal displays a success message like “started kafka.server.KafkaServer.”

Usually, in the Apache Kafka environment, the Zookeeper instance is bound to localhost port 2181, while the Kafka Server runs on localhost port 9092.
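
Before moving on, you can optionally confirm from Python that both services are listening. Below is a minimal sketch using Python's standard socket module; it assumes the default local setup described above:

import socket

# Default local ports: Zookeeper on 2181, the Kafka broker on 9092
for name, port in [("Zookeeper", 2181), ("Kafka", 9092)]:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        status = s.connect_ex(("localhost", port))  # 0 means the port accepted the connection
        print(f"{name} on port {port}: {'reachable' if status == 0 else 'not reachable'}")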

D) Create a New Topic in Apache Kafka

On executing the above steps, you have successfully started the Zookeeper and Kafka instances on your local machine. Ensure that you do not close or terminate either of the command terminals currently running the Kafka and Zookeeper instances.

Now, you have to create a new Topic in the Kafka Server; Topics store and organize all messages published by Kafka Producers. You will use this newly created Topic later for building the Kafka Python client in the end-to-end environment.

To create a new Topic in Kafka, open a new command terminal and execute the following command.

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test_topic

The above command creates a new topic, “test_topic,” with a single partition and a replication factor of one. Producers and Consumers can use this topic name to send and receive messages to and from the Kafka Servers while building a Kafka Python client. Note that in Kafka 2.2 and newer, topic creation uses the --bootstrap-server localhost:9092 flag instead of --zookeeper localhost:2181; the --zookeeper option was removed entirely in Kafka 3.0.
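
Alternatively, once the kafka-python library (installed later in this article) is available, you can create topics programmatically instead of through the shell script. A minimal sketch, assuming a broker running on localhost:9092:

from kafka.admin import KafkaAdminClient, NewTopic

# Connect to the local broker and create the topic from Python
admin = KafkaAdminClient(bootstrap_servers='localhost:9092')
admin.create_topics([NewTopic(name='test_topic', num_partitions=1, replication_factor=1)])
admin.close()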

E) Test your Apache Kafka Environment

Now, you have to test whether you have correctly started the Kafka and Zookeeper instances and properly created the Topic. For testing purposes, you can start the Kafka Producer console for writing or publishing messages to Brokers, and the Kafka Consumer console for reading or fetching messages from Brokers.

You can start the Producer Console for writing messages into Kafka Brokers. Open a new command terminal and execute the following command. 

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test_topic

Now, start the Consumer Console for reading messages from the Kafka brokers. Open a new command prompt and execute the following command.

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test_topic --from-beginning

After executing the above steps, both the Producer and Consumer consoles of Kafka are up and running. Any message you type into the Producer console should appear in the Consumer console, which confirms that the Kafka Topic was created and the Kafka and Zookeeper instances are running successfully.

Now, you are all set to build and implement a Kafka Python Client for producing and consuming messages to and from the Kafka Servers by writing Python code in your desired IDE or framework.

Simplify Kafka ETL with Hevo’s No-code Data Pipeline

Hevo Data, a No-code Data Pipeline, helps load data from any data source such as Databases, SaaS applications, Cloud Storage, SDKs, and Streaming Services and simplifies the ETL process. It supports 150+ Data Sources including Apache Kafka, Kafka Confluent Cloud, and 40+ Free Sources.

You can use Hevo’s Data Pipelines to replicate the data from your Apache Kafka Source or Kafka Confluent Cloud to the Destination system. It loads the data onto the desired Data Warehouse/Destination and transforms it into an analysis-ready form without having to write a single line of code.

Hevo’s fault-tolerant and scalable architecture ensures that the data is handled in a secure, consistent manner with zero data loss and supports different forms of data. Hevo supports two variations of Apache Kafka as a Source. Both these variants offer the same functionality, with Confluent Cloud being the fully-managed version of Apache Kafka.

GET STARTED WITH HEVO FOR FREE

Check out why Hevo is the Best:

  • Secure: Hevo has a fault-tolerant architecture that ensures that the data is handled securely and consistently with zero data loss.
  • Schema Management: Hevo takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the destination schema.
  • Minimal Learning: Hevo, with its simple and interactive UI, is extremely simple for new customers to work on and perform operations.
  • Hevo Is Built to Scale: As the number of sources and the volume of your data grows, Hevo scales horizontally, handling millions of records per minute with very little latency.
  • Incremental Data Load: Hevo allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
  • Live Monitoring: Hevo allows you to monitor the data flow and check where your data is at a particular point in time.
SIGN UP HERE FOR A 14-DAY FREE TRIAL!

How to Install the Kafka Python Library?

Multiple libraries are available in the Python programming language that act as Kafka clients for building Kafka Applications. Such libraries provide you with the required methods to produce, stream, and consume messages from Kafka.

You can install such libraries globally or within your virtual environment according to your use case. The kafka-python client library, for example, provides KafkaProducer and KafkaConsumer classes.

With the KafkaProducer class, you can produce or publish messages into the Kafka server using Python code. Similarly, you can consume or fetch messages from the Kafka server with the KafkaConsumer class. These classes create the Producer and Consumer consoles of Kafka directly from Python instead of using the command-line tools.

Some of the libraries for building a Python Kafka client are kafka-python, pykafka, and confluent-kafka. Each library has its own features, functionality, and performance characteristics. According to your use cases and requirements, you can choose the appropriate library to maximize Kafka's throughput.

Now, you are ready to install the respective Python library for building Kafka applications. Before installing the Kafka-Python client library, you should satisfy some prerequisites.

Ensure that you have downloaded and installed Python 3.6 or later. You should also ensure that the pip package manager is installed alongside Python. Pip is a Python package management tool that allows users to install and manage software packages or libraries from open-source or online repositories.

For executing Python code, you should have an Integrated Development Environment (IDE) like Visual Studio Code, IntelliJ, or Jupyter Notebook to run Python code for building a Kafka Python client.

Since there are various Kafka Python client libraries, this article focuses on kafka-python, an open-source Python client for building Kafka applications. To install it, run the following command:

pip install kafka-python

However, if you are running Python code in a Jupyter Notebook, you should restart the kernel after installation so that the kafka-python library imports successfully.

On executing the above command, you have successfully installed the kafka-python client library.
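
You can verify the installation by importing the package and printing its version string:

import kafka

# A successful import with a version string confirms the installation
print(kafka.__version__)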

How to Build the Kafka Python Client?

In the below steps, you will learn how to Produce and Consume messages using Python code:

Step 1: Open your IDE and import the necessary classes from the library to build the producer and consumer consoles for producing and receiving simple messages. Initially, you can create a Consumer console using Python, which receives all messages sent from the producer. You can name the Python file for creating the Kafka consumer “consumer.py”.

Step 2: Execute the code given below to start the Kafka Consumer console.

from kafka import KafkaConsumer

# Subscribe to the topic; by default, the client connects to a broker on localhost:9092
consumer = KafkaConsumer('test_topic')

# Block and print each incoming message as it arrives
for message in consumer:
    print(message)
  • In the above code, “KafkaConsumer” is the client object responsible for consuming or receiving messages from producers.
  • “test_topic” is the topic from which the consumer receives the producer's messages. In practice, you will often pass a few extra options to the consumer, as shown in the sketch below.
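
The consumer above relies entirely on kafka-python's defaults. Below is a sketch with commonly used parameters; the group name and deserializer here are illustrative choices, not requirements:

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    'test_topic',
    bootstrap_servers='localhost:9092',              # explicit broker address
    auto_offset_reset='earliest',                    # read from the oldest message if no offset is stored
    group_id='demo-group',                           # consumer group used for offset tracking (illustrative)
    value_deserializer=lambda v: v.decode('utf-8'),  # convert raw bytes to str
)

for message in consumer:
    print(message.topic, message.offset, message.value)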

Step 3: After executing the above code, your consumer is ready to receive messages from the producer. Now, execute the code below to create a Producer console using Python. You can name the Python file for creating the Kafka producer “producer.py”.

from kafka import KafkaProducer

Step 4: After importing the desired classes, you have to instantiate the Kafka producer. Execute the following code.

producer = KafkaProducer(bootstrap_servers='localhost:9092')
  • In the above code, “KafkaProducer” is the client object responsible for publishing or producing messages into the Kafka server. The “bootstrap_servers='localhost:9092'” parameter sets the host and port the producer uses to fetch the metadata of the Kafka cluster. This parameter is optional because the client defaults to localhost:9092. A variation for sending structured data is sketched below.
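
If you plan to send structured data rather than raw bytes, KafkaProducer also accepts a value_serializer. A minimal sketch for JSON messages (the dictionary shown is just sample data):

import json
from kafka import KafkaProducer

# Serialize Python objects to JSON-encoded bytes before sending
producer = KafkaProducer(
    bootstrap_servers='localhost:9092',
    value_serializer=lambda v: json.dumps(v).encode('utf-8'),
)
producer.send('test_topic', {'greeting': 'Hello, World!'})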

Now, you are ready to send messages to the Consumer console. You can send any sample message from the producer side, i.e., the “producer.py” file. Execute the command given below to send the simple statement “Hello, World!” to the Kafka Consumer.

producer.send('test_topic', b'Hello, World!')

By executing the above command, you send a “Hello, World!” message to the topic called “test_topic.” The Kafka consumer fetches the message from the same topic. Now, when you run the “consumer.py” file that acts as the consumer, you can see the “Hello, World!” message sent from the Kafka producer using Kafka Python.
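
One detail worth knowing: send() in kafka-python is asynchronous and returns a future, so a short script can exit before the message actually reaches the broker. You can block on the result or flush pending messages before exiting; a sketch:

# send() returns a future; get() blocks until the broker acknowledges the message
future = producer.send('test_topic', b'Hello, World!')
record_metadata = future.get(timeout=10)
print(record_metadata.topic, record_metadata.partition, record_metadata.offset)

# Alternatively, flush all pending messages before the script exits
producer.flush()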

With this method, you successfully built a simple Kafka Python Client to send and receive messages using Python code!

Conclusion

In this article, you have learned how to use Kafka in Python: sending a simple message from the Python Producer console and instantly receiving it in the Python Consumer console. Using the Kafka Python Client, you can likewise send larger, high-volume message streams from producers to consumers, with the Kafka Servers acting as a mediator.

As this article focuses only on the kafka-python library for building Kafka Python client applications, you can also try other libraries like “pykafka” and “confluent-kafka” to compare each library's performance and throughput.

Extracting complicated data from Apache Kafka, on the other hand, can be difficult and time-consuming. If you're having trouble with this and want to find a solution, Hevo Data is a good place to start!

VISIT OUR WEBSITE TO EXPLORE HEVO

Hevo Data is a No-Code Data Pipeline that offers a faster way to move data from 150+ Data Sources including Apache Kafka, Kafka Confluent Cloud, and 40+ Free Sources, into your Data Warehouse to be visualized in a BI tool. You can use Hevo’s Data Pipelines to replicate the data from your Apache Kafka Source or Kafka Confluent Cloud to the Destination system. Hevo is fully automated and hence does not require you to code.

Want to take Hevo for a spin? SIGN UP for a 14-day Free Trial and experience the feature-rich Hevo suite first hand. You can also have a look at the unbeatable Hevo Pricing that will help you choose the right plan for your business needs.

Share your experience of learning about Building the Kafka Python Client in the comments section below!

Freelance Technical Content Writer, Hevo Data

Ishwarya has experience working with B2B SaaS companies in the data industry, and her passion for data science drives her to produce informative content that helps individuals understand the intricacies of data integration and analysis.
