What is event-driven architecture?
An Event Driven Architecture (EDA) is a microservices-based architectural paradigm that has grown more prominent with the rise of Kafka event driven architecture. This is no coincidence: from a developer's standpoint, EDA provides an efficient way to link microservices, which helps in building future-proof systems.
Furthermore, when integrated with a robust streaming platform such as Apache Kafka, event-driven systems become more agile, durable, and efficient than earlier messaging approaches.
This article will help you understand the reasons behind the rising popularity of Event Driven Architecture. You will also learn about the basic components and the patterns most frequently used to build an Event-Driven Architecture.
Why Use Kafka for an Event-Driven System?
Before going further, it is critical to understand what the term “Event” really means. An Event is a record of “something that happened.” Whenever you read data from or transmit data to Kafka, you are working with Events. For example, when you visit a website and register or sign up, that signup is an Event with some associated data, which may contain all of the information required to complete the signup. An Event can also be thought of as a message containing data. Kafka is an Event-Streaming system that handles a continuous stream of such events. Event Streaming is the process of capturing data in the form of streams of events in real time from event sources such as databases, IoT devices, and others.
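As a rough illustration, such a signup Event could be modeled as a small record before being sent to Kafka; the field names below are illustrative, not a fixed Kafka schema:
from datetime import datetime, timezone

# A minimal, illustrative signup event: what happened, who it happened to,
# the associated data, and when it occurred.
signup_event = {
    "event_type": "user_signed_up",
    "user_id": "u-1024",
    "email": "ada@example.com",
    "occurred_at": datetime.now(timezone.utc).isoformat(),
}
print(signup_event)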
The Need for Kafka Event-Driven Architecture
The key rationale for leveraging Apache Kafka in an event-driven system is the decoupling of microservices and the creation of a Kafka pipeline connecting producers and consumers. Instead of polling for new data, a service can simply subscribe to the events it cares about and act on them. Kafka provides a scalable hybrid approach that incorporates both Processing and Messaging.
In an event-driven architecture, when something happens, the system captures it as an event notification. The application that receives the notification can either respond right away or wait until the status changes. This makes digital business systems more flexible, scalable, contextual, and responsive, which is why this architectural style has been gaining popularity.
Components of an Event-Driven Architecture
Event-driven architecture is mainly employed in real-time systems that must act on data changes and events as they happen, which makes it especially beneficial for IoT systems. Let's look at the basic components involved in building an Event-Driven Architecture; a minimal sketch follows the list below.
- Event: A significant change in the state of an object, typically triggered by a user action, is referred to as an Event.
- Event Handler: It is a software routine that handles event occurrences.
- Event Loop: The interaction between an Event and the Event Handler is handled by the Event Loop.
- Event Flow Layers: The event flow is made up of the following logical layers:
- Event Producer: They are responsible for detecting and generating Events.
- Event Consumer: They consume the events generated by the Event Producer.
- Event Channel: It is also called the Event Bus. It transfers events from the Event Producer to the Event Consumer.
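To make these components concrete, here is a minimal, framework-free Python sketch: a plain in-memory queue stands in for the Event Channel, and all function and event names are purely illustrative.
import queue

# Event Channel: a plain in-memory queue standing in for the event bus.
event_channel = queue.Queue()

# Event Producer: detects that something happened and emits an Event.
def produce_signup(user_id):
    event_channel.put({"type": "user_signed_up", "user_id": user_id})

# Event Handler: a software routine that reacts to one kind of event.
def handle_signup(event):
    print("Welcome email queued for " + event["user_id"])

HANDLERS = {"user_signed_up": handle_signup}

# Event Loop: mediates between Events and Event Handlers by pulling events
# off the channel (the Event Consumer side) and routing them to handlers.
def run_event_loop():
    while not event_channel.empty():
        event = event_channel.get()
        HANDLERS[event["type"]](event)

produce_signup("u-1024")
run_event_loop()  # prints: Welcome email queued for u-1024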
Patterns of Event-Driven Architecture
Kafka Event Driven Architecture can be implemented in a variety of ways. Here’s a short rundown of some of the most common Event Driven Architectural patterns that you can deploy using Apache Kafka:
1) Event Notification
This is a simple and direct model. A microservice merely broadcasts events to alert other systems of a change in its domain. For example, when a new user is created, a user account service could generate a notification event. Other services can use this information, or it can be disregarded. Notification events often contain little data, resulting in a loosely coupled system with less network traffic dedicated to messaging.
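For instance, such a thin notification could be published with kafka-python along these lines; the topic name and payload fields are assumptions for illustration:
from json import dumps
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092',
                         value_serializer=lambda x: dumps(x).encode('utf-8'))

# A thin notification event: just enough for other services to know
# that something happened and look up details themselves if needed.
producer.send('user-events', {'type': 'user_created', 'user_id': 'u-1024'})
producer.flush()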
2) Event Carried State Transfer
In this approach, the event's recipient also obtains the data it needs to take further action. For example, the aforementioned user account service might send out an event with a data packet including the new user's login ID, full name, hashed password, and other relevant information. This architecture may appeal to developers experienced with RESTful interfaces. However, depending on the system's complexity, it can result in a lot of data traffic on the network and data duplication in storage.
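Continuing the illustrative sketch above, the same event in its state-carried form might look like this, with the payload mirroring the fields mentioned in the text (all values are placeholders):
from json import dumps
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092',
                         value_serializer=lambda x: dumps(x).encode('utf-8'))

# Event-carried state transfer: the notification now carries the state
# that other services would otherwise have to fetch for themselves.
producer.send('user-events', {
    'type': 'user_created',
    'user_id': 'u-1024',
    'login_id': 'ada',
    'full_name': 'Ada Lovelace',
    'hashed_password': '<hashed-password>',
})
producer.flush()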
3) Event Sourcing
This model's purpose is to describe every change of state in the system as an event, with each event recorded in chronological sequence. The event stream thereby becomes the system's primary source of truth: replaying the sequence of events should reproduce the state of, say, a SQL database at any particular moment in time. This approach has a lot of interesting potential, but it can be challenging to get right, especially when events involve interaction with external systems.
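Here is a toy sketch of the replay idea, using an invented bank-account event log; real event-sourced systems add versioning, snapshots, and durable storage on top of this:
# Event sourcing in miniature: the ordered event log is the source of truth,
# and the current state is derived by replaying it from the beginning.
events = [
    {"type": "account_opened", "balance": 0},
    {"type": "deposited", "amount": 100},
    {"type": "withdrawn", "amount": 30},
]

def apply(state, event):
    if event["type"] == "account_opened":
        return event["balance"]
    if event["type"] == "deposited":
        return state + event["amount"]
    if event["type"] == "withdrawn":
        return state - event["amount"]
    return state

state = 0
for event in events:  # replaying the log reconstructs the state at any point
    state = apply(state, event)
print(state)  # 70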
As you can see, Event Driven Architecture can be deployed in many patterns. Some are simpler to implement, while others are more adaptable to complicated requirements. Which pattern is appropriate for a specific use case depends on various parameters, including the number of microservices involved and how closely they should be coupled.
Steps to Build a Kafka Event-Driven Architecture
In this section, you will understand how to create a Kafka Event Driven Architecture using Python. Before proceeding further, ensure you have started the Kafka and Zookeeper services.
In the given Kafka Event Driven Architecture example, the producer will send an event to Kafka along with a timestamp, and the consumer will receive this event and print the timestamp. Follow the steps below to get started:
- Step 1: Set Up the Python Environment
- Step 2: Configure the Event Producer
- Step 3: Configure the Event Consumer
- Step 4: Execute the Kafka Event Driven Architecture
Step 1: Set Up the Python Environment
After you have set up Docker and started the Kafka and Zookeeper services, create two Python projects, namely “producer” and “consumer”. Alternatively, you can simply clone the GitHub repository from here.
Download the requirements.txt file from the above link; it contains the kafka-python==2.0.2 dependency. Then run the following command in each project's terminal to install the dependencies:
python3 -m pip install -r requirements.txt
Step 2: Configure the Event Producer
After setting up the Python dependency, let’s proceed to set up the Event Producer. It will produce timestamps and send them to the Event Consumer via Kafka. Create a main.py file in the producer Python project and enter the following code snippet:
from kafka import KafkaProducer
from datetime import datetime
from json import dumps
from time import sleep

# Connect to the local Kafka broker and JSON-encode every outgoing value.
producer = KafkaProducer(
    bootstrap_servers='localhost:9092',
    value_serializer=lambda x: dumps(x).encode('utf-8'),
)

while True:
    # Emit the current time as an event every five seconds.
    timestampStr = datetime.now().strftime("%H:%M:%S")
    print("Sending: " + timestampStr)
    producer.send('timestamp', timestampStr)
    sleep(5)
In the above code:
- The bootstrap_servers parameter provides Kafka’s host location. By default, it is localhost:9092.
- value_serializer specifies how the messages will be encoded.
The code produces a timestamp string in HH:MM:SS format every five seconds and sends it to the timestamp topic.
Step 3: Configure the Event Consumer
After the producer creates timestamps, you need a consumer to listen to them. So, create a main.py file in the consumer Python project and enter the following code snippet:
from kafka import KafkaConsumer
from json import loads

# Subscribe to the 'timestamp' topic and JSON-decode every incoming value.
consumer = KafkaConsumer(
    'timestamp',
    value_deserializer=lambda x: loads(x.decode('utf-8')),
)

for message in consumer:
    # Blocks until the next event arrives, then prints its payload.
    print(message.value)
In the above code:
- value_deserializer specifies how the received messages will be decoded.
The above code will print the timestamp received from the Event Producer.
Step 4: Execute the Kafka Event Driven Architecture
After you have completed the above steps, it's time to test your Kafka Event Driven Architecture. Your project structure will look like this:
code /
- docker-compose.yml
- producer
-- main.py
- consumer
-- main.py
In your code directory, Kafka and Zookeeper can be started using the following command:
docker-compose up -d
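If you are not using the compose file from the cloned repository, a minimal docker-compose.yml along the following lines should work; the image tags and listener settings here are assumptions, not the repository's exact file:
# Single-broker development setup; clients on the host connect to localhost:9092.
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:7.3.0
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1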
Next, change to the producer directory and activate the environment using the following command:
$ source venv/bin/activate
(venv) $ python3 main.py
Similarly, activate the environment and run main.py in the consumer directory.
You will now see the output of the Kafka Event Driven Architecture you just built: the Event Producer sends timestamps, and the Event Consumer listens for them and prints them.
Hurray! You have learned the basic steps to start with Kafka Event Driven Architecture. If you wish to generate and print timestamps in JSON format, you can refer to this article.
Benefits & Use Cases of Kafka Event Driven Architecture
- Decoupling: Brokers decouple services, making it easier to add new ones or adjust existing ones. Long synchronous call chains are broken up, and tightly synced workflows can be decomposed more easily.
- Efficient State Transfer: Events carry your system's dataset. Streams offer a convenient way to distribute that data so it can be reassembled and queried within a bounded context. Messages are buffered and consumed when resources become available.
- Faster Operations: Data from several services is easy to merge, join, and enrich, and joins are quick and straightforward to execute. Event-driven services also scale easily for high-volume event processing.
- Easy Traceability: When there is a central, immutable, retained log documenting each action as it unfolds in time, debugging errors and issues becomes much simpler.
Use Cases of Kafka Event-Driven Architecture
Kafka Event Driven Architecture can be implemented across a wide variety of industries, wherever systems need to capture and react to events in real time.
Conclusion
In this article, we have provided an in-depth understanding of Kafka Event Driven Architecture. You understood the need for Event Driven Architecture, its components, and commonly deployed patterns in the industry. In addition, you learned the key steps to building a Kafka Event Driven Architecture using Python. At the end of this article, you discovered some of the use cases and benefits of Kafka Event Driven Architecture. Hence, with the increase in popularity of Event Driven Architectures, it is critical to identify the right method to boost your business workflow and simplify complex tasks.
Sign up for a 14-day free trial with Hevo and streamline your data integration. Also, check out Hevo’s pricing page for a better understanding of the plans.
Is Kafka an event-driven architecture?
Apache Kafka is not itself an architecture; it is a distributed event streaming platform that is commonly used as the backbone of event-driven architectures. It supports EDA patterns such as real-time event processing, event sourcing, command query responsibility segregation (CQRS), and pub/sub messaging.
Is Kafka an event or a message?
In Apache Kafka, events are also called messages or records. A Kafka record consists of headers, a key, a value, and a timestamp. Headers contain metadata as key-value pairs (string keys with byte-array values), which consumers can read to make decisions based on that metadata.
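A small kafka-python sketch showing those parts of a record being set explicitly; the topic name, key, and header are illustrative:
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092')

# A Kafka record: key, value, and headers (a timestamp is added automatically).
producer.send(
    'user-events',                             # topic (illustrative)
    key=b'u-1024',                             # key: drives partition assignment
    value=b'{"type": "user_created"}',         # value: the event payload
    headers=[('source', b'signup-service')],   # headers: string/bytes metadata pairs
)
producer.flush()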
What is the biggest limitation of using event sourcing with Kafka?
First, Kafka only guarantees the order of events within a single topic partition. This means that to guarantee the order of events within a single stream, you’d have to create a partition per stream or a single partition topic per aggregate. Neither of these approaches scales well.
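To illustrate the ordering constraint, Kafka's keyed partitioning can at least keep one aggregate's events in order, since records with the same key hash to the same partition; the topic, key, and payloads below are illustrative:
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092')

# All events for account 'acct-42' share a key, so they land on the same
# partition and Kafka preserves their relative order.
for payload in (b'opened', b'deposited:100', b'withdrawn:30'):
    producer.send('account-events', key=b'acct-42', value=payload)
producer.flush()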
What is the difference between event-driven architecture and event loop?
The event loop is a key component of event-driven architecture, a programming paradigm widely used in graphical user interfaces (GUIs), web development, and other interactive applications. In event-driven programming, the execution flow is determined by events rather than following a linear sequence of instructions.