Apache Kafka is a distributed publish-subscribe messaging platform explicitly designed to handle real-time streaming data. It supports distributed streaming, pipelining, and replay of data feeds for fast, scalable workflows. In today’s disruptive tech era, raw data needs to be processed, reprocessed, evaluated, and managed in real time.

Apache Kafka has proved itself as a great asset when it comes to performing message streaming operations. The main architectural ideas of Kafka were created in response to the rising demand for scalable high-throughput infrastructures that can store, analyze, and reprocess streaming data. 

Apart from the publish-subscribe messaging model, Apache Kafka also supports a queueing style of consumption, which helps users build enhanced real-time streaming data pipelines and real-time streaming applications.

While the publish-subscribe method is multi-subscriber, it cannot be deployed to distribute work across multiple worker processes since each message is sent to each subscriber. 

In this article, we shall learn more about Apache Kafka Queue.

What is Apache Kafka?

Apache Kafka was created by a team led by Jay Kreps, Jun Rao, and Neha Narkhede at LinkedIn in 2010. The initial goal was to tackle the challenge of low-latency ingestion of massive volumes of event data from the LinkedIn website and infrastructure into a lambda architecture that used Hadoop and real-time event processing technologies. While the transition to “real-time” data processing created a new urgency, no existing solutions addressed it.

There were robust solutions for feeding data into offline batch systems, but they disclosed implementation details to downstream users and employed a push paradigm that could rapidly overwhelm a consumer. They were also not created with the real-time use case in mind. Apache Kafka was built to resolve these pain points: its fault-tolerant architecture is highly scalable and can easily handle billions of events. At present, it is maintained by the Apache Software Foundation, with Confluent, a company founded by Kafka’s original creators, as a major contributor.

Key Features of Kafka

Here are 3 major features of Apache Kafka:

Scalability

Apache Kafka supports high-throughput sequential writes and partitions topics for highly scalable reads and writes. As a result, Kafka makes it possible for several producers and consumers to read and write at the same time. Additionally, topics divided into many partitions can take advantage of storage across multiple servers, allowing users to reap the benefits of the combined capacity of numerous disks.
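For illustration, here is a minimal sketch of creating such a partitioned topic with Kafka’s Java AdminClient. The broker address, topic name, and partition count are illustrative, and the replication factor of 1 assumes a single-broker development cluster:

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    import java.util.Collections;
    import java.util.Properties;

    public class CreatePartitionedTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Illustrative broker address; point this at your own cluster.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (AdminClient admin = AdminClient.create(props)) {
                // Six partitions allow up to six consumers in one group to read in
                // parallel and spread the topic's data across the brokers' disks.
                NewTopic topic = new NewTopic("user-events", 6, (short) 1);
                admin.createTopics(Collections.singletonList(topic)).all().get();
            }
        }
    }

With more partitions, more producers and consumers can work on the same topic at once, which is where the horizontal scalability comes from.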

Reliability

Through its fundamental use of replication, the Kafka architecture inherently achieves failover. When topics are created with a defined replication factor, topic partitions are replicated across multiple Kafka brokers or nodes. If a Kafka broker fails, an in-sync replica (ISR) takes over the leadership role for its partitions and continues to serve the data without interruption.
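Replication itself is set per topic via the replication factor, but a producer can complement it by waiting for all in-sync replicas to acknowledge each write. The sketch below shows the relevant producer settings; the broker address, topic, and payload are illustrative:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    import java.util.Properties;

    public class ReliableProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // acks=all: the write succeeds only after every in-sync replica has it,
            // so a single broker failure does not lose an acknowledged message.
            props.put(ProducerConfig.ACKS_CONFIG, "all");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("user-events", "user-123", "signed-up"));
            }
        }
    }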

Log Compaction

Apache Kafka always keeps the latest known value for each record key, thanks to log compaction. It preserves only the most recent version of a record while deleting earlier copies with the same key. This aids in data replication across nodes and serves as a re-syncing tool for failed nodes.
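Log compaction is enabled per topic through the cleanup.policy setting. The sketch below creates a compacted topic with the Java AdminClient; the topic name and broker address are illustrative:

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.config.TopicConfig;

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;

    public class CreateCompactedTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative

            try (AdminClient admin = AdminClient.create(props)) {
                NewTopic topic = new NewTopic("user-profiles", 3, (short) 1)
                        // Keep only the latest record per key instead of deleting by age.
                        .configs(Map.of(TopicConfig.CLEANUP_POLICY_CONFIG,
                                        TopicConfig.CLEANUP_POLICY_COMPACT));
                admin.createTopics(Collections.singletonList(topic)).all().get();
            }
        }
    }

A consumer that reads such a topic from the beginning ends up with the latest value for every key, which is what makes compaction useful for re-syncing state after a failure.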

Simplify Apache Kafka Data Analysis with Hevo’s No-code Data Pipeline

Hevo Data is a No-code Data Pipeline that offers a fully managed solution to set up data integration from Apache Kafka and 100+ Data Sources (including 30+ Free Data Sources) and will let you directly load data to a Data Warehouse or the destination of your choice. It will automate your data flow in minutes without writing any line of code. Its fault-tolerant architecture makes sure that your data is secure and consistent. Hevo provides you with a truly efficient and fully automated solution to manage data in real-time and always have analysis-ready data.


Let’s look at some of the salient features of Hevo:

  • Fully Managed: It requires no management and maintenance as Hevo is a fully automated platform.
  • Data Transformation: It provides a simple interface to perfect, modify, and enrich the data you want to transfer.
  • Real-Time: Hevo offers real-time data migration. So, your data is always ready for analysis.
  • Schema Management: Hevo can automatically detect the schema of the incoming data and map it to the destination schema.
  • Scalable Infrastructure: Hevo has in-built integrations for 100+ sources that can help you scale your data infrastructure as required.
  • Live Monitoring: Advanced monitoring gives you a one-stop view to watch all the activities that occur within a Data Pipeline.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.

Understanding the Apache Kafka Architecture

Before you get familiar with the working of a streaming application, you need to understand what qualifies as an event. An event is a unique piece of data that can also be considered a message. For example, when a user registers with the system, the activity triggers an event. This event could include details about the registration, such as the user’s name, email, password, and location.

[Figure: Apache Kafka Architecture]

Generally, Apache Kafka acts as a broker among processes, applications, and servers. Here, the event (message) sender is known as the producer, and the receiver, which consumes the message by subscribing to it, is known as the consumer. Producers can include web servers, applications, IoT devices, monitoring agents, and other data sources that constantly create events.

Messages in Apache Kafka are transmitted in batches, which are referred to as record batches. The producer accumulates messages in memory and delivers them as a batch, either after a certain number of messages have been collected or after a certain latency bound has elapsed.
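On the producer side, this batching behaviour is governed mainly by the batch.size and linger.ms settings. The sketch below is illustrative; the broker address, topic, and values are assumptions, not prescriptions:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    import java.util.Properties;

    public class BatchingProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Send a batch once roughly 32 KB of records have accumulated for a partition...
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, Integer.toString(32 * 1024));
            // ...or after 10 ms have passed, whichever comes first (the latency bound).
            props.put(ProducerConfig.LINGER_MS_CONFIG, "10");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // For example, the user-registration event mentioned earlier.
                producer.send(new ProducerRecord<>("user-registrations", "user-123",
                        "{\"name\":\"Jane\",\"email\":\"jane@example.com\",\"location\":\"NYC\"}"));
            }
        }
    }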

A consumer can request a message from the Kafka broker once the producer pushes messages to the Kafka server or broker. Messages are stored in topics on each broker. Topics are separated into partitions, and each message is assigned to one of these partitions. You can instruct Apache Kafka to keep copies of the message in multiple partitions on different brokers via replication. With such practices, you would not lose any message if a broker fails, making Kafka fault-tolerant.

Consumers must make requests to brokers indicating their readiness to receive data. This pull-based approach prevents the consumer from becoming overburdened with messages and allows it to fall behind and catch up as needed. Because the consumer pulls all accessible messages after its present position in the log, a pull-based system also allows aggressive batching of the data delivered to the consumer. Furthermore, if the consumer fails or crashes, it can retry and pick up where it left off using the offset.
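A minimal sketch of this pull loop in Java is shown below; the broker address, topic, and group id are illustrative, and error handling is omitted:

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    public class PullingConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // illustrative
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "registration-processors");  // illustrative
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("user-registrations"));
                while (true) {
                    // Pull whatever is available after the consumer's current offset.
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("partition=%d offset=%d value=%s%n",
                                record.partition(), record.offset(), record.value());
                    }
                    // Committing offsets lets a restarted consumer pick up where it left off.
                    consumer.commitSync();
                }
            }
        }
    }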

ZooKeeper is in charge of managing and maintaining the cluster’s brokers. In other words, ZooKeeper keeps track of the cluster’s brokers: it determines which brokers have crashed and which brokers have just been added to the cluster, as well as the brokers’ lifetime.

When ZooKeeper notifies the producers and consumers of the arrival or failure of a broker, they can make a decision and begin coordinating their work with another broker. Hence, the brokers in a cluster must send heartbeat messages to ZooKeeper to keep it informed that they are alive.

What are Apache Kafka Queues?

In the Apache Kafka queueing system, messages are saved in a queue fashion. Messages in the queue can be ingested by one or more consumers, but each message can be consumed by only one consumer. As soon as a consumer reads a message, it is removed from the Apache Kafka Queue.

This is in stark contrast to the publish-subscribe system, where messages are persisted in a topic. There, consumers can subscribe to one or more topics and consume all the messages present in those topics. You can, however, use the publish-subscribe model to turn a topic into a message queue with application logic: once a consumer group has read a message, the application logic deletes it from the topic.

[Figure: Apache Kafka Queue]

One key advantage is that the Apache Kafka Queue helps in segregating the work so that each consumer receives a unique set of data to process. As a result, there is no overlap, allowing the workload to be divided and scaled horizontally. When a new consumer is added to a queue, Kafka switches to share mode and splits the data between the two. This sharing continues until the number of consumers reaches the number of partitions specified for that topic.

A new consumer will not receive any additional messages if the number of consumers exceeds the number of predefined partitions. This situation arises because each consumer must be assigned at least one partition, and if no partition is free, new consumers must wait idle.

Moreover, queueing is better suited to imperative programming, where messages are similar for consumers in the same domain, than to event-driven programming, where a single event might result in different actions from the consumers’ end, varying from domain to domain. However, as mentioned above, its shortcoming is that it is not multi-subscriber: once a consumer reads the data, it is gone.

Creating Apache Kafka Queue

To create an Apache Kafka queue, you need the following two topics:

  • The Queue topic in the Apache Kafka Queue contains the messages to be processed.
  • The Markers topic in the Apache Kafka Queue contains start and end markers for each message. These markers help in tracking messages that need to be redelivered.

Now, to run an Apache Kafka Queue, you create a standard consumer and then begin reading messages from the most recently committed offset (a rough sketch of this loop in Java follows the list below):

  • Read a message from the queue topic to process.
  • Send a start marker with the message offset to the markers topic and wait for Apache Kafka to acknowledge the transmission.
  • Commit the offset of the message read from the queue to Apache Kafka.
  • The message may be processed once the marker has been sent and the offset has been committed.
  • When (and only when) the processing is complete, send an end marker to the markers topic, containing the message offset once more. There is no need to wait for a send acknowledgment.
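As a rough illustration only, the loop above might look like the following in Java. The topic names (queue, markers), the marker format, and the offset key are assumptions made for the sketch; the Redelivery Tracker, error handling, and client configuration are omitted:

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    import java.time.Duration;
    import java.util.Collections;

    public class QueueConsumerSketch {
        private final KafkaConsumer<String, String> queueConsumer;   // reads the queue topic
        private final KafkaProducer<String, String> markerProducer;  // writes to the markers topic

        QueueConsumerSketch(KafkaConsumer<String, String> queueConsumer,
                            KafkaProducer<String, String> markerProducer) {
            this.queueConsumer = queueConsumer;
            this.markerProducer = markerProducer;
            queueConsumer.subscribe(Collections.singletonList("queue"));
        }

        void runOnce() throws Exception {
            // 1. Read messages from the queue topic.
            ConsumerRecords<String, String> records = queueConsumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> record : records) {
                String offsetKey = record.partition() + ":" + record.offset();
                // 2. Send a start marker with the message offset and wait for the acknowledgment.
                markerProducer.send(new ProducerRecord<>("markers", offsetKey, "start")).get();
            }
            // 3. Commit the offsets of the messages read from the queue topic.
            queueConsumer.commitSync();
            for (ConsumerRecord<String, String> record : records) {
                // 4. Only now is the message processed.
                process(record);
                // 5. Send an end marker; there is no need to wait for this acknowledgment.
                String offsetKey = record.partition() + ":" + record.offset();
                markerProducer.send(new ProducerRecord<>("markers", offsetKey, "end"));
            }
        }

        private void process(ConsumerRecord<String, String> record) {
            System.out.println("processing " + record.value()); // placeholder for business logic
        }
    }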

You can also start several Redelivery Tracker components, which consume the markers topic and redeliver messages when appropriate. The Redelivery Tracker in this context is an Apache Kafka application that reads data from the markers topic and keeps a list of messages that haven’t been processed yet.

Kafka as a Queue

[Figure: Kafka as a Queue]

To create an application that consumes data from Kafka, you can write a consumer (client), point it at a topic (or more than one, but for simplicity, let’s just assume a single one), and consume data from it.

If one consumer is unable to keep up with the rate of production, simply start more instances of your consumer (i.e. scale out horizontally) and the workload will be distributed among them. All of these instances can be grouped together into a single (logical) entity known as the Consumer Group.

For fault-tolerance and scalability, a Kafka topic is divided into partitions. Because each consumer instance in a group processes data from a non-overlapping set of partitions, Consumer Groups enable Kafka to behave like a Queue (within a Kafka topic).

In the diagram above, CG1 and CG2 represent Consumer Groups 1 and 2, which consume from a single Kafka topic with four partitions (P0 to P3).
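As a rough sketch of this behaviour (the topic name, group id, and thread count are illustrative), the snippet below starts three consumer instances that all share the group id cg1; Kafka assigns each of them a disjoint subset of the topic’s partitions, so every message is handled by exactly one of the three:

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    public class ConsumerGroupDemo {
        public static void main(String[] args) {
            // Three instances of the same consumer, all in one group: Kafka divides the
            // topic's partitions among them, so the workload is shared (queue behaviour).
            for (int i = 0; i < 3; i++) {
                final int id = i;
                new Thread(() -> runConsumer("worker-" + id)).start();
            }
        }

        static void runConsumer(String name) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "cg1");                     // same group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("orders"));
                while (true) {
                    for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                        System.out.println(name + " got offset " + record.offset());
                    }
                }
            }
        }
    }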

Kafka as a Topic

The key point here is that all applications require access to the same data (i.e. from the same Kafka topic). In the diagram below, Consumer Group A and Consumer Group B are two separate applications that will both receive all of the data from a topic.

[Figure: Kafka as a Topic]
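In code, the only difference between the two applications is the group.id they are started with; everything else can be identical. A minimal sketch, where the group names, topic, and broker address are illustrative:

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    import java.util.Collections;
    import java.util.Properties;

    public class TwoApplications {
        // Because the two consumers use different group ids, each group independently
        // receives every message in the topic (publish-subscribe behaviour).
        static KafkaConsumer<String, String> buildConsumer(String groupId) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
            props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("user-events"));
            return consumer;
        }

        public static void main(String[] args) {
            KafkaConsumer<String, String> groupA = buildConsumer("consumer-group-a");
            KafkaConsumer<String, String> groupB = buildConsumer("consumer-group-b");
            // Both will see all messages in the topic; within each group, the messages
            // would be split among that group's instances, as in the previous example.
        }
    }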

Conclusion

In this article, we learned about the Apache Kafka architecture and how it uses an Apache Kafka queueing messaging system. The Apache Kafka Queueing system proves useful when you need messages to be deleted after being viewed by consumer groups. It also enables you to distribute messages across several consumer instances, allowing it to be highly scalable.

Extracting complex data from a diverse set of data sources to perform insightful analysis can be difficult, which is where Hevo comes in! Hevo Data provides a faster way to move data from databases or SaaS applications such as Apache Kafka into your Data Warehouse or a destination of your choice so that it can be visualized in a BI tool. Hevo is completely automated, so no coding is required.


Hevo Data, with its strong integration with 100+ sources (such as Apache Kafka) & BI tools, allows you to not only export data from sources & load it into destinations, but also transform & enrich your data & make it analysis-ready, so that you can focus on your key business needs and perform insightful analysis using BI tools.

Give Hevo Data a try and sign up for a 14-day free trial today. Hevo offers plans & pricing for different use cases and business needs, check them out!

Share your experience of understanding Apache Kafka Queues for Messaging in the comments section below.

Preetipadma Khandavilli
Technical Content Writer, Hevo Data

Preetipadma is a dedicated technical content writer specializing in the data industry. With a keen eye for detail and strong problem-solving skills, she expertly crafts informative and engaging content on data science. Her ability to simplify complex concepts and her passion for technology makes her an invaluable resource for readers seeking to deepen their understanding of data integration, analysis, and emerging trends in the field.
