Heroku Kafka Deployment: 3 Easy Steps

By Najam Ahmed | Published: February 16, 2022


Heroku is an incredibly popular platform as a service (PaaS) provider that supports multiple programming languages. Developed back in 2007, Heroku was one of the first cloud platforms in the world. It originally offered support for only Ruby, but over time, the platform has expanded its support for other programming languages, including Clojure, Python, Java, Node.js, Scala, Go, and PHP. 

That is one of the main reasons why Heroku has become such a popular choice for developers today. Just as importantly, Heroku offers Apache Kafka as a fully managed add-on, and deploying it is what this guide covers.


What is Kafka?

For those who don’t know, Apache Kafka is a distributed commit log that’s designed for rapid communication between producers and consumers. 

Think of it as a messaging platform that lets producers feed distributed applications capable of handling enormous volumes of transactions or events. From user activity streams to log events to telemetry data from devices, there are tons of events worth recording.

Using Kafka on Heroku brings plenty of benefits. Kafka is a popular choice for developers because it lets them redefine the connections between operations, time, and data in an application: instead of keeping only the current state in tables, Kafka records all the different change events that take place in your application.

This allows for incredibly detailed auditing and simulation, and above all, makes data recovery a breeze. But, before we go into detail about how to deploy Heroku Kafka, here are a few reasons why you would want to do that in the first place. 

What are the Key Features of Kafka?

Apache Kafka is extremely popular due to characteristics that ensure uptime, make scaling simple, and allow it to manage large data volumes. Let’s take a glance at some of the robust features it offers:

  • Scalable: Kafka’s partitioned log model distributes data over numerous servers, allowing it to scale beyond what a single server can handle.
  • Fast: Kafka decouples data streams, resulting in exceptionally low latency and high speed.
  • Durable: Data is written to disk, and partitions are distributed and replicated across several servers. This helps safeguard data from server failure, making Kafka fault-tolerant and durable.
  • Fault-Tolerant: The Kafka cluster can cope with node and broker failures. Because partitions are replicated, a healthy replica takes over automatically when a broker goes down.
  • Extensibility: Given Kafka’s prominence in recent years, many software vendors have developed connectors for it. These allow additional features, such as integration with other applications, to be added quickly. Check out how you can integrate Kafka with Redshift and Salesforce.
  • Log Aggregation: Since modern systems are often distributed, log data from many system components must be centralized in a single location. By centralizing data from all sources, regardless of form or volume, Kafka frequently serves as a single source of truth.
  • Stream Processing: Kafka’s fundamental strength is performing real-time computations on event streams. It ingests, stores, and analyzes streams of data as they are created, at any scale, from real-time data processing to dataflow programming.
  • Metrics and Monitoring: Kafka is frequently used to track operational data. This entails compiling data from scattered apps into centralized feeds of real-time metrics. To read more about how you can analyze your data in Kafka, refer to Real-time Reporting with Kafka Analytics.

What are the Components of Kafka?

Clients

Clients allow producers (publishers) and consumers (subscribers) to be created in microservices and APIs. Client libraries exist for a wide variety of programming languages.
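As an illustration, here is a minimal consumer sketch using the kafka-python client library (one of many available clients; the topic name, broker address, and group id below are placeholders):

from kafka import KafkaConsumer  # pip install kafka-python

# Subscribe to a topic and print each record as it arrives.
consumer = KafkaConsumer(
    "user-activity",                       # placeholder topic name
    bootstrap_servers=["localhost:9092"],  # placeholder broker address
    group_id="example-group",              # placeholder consumer group
    auto_offset_reset="earliest",          # start from the oldest record
)
for record in consumer:
    print(record.topic, record.partition, record.offset, record.value)

A producer works the same way in reverse: instantiate a KafkaProducer against the same brokers and call send() with a topic and a message payload.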

Servers

Servers can be Kafka brokers or Kafka Connect nodes. Brokers form the storage layer, while Kafka Connect is a tool for streaming data between Apache Kafka and other systems such as databases, APIs, or other Kafka clusters.

Zookeeper

Kafka leverages ZooKeeper to manage the cluster. ZooKeeper coordinates the brokers and keeps track of the cluster’s structure.

What is Heroku?

Heroku is a container-based cloud Platform-as-a-Service (PaaS). Its popularity among developers has grown rapidly in recent times because it is fully managed and simple to use, making it easy to deploy, manage, and scale applications on the platform and reach a target audience.

Since Heroku is fully managed, you do not have to worry about maintaining servers, hardware, or infrastructure. Instead, you can focus on building, managing, and deploying your apps using the modern tools, workflows, and polyglot runtimes Heroku provides, increasing your productivity and ultimately creating great high-performance applications that will succeed in the marketplace. Heroku supports languages such as Node.js, Ruby, Java, PHP, Python, Go, Scala, and Clojure, as well as any language that runs on Linux via a third-party buildpack, so developers have plenty of choice.

Finally, because of Heroku’s simple setup, it is an ideal tool for businesses with limited budgets, as well as for individuals or organizations that are just beginning to explore the opportunities the cloud has to offer.

What are the Key Features of Heroku?

  • Support for Modern Open Source Languages: Ability to run multiple languages from the same platform, including Node, Ruby, Java, Clojure, Scala, Go, Python, and PHP—choose the best technologies for your application.
  • Smart Containers, Elastic Runtime: Your apps run in dynos, smart containers that are part of an elastic runtime platform that includes orchestration, load balancing, security, and logging, among other features.
  • Simple Horizontal and Vertical Scalability: Heroku Enterprise hosts some of the world’s busiest and most demanding applications. With no downtime, easily scale apps with a single click.
  • Trusted Application Operations: Heroku’s global operations and security team is on call 24 hours a day, seven days a week, allowing development teams to concentrate on creating more engaging user experiences.
  • Built for Continuous Integration and Delivery: Deploy using the API, Git, GitHub, or Docker. For consistent and automated application delivery, connect to the most popular CI systems and servers.
  • Leading Platform Tools and Services Ecosystem: From the Heroku Elements marketplace, you can create apps with Add-ons, customize language stacks with Buildpacks, and jumpstart projects with Buttons.

Why Deploy Apache Kafka on Heroku?

There are several reasons why you would want to deploy Apache Kafka on Heroku. Here are a few:

A Streamlined Developer Experience

One of the main reasons so many developers prefer Apache Kafka on Heroku is its straightforward web tooling, which makes Kafka easy to configure, provision, and operate. You can add topics, monitor important metrics, and manage logs directly through the CLI or from the Heroku dashboard.

Seamless Integration

If you need to scale apps that talk to Kafka horizontally or vertically, Heroku makes that easy. With Config Vars, your apps can also connect directly to your Kafka cluster, allowing you to focus on the core logic.
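For example, inside a dyno the add-on’s connection details arrive as ordinary environment variables, which application code can read directly (a minimal sketch; the individual variables are described in detail later in this guide):

import os

# Set automatically by the Apache Kafka on Heroku add-on.
kafka_url = os.environ["KAFKA_URL"]
print(kafka_url.split(","))  # one SSL URL per broker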

Regulate and Manage Data Streams Securely

Kafka lets you securely and safely manage all data streams, including both PII and PHI streams, so you can easily build real-time apps that are fully HIPAA compliant. This is ideal when you’re developing apps for regulated industries, such as healthcare.

If you’re going to build data-intensive apps that require microservices coordination, using Heroku Kafka is a great idea. Heroku Kafka gives you a variety of features, including but not limited to:

  • Greater Resiliency and Upgrades: Kafka on Heroku uses self-healing and automated recovery, so in case a broker is unavailable, the service will automatically replace failed elements to heal the cluster. 
  • Automated Operations: Kafka is a distributed system, but running it on Heroku removes the operational burden, automating things like provisioning, availability, and management. You can add Kafka to an application with a single CLI command, as shown after this list.
  • Simple Configuration: Kafka on Heroku offers a series of preconfigured plans that are optimized for basic use cases. They are available in both Private Spaces on Heroku as well as in Common Runtime. 
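For example, provisioning the add-on from the CLI is a single command (the plan and app names below are placeholders):

heroku addons:create heroku-kafka:standard-0 -a <example app>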

Simplify Kafka’s ETL & Data Analysis with Hevo’s No-code Data Pipeline

Hevo Data, a No-code Data Pipeline, helps load data from any data source such as Databases, SaaS applications, Cloud Storage, SDKs, and Streaming Services, and simplifies the ETL process. It supports 100+ Data Sources, including Kafka and 40+ Free Sources. Setting up a pipeline is a 3-step process: select the data source, provide valid credentials, and choose the destination.

Hevo loads the data onto the desired Data Warehouse or destination in real-time, enriching it and transforming it into an analysis-ready form without you having to write a single line of code. Its completely automated, fault-tolerant, and scalable pipeline architecture ensures that data is handled securely and consistently with zero data loss, and it supports different forms of data. The solutions provided are consistent and work with different BI tools as well.

GET STARTED WITH HEVO FOR FREE

Check out why Hevo is the Best:

  • Secure: Hevo has a fault-tolerant architecture that ensures that the data is handled securely and consistently with zero data loss.
  • Schema Management: Hevo takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the destination schema.
  • Minimal Learning: Hevo, with its simple and interactive UI, is extremely simple for new customers to work on and perform operations.
  • Hevo Is Built To Scale: As the number of sources and the volume of your data grows, Hevo scales horizontally, handling millions of records per minute with very little latency.
  • Incremental Data Load: Hevo allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
  • Live Monitoring: Hevo allows you to monitor the data flow and check where your data is at a particular point in time.

Simplify your Data Analysis with Hevo today! SIGN UP HERE FOR A 14-DAY FREE TRIAL!

How to Deploy Apache Kafka on Heroku?

For this guide, we assume that you have already added Apache Kafka as an add-on to your app while logged into Heroku. Before you get started, it should appear under the Resources section, just like Heroku Postgres.

Otherwise, you can provision Apache Kafka on Heroku: simply follow the link and click the button on the right.

Once you have provisioned Heroku Kafka, the next step is to click on the add-on name. This will open the console in a new tab, and you can then add a topic. 

Heroku Kafka Deployment: Adding Topics

When you click on Add Topic, you’ll be asked to give the topic a name. You can also set the Partitions field and decide whether to stick with the default values for the rest of the settings.
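If you prefer the CLI, the same topic can be created there instead (a sketch; the topic name, partition count, and app name are placeholders):

heroku kafka:topics:create my-example-topic --partitions 8 -a <example app>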

Once your topic is created, you need to define a consumer group. To do that, you’ll have to install the Heroku CLI and then execute an add-ons command. Since Kafka is already provisioned, there’s no need to provision it again from the CLI. To create a consumer group now, just run the following command:

heroku kafka:consumer-groups:create <example group> -a <example app>

Now, you can view the list of consumer groups available on your Kafka deployment by running this command:

heroku kafka:consumer-groups -a <example app>

Once you’ve set this up, the next step is to gather all certificates and SSL URLs. 

You can create and manage as many topics as you like, and use whichever programming language you’re familiar with to produce and consume messages.

Heroku Kafka Deployment: Connecting to a Kafka Cluster

All connections to Kafka require SSL encryption and authentication by default. If you’ve provisioned a cluster in a Private Space, you can also connect via plaintext from within the space; for clusters in Shield Spaces, plaintext connections aren’t allowed.

When you connect over SSL, all of your traffic is encrypted and authenticated using an SSL certificate. Here are the environment variables the add-on sets for connecting over SSL (a connection sketch follows this list):

  • KAFKA_URL: A comma-separated list of SSL URLs for the Kafka brokers that make up the cluster.
  • KAFKA_CLIENT_CERT: The client certificate (in PEM format) required to authenticate clients against the Kafka brokers.
  • KAFKA_TRUSTED_CERT: The Kafka brokers’ SSL certificate, required for checking that the connection is being established with the right servers.
  • KAFKA_CLIENT_CERT_KEY: The client certificate key (also in PEM format) required to authenticate clients against the Kafka brokers.
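To make this concrete, here is a sketch of a producer connecting over SSL with these variables, using the kafka-python client. This is illustrative rather than canonical: the helper that writes the PEM config vars to temporary files and the topic name are assumptions, and other client libraries expose equivalent SSL options.

import os
import tempfile
from urllib.parse import urlparse

from kafka import KafkaProducer  # pip install kafka-python

def pem_to_file(env_key):
    # The add-on stores raw PEM contents in the config var,
    # but most clients expect a file path.
    f = tempfile.NamedTemporaryFile(suffix=".pem", delete=False)
    f.write(os.environ[env_key].encode("utf-8"))
    f.close()
    return f.name

# KAFKA_URL is a comma-separated list of kafka+ssl:// broker URLs.
brokers = [
    f"{urlparse(u).hostname}:{urlparse(u).port}"
    for u in os.environ["KAFKA_URL"].split(",")
]

producer = KafkaProducer(
    bootstrap_servers=brokers,
    security_protocol="SSL",
    ssl_cafile=pem_to_file("KAFKA_TRUSTED_CERT"),
    ssl_certfile=pem_to_file("KAFKA_CLIENT_CERT"),
    ssl_keyfile=pem_to_file("KAFKA_CLIENT_CERT_KEY"),
    ssl_check_hostname=False,  # broker certificates may not match hostnames
)
producer.send("my-example-topic", b"hello from heroku")  # placeholder topic
producer.flush()

A consumer connects the same way: pass the identical SSL options to KafkaConsumer.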

Heroku Kafka Deployment: Deploy Your Code to Your App

Once you’ve created the topic and consumer group, deploy your code to the Kafka-enabled app you just set up. The code should run smoothly; once it’s deployed, run the following command to see it in action:

heroku open

Then, your event flows will start appearing in the Heroku dashboard, allowing you to easily gain an understanding of your data. 
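For reference, a deploy under Heroku’s standard Git workflow typically looks like this (assuming a heroku Git remote is already configured and your default branch is main):

git add .
git commit -m "Wire the app up to Kafka"
git push heroku main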

Heroku Kafka Plan and Pricing

The platform’s runtimes are currently available in a variety of plans, and dedicated clusters, optimized for high throughput and volume, are now available. Heroku continues to expand this set of plans to meet a wider range of requirements and to make evented architectures available to applications at all stages of development.

Common Runtime Plans:

Plan Name | Capacity | Max Retention | vCPU | RAM | Clusters
standard-0 | 150 GB | 2 weeks | 4 | 16 GB | 3 Kafka, 5 ZooKeeper
standard-1 | 300 GB | 2 weeks | 4 | 16 GB | 3 Kafka, 5 ZooKeeper
standard-2 | 900 GB | 2 weeks | 4 | 16 GB | 3 Kafka, 5 ZooKeeper
extended-0 | 400 GB | 6 weeks | 4 | 16 GB | 8 Kafka, 5 ZooKeeper
extended-1 | 800 GB | 6 weeks | 4 | 16 GB | 8 Kafka, 5 ZooKeeper
extended-2 | 2400 GB | 6 weeks | 4 | 16 GB | 8 Kafka, 5 ZooKeeper

Private Spaces Plans:

Plan Name | Capacity | Max Retention | vCPU | RAM | Clusters
private-standard-0 | 150 GB | 2 weeks | 4 | 16 GB | 3 Kafka, 5 ZooKeeper
private-standard-1 | 300 GB | 2 weeks | 4 | 16 GB | 3 Kafka, 5 ZooKeeper
private-standard-2 | 900 GB | 2 weeks | 4 | 16 GB | 3 Kafka, 5 ZooKeeper
private-extended-0 | 400 GB | 6 weeks | 4 | 16 GB | 8 Kafka, 5 ZooKeeper
private-extended-1 | 800 GB | 6 weeks | 4 | 16 GB | 8 Kafka, 5 ZooKeeper
private-extended-2 | 2400 GB | 6 weeks | 4 | 16 GB | 8 Kafka, 5 ZooKeeper

Shield Spaces Plans:

Plan Name | Capacity | Max Retention | vCPU | RAM | Clusters
shield-standard-0 | 150 GB | 2 weeks | 4 | 16 GB | 3 Kafka, 5 ZooKeeper
shield-standard-1 | 300 GB | 2 weeks | 4 | 16 GB | 3 Kafka, 5 ZooKeeper
shield-standard-2 | 900 GB | 2 weeks | 4 | 16 GB | 3 Kafka, 5 ZooKeeper
shield-extended-0 | 400 GB | 6 weeks | 4 | 16 GB | 8 Kafka, 5 ZooKeeper
shield-extended-1 | 800 GB | 6 weeks | 4 | 16 GB | 8 Kafka, 5 ZooKeeper
shield-extended-2 | 2400 GB | 6 weeks | 4 | 16 GB | 8 Kafka, 5 ZooKeeper

Heroku Kafka Maintenance Queries

  • Why is this maintenance happening?

Apache Kafka on Heroku is a managed Kafka service, with one of the most important benefits being the provision of security and feature updates. As part of its Apache Kafka on Heroku offering, Heroku monitors for and patches security vulnerabilities proactively.

  • How can I protect my app against downtime and errors during maintenance?

Please use the guidelines for Robust Usage. This will protect you not only from maintenance errors but also from Kafka node outages, which are uncommon but still possible.

  • How long will maintenance take?

The amount of time maintenance takes is determined by the cluster’s size and load. During maintenance, new Kafka nodes are added before the old nodes are removed. While this reduces the impact on the cluster, it takes time: maintenance usually lasts a few days, and on larger clusters it can last up to a week.

As partitions move between brokers during Kafka maintenance, Kafka clients may see small numbers of transient errors.

  • How do I find the maintenance status?

Check the status of your Kafka cluster with the heroku kafka:info command. The maintenance status is displayed for the duration of the maintenance:

=== KAFKA_URL
Plan:       heroku-kafka:standard-0
Status:     undergoing maintenance
...
  • How do I resolve NotLeaderForPartitionException errors?

Restarting your consumer and producer dynos is the quickest and easiest way to recover from this error.
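For example, a full restart from the CLI (the app name is a placeholder):

heroku ps:restart -a <example app>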

Conclusion

Kafka is one of the best transports for building data pipelines that transform stream data and surface key metrics. You can create pipelines and then use Heroku to access all event flows straight from the dashboard. Heroku Kafka lets you easily accept larger volumes of inbound events while giving you granular details about each one.

You can also remove or add downstream services and take full advantage of the durability that Kafka has to offer to ensure that no events are lost in case there is a disconnection. It’s a fantastic choice for those who want to gain full access to all events and examine their data in granular detail.

However, as a developer, extracting complex data from a diverse set of data sources like Databases, CRMs, Project Management Tools, Streaming Services, and Marketing Platforms into your Kafka cluster can seem quite challenging. If you are from a non-technical background or are new to the game of data warehousing and analytics, Hevo Data can help!

Visit our Website to Explore Hevo

Hevo Data will automate your data transfer process, allowing you to focus on other aspects of your business like Analytics, Customer Management, etc. The platform lets you transfer data from 100+ sources to Cloud-based Data Warehouses like Snowflake, Google BigQuery, Amazon Redshift, etc. It will provide you with a hassle-free experience and make your work life much easier.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand.

You can also have a look at our unbeatable pricing that will help you choose the right plan for your business needs!

Najam Ahmed
Freelance Technical Content Writer, Hevo Data

Skilled in freelance writing within the data industry, Najam is passionate about simplifying the complexities of data integration and data analysis through informative content for those delving deeper into these subjects.
