Real-time data in the form of events from sources such as databases, sensors, mobile devices, cloud services, and software applications is crucial for business growth.

  • A community-driven distributed message streaming platform like Apache Kafka helps ensure a continuous flow of data so that businesses can make strategic decisions in real time.
  • Kafka is one such unified solution, offering services to read, write, store, and process thousands of messages per second.
  • You can install Kafka on Mac, Windows, and Linux operating systems, as well as deploy it in the cloud. The Apache Kafka website provides several software versions you can use to install Kafka on Mac.
  • You can either manually download the Kafka package and install it, or use single-line Homebrew commands on Mac.

How to Install Kafka on Mac?

To install Kafka on Mac, you can follow the easy steps given below:

Step 1: Install Java on Mac Manually or via Homebrew

  • Before you begin to install Kafka on Mac, ensure that Java 8 or later is installed on your system.
  • You can check by running the java -version command in the terminal. If Java is installed, you will see output similar to the following:
(Image: java -version output)
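If you want to script this check, the following sketch (an addition to the tutorial, not part of the Kafka setup itself) prints the Java version when one is found and a hint otherwise:

```shell
# Check whether a Java runtime is available before installing Kafka.
# Note: java -version writes to stderr, so it is redirected to stdout here.
if command -v java >/dev/null 2>&1; then
  java -version 2>&1
else
  echo "Java not found: install Java 8 or newer first"
fi
```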

If you have not installed Java yet, then follow the steps given below to install Java on Mac:

  • Step 1: Visit the official Java downloads page and click the download link for the .dmg file.
  • Step 2: Double-click the downloaded file and follow the installation instructions to install Java on Mac. After a successful installation, you may delete the .dmg file.

If you have Homebrew installed on your Mac, you can install Java with a single command. Note that the older brew tap caskroom/cask and brew cask install syntax has been removed from Homebrew; the current form is:

$ brew install openjdk

(openjdk is one option; you can also install a packaged build such as Temurin with brew install --cask temurin.)

Step 2: Download & Install Kafka on Mac Manually or via Homebrew

To install Kafka on Mac manually, follow these steps:

  • Step 1: Visit the official Apache Kafka downloads page and, under the Binary Downloads section, click the version you want to download.
  • Step 2: After you click the version link, Apache will provide a mirror link for downloading Kafka. Save the file to your desired location.
  • Step 3: Open the terminal and navigate to the folder containing the downloaded file. You can extract the contents of the downloaded tar file with the following commands:
$ tar -xzf kafka_2.13-3.0.0.tgz 
$ cd kafka_2.13-3.0.0  

This completes the manual download and installation of Kafka on Mac. If you have Homebrew installed on your Mac, you can instead install Kafka in a single step by running the following command:

$ brew install kafka

Step 3: Run ZooKeeper & Kafka

Currently, it is necessary to run ZooKeeper alongside Kafka, as ZooKeeper is responsible for Kafka's cluster management. You don't need to install ZooKeeper separately; it comes with the Apache Kafka installation package. Note, however, that the Apache Kafka project has announced that ZooKeeper will not be required in future versions, as newer releases introduce KRaft mode as a replacement.

  • Step 1: From the root folder of Apache Kafka, you can run ZooKeeper by executing the following command:
$ bin/zookeeper-server-start.sh config/zookeeper.properties

On successful execution of this command, you will see an output as follows:

(Image: ZooKeeper starting output)
  • Step 2: Now, open a new terminal window & run the following command from the root of Apache Kafka to start the Kafka environment:
$ bin/kafka-server-start.sh config/server.properties

After this, you will see the output below confirming the successful run of the Kafka broker.

(Image: Kafka broker starting output)

Similarly, the ZooKeeper terminal will display a message for a new session. This completes the process to install Kafka on Mac.

(Image: ZooKeeper new session output)

Step 4: Setup the PATH Environment Variable

You can access the Kafka binaries from any directory by editing your PATH environment variable. To do this, add the following line to your shell profile (for example, ~/.zshrc if you use zsh), adjusting the path to wherever you extracted Kafka:

export PATH="$PATH:$HOME/kafka_2.13-3.0.0/bin"

Once you have done this, you can run Kafka commands without prefixing them with their full path.

After reloading your terminal, you can run the following command from any directory:

kafka-topics.sh
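The PATH change can be sketched and verified in the shell as follows. KAFKA_HOME below assumes Kafka was extracted to your home directory; adjust it to match your setup:

```shell
# Assumption: Kafka 2.13-3.0.0 was extracted to the home directory.
KAFKA_HOME="$HOME/kafka_2.13-3.0.0"

# Make the bin directory available in the current shell...
export PATH="$PATH:$KAFKA_HOME/bin"

# ...and persist it for future sessions (zsh is the default shell on macOS):
# echo "export PATH=\"\$PATH:$KAFKA_HOME/bin\"" >> ~/.zshrc

# Show the directory that was just appended to PATH.
echo "$PATH" | tr ':' '\n' | tail -n 1
```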

Step 5 (optional): Changing the Zookeeper and Kafka Storage Directory

It is recommended to change the default storage directories for ZooKeeper and Kafka (which live under /tmp and may be cleared on reboot) to avoid data loss and server connection issues.

For ZooKeeper:

  • Edit the dataDir property in ~/kafka_2.13-3.0.0/config/zookeeper.properties to point at a directory of your choice.
  • Launch ZooKeeper with the updated zookeeper.properties file.
  • Alternatively, create a copy of zookeeper.properties, edit the copy, and reference it in the ZooKeeper start command.

For Kafka:

  • Edit the log.dirs property in ~/kafka_2.13-3.0.0/config/server.properties to point at your required directory.
  • Launch Kafka with the updated server.properties file.
  • Alternatively, create a copy of server.properties, edit the copy, and reference it in the Kafka start command.
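The "edit a copy" approach can be sketched as follows. The sample properties file and target directory below are illustrative stand-ins, not the tutorial's actual files:

```shell
# Target directory for ZooKeeper data (an assumption; pick your own).
DATA_DIR="$HOME/kafka-data"
mkdir -p "$DATA_DIR/zookeeper"

# A minimal stand-in for config/zookeeper.properties.
cat > /tmp/zookeeper.properties <<'EOF'
dataDir=/tmp/zookeeper
clientPort=2181
EOF

# Rewrite dataDir in a copy, leaving the original file untouched.
sed "s|^dataDir=.*|dataDir=$DATA_DIR/zookeeper|" /tmp/zookeeper.properties \
  > /tmp/zookeeper-custom.properties

grep '^dataDir' /tmp/zookeeper-custom.properties

# ZooKeeper would then be started against the edited copy:
# bin/zookeeper-server-start.sh /tmp/zookeeper-custom.properties
```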

Creating a Kafka Topic

A topic is a feed name or category that you can use to publish records. Kafka topics are always multi-subscriber; a topic may have zero, one, or several subscribers to the data published to it.

You can use the following example to create a sample topic called “my-topic.” This example uses a single partition and a replication factor of 1.

./bin/kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --replication-factor 1 \
  --partitions 1 \
  --topic my-topic
  • Runs the Kafka script kafka-topics.sh to create a new topic.
  • --bootstrap-server localhost:9092 specifies the Kafka server to connect to, here running locally on port 9092.
  • --replication-factor 1 sets the topic’s replication factor to 1, meaning only a single copy of the data is stored (no redundancy).
  • --partitions 1 creates one partition for the topic, defining how data is divided.
  • --topic my-topic names the new topic as “my-topic.”

To publish a new message on the topic, run the following command and type your message on the console that appears.

./bin/kafka-console-producer.sh \
  --bootstrap-server localhost:9092 \
  --topic my-topic
  • Runs the Kafka script kafka-console-producer.sh to start a console-based producer.
  • --bootstrap-server localhost:9092 specifies the Kafka broker to send messages to, here running on localhost at port 9092 (this replaces the older, deprecated --broker-list flag).
  • --topic my-topic designates “my-topic” as the topic to which messages will be sent.
  • Allows the user to type messages in the console, which are then published to the specified Kafka topic.

Similarly, you can use the following command to consume a message.

./bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic my-topic \
  --from-beginning
  • Runs the Kafka script kafka-console-consumer.sh to start a console-based consumer.
  • --bootstrap-server localhost:9092 connects the consumer to the Kafka broker on localhost at port 9092.
  • --topic my-topic specifies “my-topic” as the topic from which messages will be consumed.
  • --from-beginning makes the consumer read all messages from the start of the topic, not just new messages.
  • Displays the consumed messages in the console for easy viewing.

What is Apache Kafka?

Apache Kafka is a popular open-source distributed event streaming platform. Providing an end-to-end solution, Kafka can efficiently read and write streams of events in real time while continuously importing and exporting your data from other data systems.

Its reliability and durability allow you to store streams of data securely for as long as you want. With best-in-class performance, low latency, fault tolerance, and high throughput, Kafka can handle and process thousands of messages per second in real time.

Launched by LinkedIn as an open-source messaging queue system in 2011, Kafka has since evolved into a full-fledged event streaming platform. It is an excellent tool for building real-time streaming data pipelines and applications that adapt to data streams. You can easily install Kafka on Mac, Windows, and Linux. Adding to its flexibility, Kafka works for both online and offline message consumption.

Key Features of Apache Kafka

Over the years, Apache Kafka has grown into a one-stop solution for stream-processing needs. Some of its eye-catching features are:

  • Scalability: Owing to its unique architecture, Kafka can easily scale all four of its elements (event producers, event processors, event consumers, and event connectors) without any downtime.
  • Time-Based Data Retention: Kafka offers a simple yet effective approach to fault tolerance. It persistently writes and replicates all your data to disk. You can also set a retention limit and recall stored data within that period.
  • Integration Support: Catering to everyone’s needs, Kafka provides integration points via the Connector API, allowing you to expand and grow. You can build integrations with third-party solutions, other messaging systems, and legacy applications, for example via the pre-built connectors of cloud-based ETL tools like Hevo Data.
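As an illustration of time-based retention, the following server.properties settings control how long the broker keeps data; the values shown here are Kafka's defaults:

```properties
# keep messages for 7 days (168 hours) before they become eligible for deletion
log.retention.hours=168
# how often the broker checks for log segments to delete (5 minutes)
log.retention.check.interval.ms=300000
```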

Conclusion

In this article, you have learned how to install Kafka on Mac both manually and via Homebrew. To install and successfully run Kafka on Mac, you need to ensure that Java 8 or later is installed beforehand. Kafka provides an open-source environment for collecting, processing, storing, and analyzing data at scale in real time. Due to its flexibility, reliability, and best-in-class performance, Kafka has many use cases in organizations globally, such as building data pipelines, leveraging real-time data streams, enabling operational metrics, and integrating data across countless sources.


For a more in-depth and holistic analysis of business performance and financial health, it is essential to consolidate data from Apache Kafka and all the other applications used across your business. However, extracting this complex data through ever-changing data connectors would require you to invest a portion of your engineering bandwidth to integrate, clean, transform, and load data into your data warehouse or a destination of your choice. A more effortless and economical choice is to explore a cloud-based ETL tool like Hevo Data.

FAQ

How do I install Kafka on my Mac?

You can install Apache Kafka on your Mac by downloading it from the official website, extracting the files, and configuring the server.properties file. Alternatively, you can use Homebrew for easier installation.

How to install Kafka on Mac using Brew?

Open your terminal and run the following commands:

  • Install Kafka via Homebrew: brew install kafka
  • Start ZooKeeper (if needed): brew services start zookeeper
  • Start Kafka: brew services start kafka

How to install Confluent Kafka on Mac?

Use Homebrew to install Confluent Kafka:

  • Install the Confluent Platform: brew install confluent-platform
  • Start the Kafka services: confluent local services start
  • Verify the installation with confluent local services status.

Sanchit Agarwal
Research Analyst, Hevo Data

Sanchit Agarwal is an Engineer turned Data Analyst with a passion for data, software architecture and AI. He leverages his diverse technical background and 2+ years of experience to write content. He has penned over 200 articles on data integration and infrastructures, driven by a desire to empower data practitioners with practical solutions for their everyday challenges.