The Debezium SQL Server Connector is one of the most widely used connectors for keeping track of changes in Databases. Since SQL Server is widely used for Databases, leveraging the Debezium SQL Server Connector makes it seamless for organizations to build superior real-time Applications. To identify changes in the Databases quickly, the connector takes snapshots and sends the events, i.e., row-level changes, to Kafka topics. Applications can then react to every real-time change in the Source Databases according to their use cases.

In this article, you will learn about Debezium SQL Server Connector and how to implement it with databases to capture real-time changes.

Prerequisites

  • Understanding of Event Streams

What are Debezium Connectors?


Debezium Connectors are tools that follow the Change Data Capture (CDC) pattern to keep track of real-time changes in Databases. Debezium Connectors are event streaming connectors that create events from changes in Databases and then deliver them to different Destinations. Before Debezium, Database Administrators used to record changes and save them to an additional source file, but they could not keep track of real-time changes as Digital Applications grew. Consequently, today several companies, including Amazon and Flipkart, use Debezium Connectors to simplify end-to-end business processes.

You can pick from a wide range of Debezium Connectors for MySQL, SQL Server, PostgreSQL, Oracle, etc., and build robust solutions that eliminate manual dependencies.

What is SQL Server?


SQL Server is a Relational Database Management System developed by Microsoft that uses SQL (Structured Query Language) to manage data. A Relational Database stores data in tables made up of rows and columns; in short, it follows a structure or format. With SQL, you can carry out operations like update, delete, and insert to manage a colossal amount of information quickly and effectively. One of the best advantages of SQL is that you can write a single query to access records from multiple tables. SQL Server also supports programming languages like Java, C++, Ruby, Visual Basic, and more, allowing interoperability across applications.

Simplify Data Analysis with Hevo’s No-code Data Pipeline!

Hevo Data, a No-code Data Pipeline, helps you directly transfer data from 100+ data sources to Data Warehouses, BI tools, or a destination of your choice in a completely hassle-free & automated manner.

Hevo is fully managed and completely automates the process of not only loading data from your desired source but also enriching the data and transforming it into an analysis-ready format without having to write a single line of code. Its fault-tolerant architecture ensures that the data is handled in a secure, consistent manner with zero data loss.

Get Started with Hevo for Free

Check out why Hevo is the Best:

  1. Secure: Hevo has a fault-tolerant architecture that ensures that the data is handled in a secure, consistent manner with zero data loss.
  2. Schema Management: Hevo takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the destination schema.
  3. Minimal Learning: Hevo, with its simple and interactive UI, is extremely simple for new customers to work on and perform operations.
  4. Hevo Is Built To Scale: As the number of sources and the volume of your data grows, Hevo scales horizontally, handling millions of records per minute with very little latency.
  5. Incremental Data Load: Hevo allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
  6. Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, E-Mail, and support calls.
  7. Live Monitoring: Hevo allows you to monitor the data flow and check where your data is at a particular point in time.
Sign up here for a 14-Day Free Trial!

Getting started with Debezium SQL Server Connector

The Debezium SQL Server Connector records the row-level changes committed to the Databases. When the connector connects to SQL Server for the first time, it takes a consistent snapshot of the Database Schemas. Once the snapshot is complete, the connector tracks the changes committed to SQL Server, generating insert, update, and delete events accordingly; all the events for a given table are stored in a dedicated Kafka topic. Various applications can then consume the information in these topics independently.

Snapshotting is how Debezium creates a baseline of the current state of the Databases and streams it to Kafka; it is needed because SQL Server CDC is not designed to store the complete history of Database changes.

The snapshots consist of the following steps:

  • Determine the tables to be captured.
  • Obtain a lock on each of the monitored tables so that no Structural Changes can be carried out on them during the snapshot. The snapshot.isolation.mode option controls the level of locking.
  • Read the maximum Log Sequence Number (LSN) position in the server’s Transaction Log.
  • Release the locks obtained in the previous step.
  • Scan all the tables and Schemas as valid at the LSN position read earlier.
  • Generate a read event for each row and write it to the corresponding table’s Kafka topic.
  • Finally, record the successful completion of the snapshot in the Connector Offsets.

A) Deploy the Debezium SQL Server Connector

You will need Kafka Connect to stream data between Kafka and other systems in a reliable and scalable way. To move data between such systems, you can leverage connectors.

There are mainly two types of connectors:

  • Source Connectors: Source Connectors know how to interact with the source system and send records to Kafka.
  • Sink Connectors: Sink Connectors take records from Kafka topics to other systems.

You can also use the Confluent SQL Server Connector to get data out of some SQL Server tables and then stream it into Kafka. But since you want to capture the real-time changes in the Databases as they happen, you must use a Debezium SQL Server Connector.

The steps involved in setting up Debezium SQL Server Connector are:

Step 1: Create a Database in SQL Server. Use the following set of queries.

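The original post shows these queries as a screenshot. Below is a minimal sketch of what such a setup might look like; the database and table names (TestDB, dbo.Customers) are illustrative assumptions, not values taken from the screenshot.

-- create a sample database and table to capture (names are assumed)
CREATE DATABASE TestDB;
GO
USE TestDB;
GO
CREATE TABLE Customers (
    id INT PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(100)
);
GO
INSERT INTO Customers VALUES (1, 'Alice', 'alice@example.com');
GO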

Step 2: You need some Docker images to start Kafka and Kafka Connect. You can get them by cloning Confluent’s cp-all-in-one repository. Run the following commands.

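A sketch of the commands the screenshot likely showed; the working directory name (kafka) is an assumption:

# create a working directory and clone Confluent's cp-all-in-one repository
mkdir kafka
cd kafka
git clone https://github.com/confluentinc/cp-all-in-one.git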

In the above commands, mkdir creates a directory and git clone copies the cp-all-in-one repository files into it. Hence, a cp-all-in-one folder is created under the kafka directory.


The cp-all-in-one repository has some sub-directories, and inside it you can find a docker-compose.yml file. When this file is opened in an editor, you can see the images that are deployed as services when the docker-compose command is used.


A few entries in this yml file are worth noting (an abridged excerpt follows the list):

  • image: the Docker image from which the Docker service is built.
  • CONNECT_REST_PORT: the port used to access the service.
  • CONNECT_PLUGIN_PATH: the path from which the service loads connector plugins.
  • cp-server-connect-datagen: an image that bundles Kafka Connect with some base tools and a connector for generating sample data.
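Below is an abridged, illustrative sketch of the kind of connect service definition found in cp-all-in-one’s docker-compose.yml; the exact image tag, ports, and paths vary by release:

# abridged excerpt of a connect service in docker-compose.yml (illustrative)
connect:
  image: cnfldemos/cp-server-connect-datagen:0.5.0-6.2.0
  ports:
    - "8083:8083"
  environment:
    CONNECT_REST_PORT: 8083
    CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"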

Step 3: Run Kafka and Kafka Connect using the below command.

cd into the directory where the docker-compose.yml file is located, and then run:

docker-compose up -d

Docker then pulls the required images and starts the services.


Step 4: Open the Confluent Control Center to check the created Kafka cluster.


The Confluent Control Center is reachable through port number 9021 on the host where Docker runs (http://localhost:9021), and there you can see the connected Kafka cluster.

Step 5: Use the following request to check the list of the default connectors through the Kafka Connect REST API. It calls the connector-plugins endpoint on port 8083.

GET http://127.0.0.1:8083/connector-plugins

To call the REST API, you can use a tool such as Postman or curl. In this tutorial, you will use Postman; the response lists the default connector plugins.
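For reference, here is a curl equivalent of the same request, assuming Kafka Connect is listening on localhost:8083 as configured above:

# list the connector plugins installed on the Connect worker
curl -s http://localhost:8083/connector-plugins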


Step 6: You need to install the Debezium SQL Server Connector, as it is not available in the default list. You need the connector’s .jar files, and the CONNECT_PLUGIN_PATH setting tells the Connect service where to load connectors from.

Let’s install the connector while Kafka is running in the Docker containers. Follow the below steps (a command sketch follows the list):

  • Download the Debezium SQL Server Connector you want to install.
  • Then, spin up (start) the Kafka cluster.
  • Use docker cp to copy the connector archive into the connect container.
  • Use docker exec to get into the bash shell of the connect container.
  • Now, un-tar the Debezium SQL Server Connector file into the plugin load path.
  • Then, come out of the container and use docker commit to commit the changes to a new image name.
  • Stop the running Kafka cluster. You can do this with docker-compose down.
  • Reference the new image in the Docker Compose file, since it now contains the components needed to create the connect container on the Docker platform.
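A minimal command sketch of that workflow, assuming the Connect container is named connect, the plugin path used earlier, and illustrative archive and image names:

# copy the downloaded connector archive into the running connect container
docker cp debezium-connector-sqlserver-1.6.0-plugin.tar.gz connect:/tmp/
# open a shell inside the container
docker exec -it connect bash
# inside the container: unpack the archive into the plugin load path
tar -xzf /tmp/debezium-connector-sqlserver-1.6.0-plugin.tar.gz -C /usr/share/confluent-hub-components
exit
# commit the modified container to a new image, then stop the cluster
docker commit connect connect-with-debezium
docker-compose down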

You can download the Debezium SQL Server source connector from Confluent.io.

Step 7: Alternatively, you can install the Connector using the Confluent Hub client with the below command.

confluent-hub install debezium/debezium-connector-sqlserver:1.6.0

Then add the corresponding configuration to the docker-compose.yml file, as sketched below.

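A sketch of the kind of change involved, assuming the image committed in Step 6 and the plugin path used earlier; the names are illustrative:

# point the connect service at the image that contains the Debezium plugin
connect:
  image: connect-with-debezium
  environment:
    CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"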

Step 8: Call GET http://localhost:8083/connector-plugins from Postman again to retrieve the connectors that are now installed; the Debezium SQL Server Connector should appear in the list.


B) Retrieve data from Kafka Connect through the SQL Server Connector

Step 1: You first need to enable CDC on the SQL Server Database and on the table created earlier, since the Debezium connector reads change events from SQL Server’s CDC tables.

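A minimal sketch of the CDC-enablement statements, following the database and table names assumed earlier:

-- enable CDC at the database level (names are assumed)
USE TestDB;
GO
EXEC sys.sp_cdc_enable_db;
GO
-- enable CDC for the table to be captured
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Customers',
    @role_name     = NULL;
GO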

Step 2: Use a POST request in Postman to create an instance of the connector.

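A sketch of such a request expressed as a curl command; the hostname, credentials, and topic names below are assumptions for illustration, and the property names follow Debezium 1.6:

# register a Debezium SQL Server connector instance (values are assumed)
curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "sqlserver-connector",
    "config": {
      "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
      "database.hostname": "host.docker.internal",
      "database.port": "1433",
      "database.user": "sa",
      "database.password": "Password!123",
      "database.dbname": "TestDB",
      "database.server.name": "testserver",
      "table.include.list": "dbo.Customers",
      "database.history.kafka.bootstrap.servers": "broker:29092",
      "database.history.kafka.topic": "schema-changes.testdb"
    }
  }'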

Step 3: After sending the POST request, look in the Confluent Control Center for the new Kafka topic. Debezium names the topic after the server, schema, and table, so with the assumed names above the events would land in testserver.dbo.Customers.

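The topic should contain Debezium change events. An abridged sketch of what a snapshot ("read") event for the assumed table might look like:

{
  "payload": {
    "before": null,
    "after": { "id": 1, "name": "Alice", "email": "alice@example.com" },
    "source": { "connector": "sqlserver", "db": "TestDB", "table": "Customers" },
    "op": "r",
    "ts_ms": 1627300000000
  }
}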

Step 4: Now, any changes you make to the Database tables should be reflected in the Kafka topic. Run some statements such as the following.

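An illustrative set of changes against the assumed table; each statement produces a corresponding change event:

-- inserts, updates, and deletes all generate change events
USE TestDB;
GO
INSERT INTO Customers VALUES (2, 'Bob', 'bob@example.com');
UPDATE Customers SET name = 'Alice B' WHERE id = 1;
DELETE FROM Customers WHERE id = 2;
GO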

Step 5: Now, check the Topics page again. You can see the new change events reflected there.


You have successfully installed the Debezium SQL Server Connector and verified its output in Kafka Connect.

Conclusion

In this tutorial, you learned about the Debezium SQL Server Connector along with source and sink connectors in general. The installation, deployment, and monitoring of the source connector were described in detail, and real-time changes were captured through it and streamed to Kafka topics. If you want to export data from various sources like SQL Server into your desired Database/destination, then Hevo Data is the right choice for you!

Visit our Website to Explore Hevo

Hevo Data provides its users with a simpler platform for integrating data from 150+ sources for Analysis. It is a No-code Data Pipeline that can help you combine data from multiple sources. You can use it to transfer data from multiple data sources into your Data Warehouses, Database, or a destination of your choice. It provides a consistent and reliable solution to managing data in real-time, ensuring that you always have Analysis-ready data in your desired destination.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You can also look at the unbeatable pricing that will help you choose the right plan for your business needs.

Share your experience of learning about Debezium SQL Server Connectors! Let us know in the comments section below!

Manjiri Gaikwad
Freelance Technical Content Writer, Hevo Data

Manjiri loves data science and produces insightful content on AI, ML, and data science. She applies her flair for writing to simplify the complexities of data integration and analysis, solving problems faced by data professionals and businesses in the data industry.
