The Debezium SQL Server Connector is one of the most widely used connectors for tracking changes in Databases. Since SQL Server is widely used, organizations can leverage the Debezium SQL Server Connector seamlessly to build superior real-time Applications. To identify changes quickly, the connector takes an initial snapshot of the Database and then streams events – row-level changes – to Kafka topics. Applications can then react to every real-time change in the Source Databases according to their use cases.

In this article, you will learn about the Debezium SQL Server Connector and how to implement it to capture real-time changes from your database.

Prerequisites

  • Understanding of Event Streams

What are Debezium Connectors?


Debezium Connectors are tools that follow the Change Data Capture (CDC) pattern to keep track of real-time changes in Databases. They are event streaming connectors that turn Database changes into events and deliver them to different Destinations. Before Debezium, Database Administrators recorded changes and saved them to an additional source file, but this approach could not keep up with real-time changes as Digital Applications grew. Consequently, several companies today, including Amazon and Flipkart, use Debezium Connectors to simplify end-to-end business processes.
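For illustration, a single row INSERT captured by Debezium is delivered as an event whose payload looks roughly like the simplified sketch below (the field values are hypothetical):

{
  "before": null,
  "after": { "id": 1001, "first_name": "Sally", "last_name": "Thomas", "email": "sally.thomas@acme.com" },
  "source": { "connector": "sqlserver", "db": "testDB", "table": "customers" },
  "op": "c",
  "ts_ms": 1559729468470
}

The "before" and "after" fields carry the row state, and "op" indicates the operation (c for create, u for update, d for delete).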

You can pick from a wide range of Debezium Connectors for MySQL, SQL Server, PostgreSQL, Oracle, and more, and build robust solutions that eliminate manual dependencies.

Accomplish Real-Time Data Replication with CDC!

CDC (Change Data Capture) is essential for real-time data replication and synchronization. Try Hevo’s no-code platform and see how Hevo has helped customers across 45+ countries by offering:

  1. Real-time data replication with ease. 
  2. CDC Query Mode for capturing both inserts and updates. 
  3. 150+ connectors (including 60+ free sources).

Don’t just take our word for it—listen to customers, such as Thoughtspot, Postman, and many more, to see why we’re rated 4.3/5 on G2.

Get Started with Hevo for Free

What is SQL Server?


SQL Server is a Relational Database Management System developed by Microsoft that uses SQL (Structured Query Language) to manage data. A Relational Database stores data in tables of rows and columns, so the data follows a defined structure or format. With SQL, you can carry out operations like update, delete, and insert to manage a colossal amount of information quickly and effectively. One of the best advantages of SQL is that you can write a single query to access records from multiple tables. SQL Server also works with programming languages like Java, C++, Ruby, Visual Basic, and more, allowing it to interoperate with a wide range of applications.
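For example, a single query can pull related records from multiple tables at once (the table and column names below are illustrative):

SELECT c.first_name, c.last_name, o.order_date
FROM customers c
JOIN orders o ON o.customer_id = c.id
WHERE o.order_date >= '2024-01-01';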

What are the Steps to Setup Debezium SQL Server Connector?

Step 1: Install the SQL Server Connector

  • Install the latest Debezium SQL Server connector plugin using the Confluent CLI:
confluent connect plugin install debezium/debezium-connector-sqlserver:latest
  • Use the Confluent CLI to restart Connect.
confluent local services connect stop && confluent local services connect start
Using CONFLUENT_CURRENT: /Users/username/Sandbox/confluent-snapshots/var/confluent.NuZHxXfq
Starting Zookeeper
Zookeeper is [UP]
Starting Kafka
Kafka is [UP]
Starting Schema Registry
Schema Registry is [UP]
Starting Kafka REST
Kafka REST is [UP]
Starting Connect
Connect is [UP]
  • Check if the SQL Server plugin has been correctly installed.
curl -sS localhost:8083/connector-plugins | jq '.[].class' | grep SqlServer
"io.debezium.connector.sqlserver.SqlServerConnector"

Step 2: Create Test Data and Enable Change Data Capture

  • To enable CDC on the monitored database, use the following SQL command:
USE MyDB
GO
EXEC sys.sp_cdc_enable_db
GO
  • Use the following command to enable CDC on each table:
USE MyDB
GO
EXEC sys.sp_cdc_enable_table
  @source_schema = N'dbo',
  @source_name = N'MyTable',
  @role_name = N'MyRole',
  @filegroup_name = N'MyDB_CT',
  @supports_net_changes = 1
GO
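  • Optionally, confirm that CDC is active by querying SQL Server's catalog views (MyDB and MyTable here match the names used above):
-- Database-level check: is_cdc_enabled should be 1
SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'MyDB';
-- Table-level check: is_tracked_by_cdc should be 1
USE MyDB
GO
SELECT name, is_tracked_by_cdc FROM sys.tables WHERE name = 'MyTable';
GO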
  • Create a .sql file (for example, C:\inventory.sql, which is executed in the next step) containing the following statements:
-- Create the test database
CREATE DATABASE testDB;
GO
USE testDB;
EXEC sys.sp_cdc_enable_db;
-- Create some customers ...
CREATE TABLE customers (
  id INTEGER IDENTITY(1001,1) NOT NULL PRIMARY KEY,
  first_name VARCHAR(255) NOT NULL,
  last_name VARCHAR(255) NOT NULL,
  email VARCHAR(255) NOT NULL UNIQUE
);
INSERT INTO customers(first_name,last_name,email)
  VALUES ('Sally','Thomas','sally.thomas@acme.com');
INSERT INTO customers(first_name,last_name,email)
  VALUES ('George','Bailey','gbailey@foobar.com');
INSERT INTO customers(first_name,last_name,email)
  VALUES ('Edward','Walker','ed@walker.com');
INSERT INTO customers(first_name,last_name,email)
  VALUES ('Anne','Kretchmar','annek@noanswer.org');
EXEC sys.sp_cdc_enable_table
  @source_schema = 'dbo',
  @source_name = 'customers',
  @role_name = NULL,
  @supports_net_changes = 0;
GO
  • Execute this file by running:
sqlcmd -S myServer\instanceName -i C:\inventory.sql

Step 3: Start the Debezium SQL Server connector

There are two ways to do this, depending on whether you run Kafka Connect in standalone or distributed mode:

  1. Standalone mode

Create the file register-sqlserver.properties to store the following connector configuration.

name=inventory-connector
connector.class=io.debezium.connector.sqlserver.SqlServerConnector
tasks.max=1
database.server.name=server1
database.hostname=localhost
database.port=1433
database.user=sa
database.password=Password!
database.dbname=testDB
database.history.kafka.bootstrap.servers=localhost:9092
database.history.kafka.topic=schema-changes.inventory

Run the following command to start the connector:

./bin/connect-standalone ./etc/kafka/connect-standalone.properties ./etc/register-sqlserver.properties
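Here, connect-standalone.properties is the Kafka Connect worker configuration. A minimal sketch for a local setup might look like the following (the file paths and plugin.path entries are assumptions that depend on your installation):

bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
offset.storage.file.filename=/tmp/connect.offsets
plugin.path=/usr/share/java,/usr/share/confluent-hub-components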
  2. Distributed mode

Create the file register-sqlserver.json to store the following connector configuration:

{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "tasks.max": "1",
    "database.server.name": "server1",
    "database.hostname": "localhost",
    "database.port": "1433",
    "database.user": "sa",
    "database.password": "Password!",
    "database.dbname": "testDB",
    "database.history.kafka.bootstrap.servers": "localhost:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}

Register the connector by posting its configuration to the Kafka Connect REST API:

curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" http://localhost:8083/connectors/ -d @register-sqlserver.json
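Once the POST succeeds, you can verify that the connector and its task are running through the Kafka Connect REST API (the connector name matches the JSON configuration above):

curl -s http://localhost:8083/connectors/inventory-connector/status | jq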

Step 4: Start your Kafka consumer

In a new terminal, run the command:

confluent local services kafka consume server1.dbo.customers --from-beginning

Then, to add or modify data, you can run SQL queries, and the changes will be reflected in your consumer terminal.
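For example, running the following statements against the test database (the values are illustrative) should produce a create event and an update event in the consumer terminal:

USE testDB;
INSERT INTO customers(first_name,last_name,email)
  VALUES ('John','Doe','john.doe@example.com');
UPDATE customers SET email = 'jdoe@example.com' WHERE last_name = 'Doe';
GO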

Conclusion

In this tutorial, the Debezium SQL Server source connector was explained, including how to install it, enable Change Data Capture on a database and its tables, and deploy the connector in standalone and distributed modes. You also saw how real-time changes are captured by the connector and consumed from Kafka. If you want to export data from various sources like SQL Server into your desired Database/destination, then Hevo Data is the right choice for you!

Sign up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You can also look at the unbeatable pricing that will help you choose the right plan for your business needs.

Frequently Asked Questions

1. Does Debezium support SQL Server?

Yes, Debezium supports SQL Server for real-time change data capture (CDC).

2. What is a Debezium server?

Debezium Server is a service that enables streaming change data capture events from databases to external systems, like Kafka or cloud services, without needing Kafka Connect.

3. What is the purpose of Debezium?

The purpose of Debezium is to provide real-time change data capture (CDC), allowing you to stream changes from databases to other systems, enabling event-driven architectures and keeping systems synchronized.

Manjiri Gaikwad
Technical Content Writer, Hevo Data

Manjiri is a proficient technical writer and a data science enthusiast. She holds an M.Tech degree and leverages the knowledge acquired through that to write insightful content on AI, ML, and data engineering concepts. She enjoys breaking down the complex topics of data integration and other challenges in data engineering to help data professionals solve their everyday problems.