SQL Server Transactional Replication: 3 Easy Steps




Are you new to SQL Server, or interested in using its transactional replication feature to make copies of your database objects? If so, you’ve come to the right blog! This post introduces SQL Server transactional replication, walks through the steps to set it up, and compares it with related change-delivery features.

What is Transactional Replication?

Replication generally refers to copying and distributing database objects from one database to another in a synchronized way so as to maintain data consistency. Replication can distribute data to different databases and servers over the internet, wide area networks, local area networks, etc.

SQL Server supports several types of replication, including merge replication, snapshot replication, peer-to-peer replication, bidirectional replication, updatable subscriptions, and transactional replication.

In transactional replication, changes that occur in the publisher are delivered to the subscriber as they happen (in near real-time). These changes are also applied to the subscriber in the same order as they occurred in the publisher.

Replication borrows its terminology from the publishing industry. These terms are widely used to explain replication concepts, and understanding them makes transactional replication much easier to follow. They are:

  1. Publisher: The database instance that makes data available to other locations. 
  2. Subscriber: The database instance that receives the replicated data. 
  3. Distributor: The database instance that stores replication-specific data (metadata, history, and transactions) associated with one or more publishers.
  4. Article: This identifies a database object that is in a publication.
  5. Publication: A collection of articles from a database.
  6. Subscription: A request for a copy of the publication which is delivered to a subscriber. 

Simplify Data Analysis with Hevo’s No-code Data Pipeline

Hevo Data, a No-code Data Pipeline, helps you load data from any data source such as Twilio, databases, SaaS applications, cloud storage, SDKs, and streaming services, and simplifies the ETL process. It supports 100+ data sources (including 30+ free data sources) like Asana, and setup is a 3-step process: select the data source, provide valid credentials, and choose the destination. Hevo not only loads the data onto the desired Data Warehouse/destination but also enriches the data and transforms it into an analysis-ready form without your having to write a single line of code.


Its completely automated pipeline delivers data in real-time without any loss from source to destination. Its fault-tolerant and scalable architecture ensures that the data is handled in a secure, consistent manner with zero data loss, and supports different forms of data. The solutions provided are consistent and work with different BI tools as well.

Check out why Hevo is the Best:

  • Secure: Hevo has a fault-tolerant architecture that ensures that the data is handled in a secure, consistent manner with zero data loss.
  • Schema Management: Hevo takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the destination schema.
  • Minimal Learning: Hevo, with its simple and interactive UI, is extremely simple for new customers to work on and perform operations.
  • Hevo Is Built To Scale: As the number of sources and the volume of your data grows, Hevo scales horizontally, handling millions of records per minute with very little latency.
  • Incremental Data Load: Hevo allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
  • Live Monitoring: Hevo allows you to monitor the data flow and check where your data is at a particular point in time.

Transactional Replication Use Cases

The following are some use cases that necessitate the implementation of transactional replication:

  • You want subscribers to get incremental changes as they occur.
  • Your application needs to access intermediate data states. For example, if a row changes multiple times, the application will respond to each change and will not just respond to the last change.
  • The publisher has a very high volume of update, insert and delete activity.
  • Either the publisher or the subscriber is a non-SQL Server database.
  • You require low latency between changes made at the publisher and those reflected at the subscriber.

Steps for Transactional Replication


The following are the 3 high-level steps to set up SQL Server transactional replication:

1. Distributor Configuration

  • Open SSMS and connect to the SQL Server instance.
  • Navigate to Object Explorer, right-click Replication, and click Configure Distribution.
  • The Configure Distribution Wizard will appear. Click Next. Leave the default settings and click Next again when you are asked to choose the server that acts as the distributor.
  • Choose the storage location for your replication snapshot. This can be a folder on the server or a network share.
  • Name your distribution database and specify locations to store the data and log files.
  • Under Publishers, select the replication publishers that will access the distributor and select Next.
  • Under Wizard Actions, select Configure distribution and click Next.
  • Finally, review the settings and click Finish.
  • After the configuration is complete, you will get a success message.
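The same distributor setup can be scripted with the documented replication stored procedures instead of the wizard. The sketch below assumes the instance acts as its own distributor; the password, folder paths, and share name are placeholders you would replace with your own values:

```sql
-- Run on the instance that will act as its own distributor.
-- Password, folders, and share name below are placeholders.
USE master;
GO
EXEC sp_adddistributor
    @distributor = @@SERVERNAME,           -- this server is the distributor
    @password    = N'StrongP@ssw0rd!';     -- distributor_admin password
GO
EXEC sp_adddistributiondb
    @database    = N'distribution',        -- distribution database name
    @data_folder = N'C:\SQLData',          -- data file location
    @log_folder  = N'C:\SQLLogs';          -- log file location
GO
EXEC sp_adddistpublisher
    @publisher         = @@SERVERNAME,     -- publisher allowed to use this distributor
    @distribution_db   = N'distribution',
    @working_directory = N'\\MyServer\ReplSnapshots';  -- snapshot folder (network share)
GO
```

This mirrors the three wizard pages: choosing the distributor, naming the distribution database, and picking the snapshot folder.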

2. Publication Configuration

  • Create a publication and select the database you intend to be your publication database.
  • Specify the “Transactional publication” option for the publication type.
  • Select the required tables and filter rows as needed.
  • Run the Snapshot Agent to generate the initial snapshot and send the data to the distributor.
  • Configure security for the Snapshot Agent.
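The publication step can likewise be scripted. This is a minimal sketch; the database, publication, and table names (MyPublicationDB, MyTransPub, dbo.Orders) are hypothetical:

```sql
-- Enable the database for transactional publishing (names are placeholders).
EXEC sp_replicationdboption
    @dbname  = N'MyPublicationDB',
    @optname = N'publish',
    @value   = N'true';
GO
USE MyPublicationDB;
GO
-- Create a continuous transactional publication.
EXEC sp_addpublication
    @publication = N'MyTransPub',
    @repl_freq   = N'continuous',
    @status      = N'active';
GO
-- Create the Snapshot Agent job for the publication (Windows authentication).
EXEC sp_addpublication_snapshot
    @publication             = N'MyTransPub',
    @publisher_security_mode = 1;
GO
-- Add a table as an article of the publication.
EXEC sp_addarticle
    @publication   = N'MyTransPub',
    @article       = N'Orders',
    @source_owner  = N'dbo',
    @source_object = N'Orders';
GO
```

Row filtering, shown in the wizard step above, would be added per article with `sp_articlefilter` if needed.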

3. Subscription Configuration

  • Expand the server node and expand the Replication folder.
  • Right-click on your publication and select New Subscriptions.
  • Specify whether it is a pull or push subscription.
  • Select your subscriber server and create (or choose) the subscription database.
  • Configure security for the Distribution Agent.
  • Configure the synchronization schedule and initialize the subscription.
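The subscription step can be scripted as well. The sketch below creates a push subscription; the publication, subscriber server, and database names are hypothetical:

```sql
-- Create a push subscription from the publisher (names are placeholders).
USE MyPublicationDB;
GO
EXEC sp_addsubscription
    @publication       = N'MyTransPub',
    @subscriber        = N'SubscriberServer',
    @destination_db    = N'MySubscriptionDB',
    @subscription_type = N'push',
    @sync_type         = N'automatic';    -- initialize the subscriber from the snapshot
GO
-- Create the Distribution Agent job that delivers changes (Windows authentication).
EXEC sp_addpushsubscription_agent
    @publication              = N'MyTransPub',
    @subscriber               = N'SubscriberServer',
    @subscriber_db            = N'MySubscriptionDB',
    @subscriber_security_mode = 1;
GO
```

For a pull subscription, you would instead run `sp_addpullsubscription` and `sp_addpullsubscription_agent` on the subscriber.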

After you have followed these steps, you can right-click the publication and view the Snapshot Agent status to confirm that it has completed. Further details are available in Microsoft’s SQL Server replication documentation.

Transactional Replication vs Change Tracking

Change Tracking, like SQL Server Transactional Replication, is available in the Standard Edition. It is generally recommended to enable snapshot isolation on a database that uses Change Tracking and to use it explicitly when querying the Change Tracking tables. This ensures that you don’t miss data or block the Change Tracking cleanup process.

Apart from this, Change Tracking also incurs additional overhead. For every row that is modified in a tracked table, a row is added to an internal change-tracking table, and for every transaction that commits, a row is inserted into an internal transaction table. Transactional Replication is therefore often considered the better option: it gives you more freedom with isolation levels, requires much less custom code, and performs no additional inserts into internal tables.
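For comparison, enabling Change Tracking with snapshot isolation looks like this; the database and table names (MyDB, dbo.Orders) are hypothetical, and the tracked table must have a primary key:

```sql
-- Enable Change Tracking and snapshot isolation (names are placeholders).
ALTER DATABASE MyDB SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
ALTER DATABASE MyDB SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
USE MyDB;
GO
ALTER TABLE dbo.Orders ENABLE CHANGE_TRACKING;
GO
-- Poll for changes under snapshot isolation, so the cleanup
-- process cannot remove version rows mid-read.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
DECLARE @last_sync_version bigint = 0;   -- persist this between polls
SELECT ct.OrderID, ct.SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync_version) AS ct;
SELECT CHANGE_TRACKING_CURRENT_VERSION(); -- save as the next @last_sync_version
COMMIT;
```

Note that `CHANGETABLE` returns only the net change per key, not the intermediate states that transactional replication delivers.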

Transactional Replication vs Change Data Capture

Change Data Capture allows you to write your own polling. If you poll every hour, say, you only need to pick up the latest version of each row, which makes it very lightweight since you don’t have to carry forward every command. When you enable CDC for a SQL Server table, SQL Server starts tracking inserts, updates, and deletes in the transaction log, much as it does for Transactional Replication. The changes are then read from the log and inserted into “change tables” associated with the tracked tables. However, CDC’s whitepaper suggests avoiding scenarios where rows can be updated immediately after insert.
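Enabling CDC and polling its change tables can be sketched as follows; the database, schema, and table names are hypothetical, and the net-changes function is generated only when the table has a primary key:

```sql
-- Enable CDC at the database and table level (names are placeholders).
USE MyDB;
GO
EXEC sys.sp_cdc_enable_db;
GO
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL;        -- no gating role for reading changes
GO
-- Poll: fetch only the net change per row between two LSNs.
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn(N'dbo_Orders');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();
SELECT *
FROM cdc.fn_cdc_get_net_changes_dbo_Orders(@from_lsn, @to_lsn, N'all');
```

In a real polling loop you would persist the high-water-mark LSN after each pass and use it as the next `@from_lsn`.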

This comparison highlights a key strength of Transactional Replication: even though its commands can get backed up in the distributor, distribution can be offloaded to its own server, independent of the publisher. Given CDC’s extra logging into internal change tables and its limitations around schema changes, SQL Server Transactional Replication beats Change Data Capture.

Transactional Replication vs Availability Groups

SQL Server 2012 AlwaysOn Availability Groups let you scale out reads across multiple SQL Server instances. You can easily set up readable secondary copies of a database, each with its own memory, independent storage, and server resources. The secondaries can run in either asynchronous or synchronous commit mode. Availability Groups also let you offload backups to secondaries, giving them a clear edge over SQL Server Transactional Replication. They are the go-to option if you are looking for high availability: even in the Standard Edition, a simple two-node failover cluster beats SQL Server Transactional Replication.

In terms of scaling reads, however, SQL Availability Groups still have a lot of scope for improvement. SQL Server introduced a feature that creates temporary statistics for read-only databases, which helps, but it still has a few limitations.
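For reference, a two-replica Availability Group with a readable secondary can be sketched in T-SQL. This assumes a Windows failover cluster, database mirroring endpoints on port 5022, and a full backup of the database already exist; all server and domain names are hypothetical:

```sql
-- Minimal sketch: WSFC cluster, endpoints, and a full backup of MyDB
-- are assumed to exist already. All names are placeholders.
CREATE AVAILABILITY GROUP MyAG
WITH (AUTOMATED_BACKUP_PREFERENCE = SECONDARY)  -- offload backups to secondaries
FOR DATABASE MyDB
REPLICA ON
    N'Node1' WITH (
        ENDPOINT_URL      = N'TCP://Node1.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC),
    N'Node2' WITH (
        ENDPOINT_URL      = N'TCP://Node2.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));  -- readable secondary
```

The secondary replica would then join the group with `ALTER AVAILABILITY GROUP MyAG JOIN;` after restoring the database `WITH NORECOVERY`.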

Limitations of Transactional Replication

A key takeaway from this article is that Transactional Replication is not a good backup mechanism if a database gets corrupted or a server fails. It should be regarded not as a disaster recovery or high availability feature but as a programmability feature for distributing data from a few critical tables on one instance out to various subscribers. Transactional Replication does not offer support for the following:

  • Connection String Enhancements
  • Failback or Failover
  • Load Balancing
  • Protection of the Replicated Data
  • Automatic Page Repair from Corruption

Requirements for Supporting Transactional Replication

If you want to leverage Transactional Replication, you need to make sure that you have the right staff in place. To support mission-critical data, you need to have the following requirements in place:

  • A pre-production environment
  • Root cause analysis processes
  • An established on-call rotation
  • Dedicated SQL Server DBAs
  • A comprehensive incident response system
  • Good change management processes

Apart from the aforementioned requirements, you also need the desire and ability to extend and refine production monitoring.


Transactional replication is a powerful tool for SQL Server. Following these steps will help ensure that you are able to copy and synchronize your data to a target in near real-time.


Hevo is a No-code Data Pipeline. It can migrate the data from SQL Server to your desired data warehouse in minutes. It offers hassle-free data integration from 100+ data sources.

SIGN UP for a 14-day free trial and see the difference!

Leave us a comment about your experience working with SQL Server transactional replication. Get in touch with us in the comments section below.

Rashid Y
Freelance Technical Content Writer, Hevo Data

Rashid is passionate about freelance writing within the data industry, and delivers informative and engaging content on data science by incorporating his problem-solving skills.
