With growing volumes of data, is your SQL Server slowing down on analytical queries? Or are you simply migrating data from MS SQL Server to Redshift? Whatever your use case, transferring data from MS SQL Server to Redshift is a smart move.

This article covers, in detail, the various approaches you can use to load data from SQL Server to Amazon Redshift, including the steps involved in writing custom code for the migration. Towards the end, it also covers the limitations of each approach.

Note: For MS SQL to Redshift migrations, you must ensure compatibility of the transferred SQL Server workloads and optimize them for performance on Redshift.

What is MS SQL Server?

Microsoft SQL Server is a relational database management system (RDBMS) developed by Microsoft. It is designed to store and retrieve data as requested by other software applications, which can run on the same computer or connect to the database server over a network.

Some key features of MS SQL Server:

  1. It is primarily used for online transaction processing (OLTP) workloads, which involve frequent database updates and queries.
  2. It supports a variety of programming languages, including T-SQL (Transact-SQL), .NET languages, Python, R, and more.
  3. It provides features for data warehousing, business intelligence, analytics, and reporting through tools like SQL Server Analysis Services (SSAS), SQL Server Integration Services (SSIS), and SQL Server Reporting Services (SSRS).
  4. It offers high availability and disaster recovery features like failover clustering, database mirroring, and log shipping.
  5. It supports a wide range of data types, including XML, spatial data, and in-memory tables.

What is Amazon Redshift?

Amazon Redshift is a cloud-based data warehouse service offered by Amazon Web Services (AWS). It’s designed to handle massive amounts of data, allowing you to analyze and gain insights from it efficiently. Here’s a breakdown of its key features:

  • Scalability: Redshift can store petabytes of data and scale to meet your needs.
  • Performance: It uses a parallel processing architecture to analyze large datasets quickly.
  • Cost-effective: Redshift offers pay-as-you-go pricing, so you only pay for what you use.
  • Security: Built-in security features keep your data safe.
  • Ease of use: A fully managed service, Redshift requires minimal configuration.

Understanding the Methods to Connect SQL Server to Redshift

A good understanding of the different methods to migrate SQL Server to Redshift can help you make an informed decision about the most suitable choice.

These are the three methods you can implement to set up a connection from SQL Server to Redshift in a seamless fashion:

Method 1: Using Hevo Data to Connect SQL Server to Redshift


Hevo helps you directly transfer data from SQL Server and various other sources to a Data Warehouse such as Redshift, or any destination of your choice, in a completely hassle-free and automated manner. Hevo is fully managed: it automates not only loading data from your desired source but also enriching the data and transforming it into an analysis-ready form, without requiring you to write a single line of code. Its fault-tolerant architecture ensures that the data is handled securely and consistently, with zero data loss.

Sign up here for a 14-Day Free Trial!

Hevo takes care of all your data preprocessing to set up SQL Server to Redshift migration and lets you focus on key business activities, helping you draw much more powerful insights into how to generate more leads, retain customers, and take your business to new heights of profitability. It provides a consistent and reliable solution to manage data in real time and always have analysis-ready data in your desired destination.

Step 1: Configure MS SQL Server as your Source

  • Click PIPELINES in the Navigation Bar.
  • Click + CREATE in the Pipelines List View.
  • In the Select Source Type page, select the SQL Server variant.
  • In the Configure your SQL Server Source page, specify the required connection settings for your SQL Server instance.

Step 2: Select the Replication Mode

Select the replication mode: (a) Full Dump and Load, (b) Incremental Load for append-only data, or (c) Incremental Load for mutable data.

Step 3: Integrate Data into Redshift

  • Click DESTINATIONS in the Navigation Bar.
  • Click + CREATE in the Destinations List View.
  • In the Add Destination page, select Amazon Redshift.
  • In the Configure your Amazon Redshift Destination page, specify the required connection settings for your Redshift cluster.

As can be seen, you are simply required to enter the corresponding credentials to implement this fully automated data pipeline without using any code.

Check out what makes Hevo amazing:

  • Real-Time Data Transfer: Hevo, with its strong integration with 100+ sources, allows you to transfer data quickly and efficiently. This ensures efficient utilization of bandwidth on both ends.
  • Data Transformation: It provides a simple interface to perfect, modify, and enrich the data you want to transfer. 
  • Secure: Hevo has a fault-tolerant architecture that ensures that the data is handled securely and consistently with zero data loss.
  • Tremendous Connector Availability: Hevo houses a large variety of connectors and lets you bring in data from numerous Marketing & SaaS applications and databases, such as Google Analytics 4, Google Firebase, Airflow, HubSpot, Marketo, MongoDB, Oracle, Salesforce, Redshift, and more, in an integrated and analysis-ready form.
  • Simplicity: Using Hevo is easy and intuitive, ensuring that your data is exported in just a few clicks. 
  • Completely Managed Platform: Hevo is fully managed. You need not invest time and effort to maintain or monitor the infrastructure involved in executing codes.
Get Started with Hevo for Free

Method 2: Using Custom ETL Scripts to Connect SQL Server to Redshift


As a pre-requisite to this process, you will need to have the Microsoft BCP command-line utility installed. If you have not installed it yet, you can download it from Microsoft's website.

For demonstration, let us assume that we need to move the ‘orders’ table from the ‘sales’ schema into Redshift. This table is populated with the customer orders that are placed daily.

There are two cases you might consider while transferring data:

  1. Move data into Redshift one time.
  2. Incrementally load data into Redshift (when the data volume is high).

Let us look at both scenarios:

One Time Load

You will need to generate a .txt file of the required SQL Server table using the BCP command, as follows:

Open the command prompt and navigate to the following path to run the BCP command:

C:\Program Files (x86)\Microsoft SQL Server\Client SDK\ODBC\130\Tools\Binn

Run the BCP command to generate an output file of the SQL Server table sales.orders:

bcp "sales.orders" out D:outorders.txt -S "ServerName" -d Demo -U UserName -P Password -c

Note: There might be several transformations required before you load this data into Redshift. Achieving this using custom code can become extremely hard. A tool like Hevo, which provides an easy environment to write transformations, might be the right fit for you. Here are the steps to load the generated file into Redshift:

Step 1: Upload Generated Text File to S3 Bucket

You can upload files from a local machine to AWS in several ways. One simple way is to use the file upload utility of the S3 console, which is the more intuitive alternative. You can also achieve this using the AWS CLI, which provides easy commands to upload files to an S3 bucket from the local machine.

As a pre-requisite, you will need to install and configure the AWS CLI if you have not already done so. You can refer to the AWS user guide to learn more about installing the AWS CLI.

Run the following command to upload the file to S3 from the local machine:

aws s3 cp D:\out\orders.txt s3://s3bucket011/orders.txt

Step 2: Create Table Schema

Run the following query in Redshift to create the target table (creating the sales schema first if it does not exist):

CREATE SCHEMA IF NOT EXISTS sales;

CREATE TABLE sales.orders (
  order_id      INT,
  customer_id   INT,
  order_status  INT,
  order_date    DATE,
  required_date DATE,
  shipped_date  DATE,
  store_id      INT,
  staff_id      INT
);

After running the above query, a table structure will be created within Redshift with no records in it. To check this, run the following query:

SELECT * FROM sales.orders;
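
The table above uses Redshift's default distribution and no sort key. For larger tables, you can often improve performance by declaring them explicitly; the sketch below is an illustrative variant, and the choice of customer_id and order_date as keys is an assumption you should adapt to your own query patterns:

-- Hypothetical variant of the DDL with explicit distribution and sort keys.
-- DISTKEY co-locates rows that join on customer_id; SORTKEY speeds up
-- range filters on order_date.
CREATE TABLE sales.orders (
  order_id      INT,
  customer_id   INT,
  order_status  INT,
  order_date    DATE,
  required_date DATE,
  shipped_date  DATE,
  store_id      INT,
  staff_id      INT
)
DISTKEY (customer_id)
SORTKEY (order_date);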

Step 3: Load the Data from S3 to Redshift Using the Copy Command

COPY dev.sales.orders FROM 's3://s3bucket011/orders.txt'
IAM_ROLE 'Role_ARN' DELIMITER '\t';
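
Depending on how the file was generated and where the bucket lives, COPY may need additional options. The following is a minimal sketch; the REGION value and error threshold shown are assumptions to adapt:

COPY dev.sales.orders FROM 's3://s3bucket011/orders.txt'
IAM_ROLE 'Role_ARN'
DELIMITER '\t'
REGION 'us-east-1'   -- needed only if the bucket is in a different region than the cluster
MAXERROR 10;         -- tolerate up to 10 malformed rows while testing the load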

You will need to confirm whether the data has loaded successfully. You can do that by running the following query:

SELECT COUNT(*) FROM sales.orders;

This should return the total number of records inserted.
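
If the count does not match, or the COPY command fails, Redshift records per-row failures in the STL_LOAD_ERRORS system table. A quick way to inspect recent load errors:

SELECT starttime, filename, line_number, colname, err_reason
FROM stl_load_errors
ORDER BY starttime DESC
LIMIT 10;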


Limitations of using Custom ETL Scripts to Connect SQL Server to Redshift

  • In cases where data needs to be moved once or in batches only, the custom ETL script method works well. This approach becomes extremely tedious if you have to copy data from MS SQL to Redshift in real time.
  • In case you are dealing with huge amounts of data, you will need to perform an incremental load. Incremental load (change data capture) becomes hard, as there are additional steps you need to follow to achieve it; a sketch of the typical pattern follows this list.
  • Transforming data before you load it into Redshift will be extremely hard to achieve.
  • When you write code to extract a subset of data, those scripts often break as the source schema keeps changing or evolving. This can result in data loss.
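
For reference, incremental loads are usually implemented with a high-water mark on the source and a staging-table merge on the target. The sketch below assumes that orders has a usable order_date (or a last-modified timestamp), that the high-water mark is tracked in your own ETL metadata, and that the delta file has already been exported and uploaded to S3; all file names and values are illustrative:

-- On SQL Server: extract only rows newer than the last loaded value
-- (read '2024-01-31' from your ETL metadata store, not hard-coded).
SELECT order_id, customer_id, order_status, order_date,
       required_date, shipped_date, store_id, staff_id
FROM sales.orders
WHERE order_date > '2024-01-31';

-- On Redshift: load the delta into a staging table, then merge it in.
CREATE TEMP TABLE orders_staging (LIKE sales.orders);

COPY orders_staging FROM 's3://s3bucket011/orders_delta.txt'
IAM_ROLE 'Role_ARN' DELIMITER '\t';

BEGIN;
DELETE FROM sales.orders
USING orders_staging
WHERE sales.orders.order_id = orders_staging.order_id;
INSERT INTO sales.orders SELECT * FROM orders_staging;
COMMIT;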

The process mentioned above is fragile, error-prone, and often hard to implement and maintain. This can impact the consistency and availability of your data in Amazon Redshift.

Download the Cheatsheet on How to Set Up High-performance ETL to Redshift
Learn the best practices and considerations for setting up high-performance ETL to Redshift

Method 3: Using AWS Database Migration Service (DMS)


AWS Database Migration Service (DMS) offers a seamless pathway for transferring data between databases, making it an ideal choice for moving data from SQL Server to Redshift. This fully managed service is designed to minimize downtime and can handle large-scale migrations with ease.

For those looking to implement SQL Server CDC (Change Data Capture) for real-time data replication, we provide a comprehensive guide that delves into the specifics of setting up and managing CDC within the context of AWS DMS migrations.

Detailed Steps for Migration:

  1. Setting Up a Replication Instance: The first step involves creating a replication instance within AWS DMS. This instance acts as the intermediary, facilitating the transfer of data by reading from SQL Server, transforming the data as needed, and loading it into Redshift.
  2. Creating Source and Target Endpoints: After the replication instance is operational, you’ll need to define the source and target endpoints. These endpoints act as the connection points for your SQL Server source database and your Redshift target database.
  3. Configuring Replication Settings: AWS DMS offers a variety of settings to customize the replication process. These settings are crucial for tailoring the migration to fit the unique needs of your databases and ensuring a smooth transition.
  4. Initiating the Replication Process: With the replication instance and endpoints in place, and settings configured, you can begin the replication process. AWS DMS will start the data transfer, moving your information from SQL Server to Redshift.
  5. Monitoring the Migration: It’s essential to keep an eye on the migration as it progresses. AWS DMS provides tools like CloudWatch logs and metrics to help you track the process and address any issues promptly.
  6. Verifying Data Integrity: Once the migration concludes, it’s important to verify the integrity of the data. Conducting thorough testing ensures that all data has been transferred correctly and is functioning as expected within Redshift; a simple row-count check is sketched after this list.
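
As a minimal integrity check for step 6, you can compare row counts between the source and the target for each migrated table. The query below is a sketch using the orders example from earlier; the counts on both sides should match:

-- Run on SQL Server (source):
SELECT COUNT(*) AS source_rows FROM sales.orders;

-- Run on Redshift (target):
SELECT COUNT(*) AS target_rows FROM sales.orders;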

The duration of the migration depends on the size of the dataset but generally ranges from a few hours to a few days. AWS DMS simplifies the transfer of database objects and data throughout the SQL Server to Redshift migration process.

For a step-by-step guide, please refer to the official AWS documentation.

Limitations of Using DMS:

  • Not all SQL Server features are supported by DMS. Notably, server-level features such as SQL Server Agent jobs, FILESTREAM, and Full-Text Search are not migrated by the service; only supported schema objects and data are transferred.
  • The initial setup and configuration of DMS can be complex, especially for migrations that involve multiple source and target endpoints.

Conclusion

That’s it! You are all set. Whichever method you choose, your data can now flow from MS SQL Server to Redshift; with Hevo in particular, data is fetched incrementally and uploaded seamlessly via a real-time data pipeline.

Extracting complex data from a diverse set of data sources can be a challenging task, and this is where Hevo saves the day!

Visit our Website to Explore Hevo

Hevo offers a faster way to move data from databases and SaaS applications, such as SQL Server, into a data warehouse like Redshift, where it can be visualized in a BI tool. Hevo is fully automated and hence does not require you to write any code.

Sign Up for a 14-day free trial to try Hevo for free. You can also have a look at the unbeatable pricing that will help you choose the right plan for your business needs.

Tell us in the comments about data migration from SQL Server to Redshift!

Rushik Shah
Freelance Technical Content Writer, Hevo Data

Rushik helps businesses solve problems and stay ahead of the competition through his content on data, software architecture, and integration.
