Key Takeaways

Integrate SQL Server with Snowflake using one of four methods:

Method 1: Hevo Data
No-code setup with real-time sync, auto schema mapping, and built-in transformations.

Method 2: Use SnowSQL
Command-line tool to securely load data from SQL Server to Snowflake.

Method 3: Custom ETL Scripts
Use custom code to extract, transform, and load data into Snowflake.

Method 4: Using Snowpipe
Automate continuous data loading from SQL Server into Snowflake via Amazon S3.

Migrating from SQL Server to Snowflake can be challenging, especially for teams working with complex data or older systems. Choosing the right method, handling schema changes, and minimizing downtime are common hurdles during the process. 

However, Snowflake makes the move worthwhile. Its cloud-native architecture separates storage and compute, allowing it to scale quickly and run multiple workloads without slowing down. As a result, teams get better performance with less effort.

In this guide, we’ll walk you through four reliable ways to migrate your data. Whether you prefer a no-code tool like Hevo or a more hands-on process, you’ll find an approach that fits your team’s needs.

Supercharge Your SQL Server to Snowflake Migration with Hevo

Migrating your data from SQL Server to Snowflake doesn’t have to be complex. Relax and go for a seamless migration using Hevo’s no-code platform. With Hevo, you can:

  1. Effortlessly extract data from SQL Server and 150+ other connectors.
  2. Tailor your data to Snowflake’s needs with features like drag-and-drop transformations and custom Python scripts.
  3. Achieve lightning-fast data loading into Snowflake, making your data analysis-ready.

Try Hevo to see why customers like ThoughtSpot and Harmoney have upgraded to a more powerful data and analytics stack!

Get Started with Hevo for Free

What are the Methods to Connect SQL Server to Snowflake?

The following four methods can be used to transfer data from Microsoft SQL Server to Snowflake:

Method 1: Using Hevo Data to Connect SQL Server to Snowflake

The following steps are required to connect Microsoft SQL Server to Snowflake using Hevo’s Data Pipeline:

Step 1: Connect to your Microsoft SQL Server source.

  • Click PIPELINES in the Navigation Bar.
  • Click + CREATE in the Pipelines List View.
  • Select SQL Server as your source.
  • In the Configure your SQL Server Source page, specify your connection settings, such as the database host, port, user, password, and database name.

You can read more about using SQL Server as a source connector for Hevo.

Step 2: Configure your Snowflake Data Warehouse as Destination

  • Click DESTINATIONS in the Navigation Bar.
  • Click + CREATE in the Destinations List View.
  • In the Add Destination page, select Snowflake as the Destination type.
  • In the Configure your Snowflake Warehouse page, specify your warehouse settings, such as the account URL, warehouse name, database user, password, database name, and schema.

This is how simple it can be to load data from SQL Server to Snowflake.

Easily move your data from SQL Server to Snowflake to enhance your analytics capabilities. With Hevo’s intuitive pipeline setup, data flows in real time. Check out our 1-minute demo below to see the seamless integration in action!

Method 2: Using SnowSQL

This method involves exporting CSVs from SQL Server and loading them into Snowflake via the CLI.

Step 1: Export Data from SQL Server as CSV

Use SQL Server Management Studio (SSMS) to export your table to a CSV file:

  1. Install and launch SQL Server Management Studio.
  2. Connect to your SQL Server instance.
  3. In Object Explorer, right-click your target database → Tasks → Export Data.
  4. The SQL Server Import and Export Wizard will open.
  5. Choose SQL Server Native Client 11.0 as the source and select your database.
  6. Under Authentication, choose Windows Authentication.
  7. Click Next, then choose Flat File Destination as the output.
  8. Browse to select the destination CSV file.
  9. Click Next, then choose tables or views to export.
  10. Configure column mapping if needed.
  11. Click Finish to export and verify the success message.
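
If you’d rather script the export than click through the wizard, a short Python script using pyodbc can produce the same CSV. The sketch below is a minimal example; the server, database, table name (dbo.orders), and output file are placeholders you’d replace with your own.

# Minimal sketch: export a SQL Server table to CSV with pyodbc.
# Connection details, the table name, and the output file are placeholders.
import csv
import pyodbc

# Windows Authentication, mirroring the wizard settings above
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=my_database;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute("SELECT * FROM dbo.orders")  # replace with your table or view

with open("orders.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row
    writer.writerows(cur.fetchall())

cur.close()
conn.close()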

Step 2: Upload the CSV File to Amazon S3

Once exported, the CSV file must be uploaded to an Amazon S3 bucket.

A. Create a New S3 Bucket and Folder

  1. Go to the AWS S3 Console.
  2. Click Create Bucket, enter a unique name, and select your preferred region.
  3. After the bucket is created, click its name to open it.
  4. Click Actions → Create Folder and give it a name (e.g., sql-exports/).

B. Upload the CSV File to Your S3 Bucket

  1. Navigate into the folder you just created.
  2. Click Upload → select Add Files.
  3. In the dialog box that appears, choose the CSV file you exported from SQL Server.
  4. Click Open, then click Start Upload to complete the process.

✅ Once the file is uploaded, it’s ready to be accessed from Snowflake.
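
If you prefer to script the upload (for instance, to schedule it later), boto3 can do it in a few lines. This is a minimal sketch; the bucket name, folder, and file name are placeholders, and it assumes your AWS credentials are already configured locally.

# Minimal sketch: upload the exported CSV to S3 with boto3.
# Bucket, key prefix, and file name are placeholders; AWS credentials are
# assumed to be configured already (environment variables, ~/.aws/credentials, etc.).
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="orders.csv",          # local CSV exported from SQL Server
    Bucket="your-bucket",
    Key="sql-exports/orders.csv",   # folder created in the previous step
)
print("Upload complete")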


Step 3: Upload Data to Snowflake from S3

Now that your CSV file is stored in Amazon S3, you’ll configure Snowflake to access it and load the data using the COPY INTO command. 

This involves setting up permissions and integrations between Snowflake and AWS. 

Step 3.1: Set Permissions for Your S3 Bucket

You need to grant Snowflake permission to read from and write to your S3 bucket.

Actions to allow:

  • s3:PutObject
  • s3:GetObject
  • s3:GetObjectVersion
  • s3:DeleteObject
  • s3:DeleteObjectVersion
  • s3:ListBucket

Sample policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::bucket_name"]
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObjectVersion",
        "s3:DeleteObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::bucket_name/*"
    }
  ]
}

Step 3.2: Create an AWS IAM Role

Create a new IAM role in AWS and copy its ARN from the role summary page; you’ll need it when setting up the Snowflake integration.
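
If you’d rather script this step, the sketch below uses boto3 to create the role with a temporary trust policy (which you’ll replace with Snowflake’s details in Step 3.5) and prints the ARN you need. The role name, account ID, and permissions policy ARN are placeholders, not values from this guide.

# Minimal sketch: create the IAM role with boto3 and print its ARN.
# The role name, account ID, and policy ARN are placeholders. The trust
# policy here is temporary (it trusts your own account) and gets replaced
# with Snowflake's IAM user details in Step 3.5.
import json
import boto3

iam = boto3.client("iam")

temporary_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::<your_account_id>:root"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="snowflake_s3_role",
    AssumeRolePolicyDocument=json.dumps(temporary_trust_policy),
)

# Attach the S3 access policy from Step 3.1 (created separately in IAM)
iam.attach_role_policy(
    RoleName="snowflake_s3_role",
    PolicyArn="arn:aws:iam::<your_account_id>:policy/snowflake_s3_access",
)

print(role["Role"]["Arn"])  # use this ARN in the storage integration below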

Step 3.3: Create a Storage Integration in Snowflake

Run the following SQL in Snowflake to create a storage integration:

CREATE STORAGE INTEGRATION my_s3_integration
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = S3
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = '<your_iam_role_arn>'
  STORAGE_ALLOWED_LOCATIONS = ('s3://your-bucket/path/');

Step 3.4: Retrieve Snowflake’s IAM User Details

Run:

DESC INTEGRATION my_s3_integration;

Note down:

  • STORAGE_AWS_IAM_USER_ARN
  • STORAGE_AWS_EXTERNAL_ID

These are required to update the trust policy in AWS.

Step 3.5: Update AWS IAM Role’s Trust Policy

In AWS Console:

  1. Go to IAM → Roles and open your IAM role.
  2. Click Trust relationships → Edit trust policy.
  3. Use the following JSON, replacing placeholders:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<STORAGE_AWS_IAM_USER_ARN>"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "<STORAGE_AWS_EXTERNAL_ID>"
        }
      }
    }
  ]
}
  4. Click Update Trust Policy.
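
If you created the role programmatically, you can apply the updated trust policy the same way. In this sketch, trust_policy is simply the JSON above with the two placeholders filled in from DESC INTEGRATION, and snowflake_s3_role is the hypothetical role name used earlier.

# Minimal sketch: apply the updated trust policy programmatically.
# trust_policy is the JSON above, with the Snowflake IAM user ARN and
# external ID from DESC INTEGRATION substituted for the placeholders.
import json
import boto3

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "<STORAGE_AWS_IAM_USER_ARN>"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "<STORAGE_AWS_EXTERNAL_ID>"}},
    }],
}

boto3.client("iam").update_assume_role_policy(
    RoleName="snowflake_s3_role",        # hypothetical role name from Step 3.2
    PolicyDocument=json.dumps(trust_policy),
)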

Step 3.6: Create an External Stage in Snowflake

Use the integration and file path to create a stage:

USE SCHEMA mydb.public;

CREATE STAGE my_s3_stage
  STORAGE_INTEGRATION = my_s3_integration
  URL = 's3://your-bucket/path/'
  FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = ',' SKIP_HEADER = 1);

Step 3.7: Load Data Using COPY INTO

Once the stage is ready, run the COPY INTO command to load data into your target Snowflake table:

COPY INTO my_table
FROM @my_s3_stage
FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = ',' SKIP_HEADER = 1);

This pulls the data from all files in the S3 folder into your table.

Limitations of This Method

  • Not ideal for users unfamiliar with IAM policies and Snowflake configuration.
  • Single uploads through the S3 console are limited to 160 GB. Larger files must be uploaded with the AWS CLI, SDKs, or the Amazon S3 REST API.
  • No support for real-time streaming or Change Data Capture (CDC).
  • The process involves multiple manual steps and is error-prone without automation.

Method 3: Using Custom ETL Scripts

This method involves writing code to move data from SQL Server to Snowflake manually. It gives you flexibility but also requires more time, effort, and technical know-how.

Steps to Use Custom ETL Scripts

1. Install Required Drivers or Libraries

Install the Snowflake ODBC driver or a client library for your chosen programming language (e.g., Python, Java, Node.js).

2. Gather Connection Details

Make sure you have all the necessary info to connect to Snowflake:

  • Account name
  • Username and password
  • Warehouse, database, and schema names

3. Set Up Your Script Environment

Pick a language (like Python or Java). Then:

  • Connect to SQL Server using a driver (e.g., pyodbc or sqlalchemy in Python)
  • Connect to Snowflake using the Snowflake connector

4. Extract the Data

Write a SQL query to select the data you want from SQL Server and run it inside your script.

5. Transform (if needed)

Clean or reformat the data as required before loading. For example, remove nulls, rename columns, or reformat dates.

6. Load Into Snowflake

Use the Snowflake connector to insert or bulk-load the data into your destination table.
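
To make these steps concrete, here is a minimal sketch of what such a script might look like in Python, using pyodbc for extraction and the Snowflake connector for loading. All connection details, the orders table, and the transformation logic are illustrative placeholders, not values from this guide.

# Minimal custom ETL sketch: extract from SQL Server, transform, load into Snowflake.
# All connection details and table/column names below are placeholders.
import pyodbc
import snowflake.connector

# 1. Extract from SQL Server
src = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=my_database;UID=my_user;PWD=my_password"
)
src_cur = src.cursor()
src_cur.execute("SELECT id, customer_name, order_date, amount FROM dbo.orders")
rows = src_cur.fetchall()

# 2. Transform: drop rows with NULL amounts and format dates as ISO strings
cleaned = [
    (r.id, r.customer_name, r.order_date.isoformat(), float(r.amount))
    for r in rows
    if r.amount is not None
]

# 3. Load into Snowflake
snow = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="your_password",
    warehouse="your_wh",
    database="mydb",
    schema="public",
)
snow_cur = snow.cursor()
snow_cur.executemany(
    "INSERT INTO orders (id, customer_name, order_date, amount) VALUES (%s, %s, %s, %s)",
    cleaned,
)

snow_cur.close()
snow.close()
src_cur.close()
src.close()

For larger volumes, you would typically write the cleaned rows to a compressed CSV, PUT it to a Snowflake stage, and run COPY INTO instead of row-by-row inserts, which scales much better.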

Drawbacks of Using Custom Scripts

While this approach offers flexibility, it also comes with a few major limitations:

Slower Setup: Developing the full flow (connectors, queries, transformations, loads) takes longer than using managed tools.

High Maintenance: Writing and managing code takes time, especially when data sources or requirements change.

Limited Scalability: Scripts may struggle with large volumes of data or complex transformation logic.

Security Risks: Hardcoding credentials or poor handling of sensitive data can open security gaps.

Minimal Monitoring: Scripts don’t come with built-in logs or alerts, so you’ll have to build your own.

Method 4: Using Snowpipe

Snowpipe is Snowflake’s continuous data loading service. It automatically loads files from external storage like Amazon S3 into your Snowflake tables with no manual triggering needed.

This method is best suited for near real-time loading scenarios.

Steps to Use Snowpipe

1. Create an External Stage

First, create a stage that points to the S3 bucket and folder where your CSV files are stored.

CREATE STAGE my_stage
  URL = 's3://your-bucket/path/'
  FILE_FORMAT = (TYPE = CSV);

2. Create a Pipe in Snowflake

A pipe defines how data moves from your stage to your table. It also listens for new files.

CREATE OR REPLACE PIPE my_pipe AS
COPY INTO my_table
FROM @my_stage
FILE_FORMAT = (TYPE = CSV);

3. Enable Auto-Ingest (Optional but Recommended)

To automate ingestion, configure S3 to notify Snowflake whenever a new file lands in the bucket, so the pipe loads it without manual triggering.

You’ll need to:

  • Recreate the pipe with AUTO_INGEST = TRUE
  • Set up event notifications on the S3 bucket (either an SNS topic or the SQS queue Snowflake creates for the pipe)
  • Point the notifications at the queue ARN shown in the notification_channel column of SHOW PIPES

More info in Snowflake’s Snowpipe Auto-Ingest Docs

4. Export Data from SQL Server to CSV

Use SQL Server Management Studio or a batch script to export data to CSV format.

5. Schedule the Export and Upload

Use Windows Task Scheduler or cron to regularly:

  • Export fresh data from SQL Server
  • Upload it to the S3 bucket

This completes the automated pipeline — SQL Server → S3 → Snowpipe → Snowflake.
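
The export and upload can live in one small script that Task Scheduler or cron runs on an interval. The sketch below combines the pieces from Method 2; the table, the one-hour export window, the bucket, and the S3 path are assumptions you’d adapt to your own setup.

# Minimal sketch of a scheduled export-and-upload job for Snowpipe.
# Run it from Windows Task Scheduler or cron; Snowpipe picks up each new
# file that lands in the S3 path the pipe's stage points to.
# The table, the one-hour export window, and the bucket/path are assumptions.
import csv
from datetime import datetime, timezone

import boto3
import pyodbc

timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
local_file = f"orders_{timestamp}.csv"

# 1. Export fresh rows from SQL Server (here: the last hour of orders)
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=my_database;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute("SELECT * FROM dbo.orders WHERE order_date >= DATEADD(hour, -1, GETDATE())")
with open(local_file, "w", newline="", encoding="utf-8") as f:
    # No header row, since the pipe's file format above does not skip headers
    csv.writer(f).writerows(cur.fetchall())
cur.close()
conn.close()

# 2. Upload the file to the S3 location the Snowpipe stage watches
boto3.client("s3").upload_file(local_file, "your-bucket", f"path/{local_file}")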

Drawbacks of the Snowpipe Method

While Snowpipe offers a low-latency and automated way to load data into Snowflake, it comes with a few important limitations:

No Built-in Transformations

Snowpipe is a load-only solution. It doesn’t support data transformations during ingestion. For cleansing or reshaping data, you’ll need to use SQL transformations or pair it with an ETL tool.

File Size Limitations

Snowpipe supports files up to ~160 GB. Larger files require you to split them manually or use the Amazon S3 REST API, which adds operational complexity.

Not True Real-Time / CDC

Snowpipe handles micro-batch ingestion well but doesn’t support true real-time Change Data Capture (CDC). It can miss granular changes unless you add your own tracking logic.

Limited Error Handling

If a file fails to load, Snowpipe won’t automatically retry or alert you unless you configure notifications. You’ll need to set options like ON_ERROR = CONTINUE in the pipe’s COPY INTO statement and monitor load results yourself.
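
If you want basic visibility into failed loads, you can poll Snowflake’s load metadata yourself. The sketch below uses the Snowflake Python connector to check the pipe’s status via SYSTEM$PIPE_STATUS and recent COPY_HISTORY for the target table; connection details and object names are placeholders.

# Minimal monitoring sketch: check pipe status and recent load errors.
# Connection parameters and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
    warehouse="your_wh", database="mydb", schema="public",
)
cur = conn.cursor()

# Current execution state of the pipe (running, paused, pending files, etc.)
cur.execute("SELECT SYSTEM$PIPE_STATUS('my_pipe')")
print(cur.fetchone()[0])

# Files loaded into my_table over the last hour, with any error details
cur.execute("""
    SELECT file_name, status, row_count, error_count, first_error_message
    FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
        TABLE_NAME => 'MY_TABLE',
        START_TIME => DATEADD(hour, -1, CURRENT_TIMESTAMP())))
""")
for file_name, status, row_count, error_count, first_error in cur.fetchall():
    if error_count:
        print(f"{file_name}: {status}, {error_count} error(s): {first_error}")

cur.close()
conn.close()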

What is MS SQL Server?

Microsoft SQL Server (MS SQL Server) is a relational database management system (RDBMS) developed by Microsoft. It’s used to store and retrieve data as requested by other software applications, which may run either on the same computer or on another computer across a network. 

MS SQL Server is designed to handle a wide range of data management tasks and supports various transaction processing, business intelligence, and analytics applications.

Key Features of SQL Server:

  • Scalability: Supports very large databases and many concurrent users.
  • High Availability: Always On availability groups and failover clustering.
  • Security: Strong encryption, auditing, and row-level security.
  • Performance: High-speed in-memory OLTP and columnstore indexes.
  • Integration: Works well with other Microsoft services and third-party tools.
  • Data Tools: In-depth tools for ETL, reporting, and data analysis.
  • Cloud Integration: Integrates easily with Azure services.
  • Management: SQL Server Management Studio (SSMS) for database administration.
  • Backup and Recovery: Automated backups and point-in-time restore.
  • T-SQL: Robust Transact-SQL for complex queries and stored procedures.

What is Snowflake?

Snowflake is a cloud-based data warehousing platform that is designed to handle large-scale data storage, processing, and analytics. It stands out due to its architecture, which separates compute, storage, and services, offering flexibility, scalability, and performance improvements over traditional data warehouses.

Key Features of Snowflake:

Data Integration: Supports diverse data formats and ETL tools.

Scalability: Seamless scaling of storage and compute independently.

Performance: Fast query performance with automatic optimization.

Data Sharing: Secure and easy data sharing across organizations.

Multi-Cloud: Operates on AWS, Azure, and Google Cloud.

Security: Comprehensive security features including encryption and role-based access.

Zero Maintenance: Fully managed with automatic updates and maintenance.

Why migrate data from MS SQL Server to Snowflake?

Elastic Scalability: Unlike SQL Server, which often requires manual provisioning, Snowflake offers on-demand scaling. You can adjust compute power based on workload and only pay for what you use, reducing both effort and cost.

Lower Operational Overhead: Snowflake is fully managed. It eliminates the need for maintaining on-premises infrastructure, freeing your IT team to focus on higher-value data tasks.

High Performance & Concurrency: Snowflake is built for fast, parallel queries and supports many users at once, making it ideal for large-scale analytics and reporting workloads.


Conclusion

This article walked you through how to migrate data from SQL Server to Snowflake, covering four step-by-step methods to help you easily connect the two platforms.

It also highlighted the pros and cons of each approach. While the manual methods (SnowSQL, custom scripts, and Snowpipe) can get the job done, they come with limitations around scalability, automation, and real-time sync.

If you’re looking for a faster, more straightforward way to migrate, Hevo Data offers a no-code solution with real-time syncing and zero data loss. Try Hevo Data for free and make your SQL Server to Snowflake migration effortless.

    FAQ on SQL Server to Snowflake

    Can you connect SQL Server to Snowflake?

    Connecting SQL Server to Snowflake is a straightforward process. You can do this using ODBC drivers or through automated platforms like Hevo, which make the task more manageable.

    How to migrate data from SQL to Snowflake?

    You can migrate your data from SQL Server to Snowflake using any of the following methods:
    Method 1: Using Hevo Data to connect Microsoft SQL Server to Snowflake
    Method 2: Using SnowSQL to connect SQL Server to Snowflake
    Method 3: Using Custom ETL Scripts to connect SQL Server to Snowflake
    Method 4: Using Snowpipe to connect SQL Server to Snowflake

    Why move from SQL Server to Snowflake?

    Moving from SQL Server to Snowflake makes sense because Snowflake provides:
    1. Enhanced scalability and elasticity.
    2. Reduced operational burden.
    3. High concurrency and fast query performance.

    Can SQL be used for Snowflake?

    Yes. Snowflake uses its own ANSI SQL-compliant dialect, commonly referred to as Snowflake SQL.

    Talha
    Software Developer, Hevo Data

    Talha is a Software Developer with over eight years of experience in the field. He is currently driving advancements in data integration at Hevo Data, where he has been instrumental in shaping a cutting-edge data integration platform for the past four years. Prior to this, he spent 4 years at Flipkart, where he played a key role in projects related to their data integration capabilities. Talha loves to explain complex information related to data engineering to his peers through writing. He has written many blogs related to data integration, data management aspects, and key challenges data practitioners face.