- Teams migrate SQL Server to Snowflake to gain elastic scalability, faster analytics, lower infrastructure overhead, and better support for modern BI and AI workloads.
- You can migrate data using four methods:
- Fully automated ETL platforms like Hevo.
- Manual exports.
- Snowflake-native connectors.
- Linked server with ODBC.
- Manual and native options are flexible and suitable if you have a dedicated engineering team to handle setup, monitoring, and ongoing maintenance.
- The right choice depends on data volume, update frequency, schema volatility, and internal engineering bandwidth.
- If you don’t want to spend days configuring an integration, Hevo offers a simple, no-code solution with minimal operational risk.
Migrating data from SQL Server to Snowflake is rarely straightforward. If you’re working with legacy schemas, growing tables, or systems that were never designed for analytics, there is a constant risk of broken pipelines, data gaps, and extended downtime.
Snowflake makes the move worthwhile. Its cloud-native architecture helps you scale workloads independently and support analytics without burdening operational systems. This reduces infrastructure management and improves query performance.
This guide walks you through four proven ways to migrate data from SQL Server to Snowflake. Whether you want to rely on a no-code platform like Hevo or prefer a hands-on method, you’ll find an approach that matches your team’s requirements.
Table of Contents
Comparison Table
| Criteria | Hevo Data: automated ETL | Manual export | Snowflake-native OpenFlow | Linked server with ODBC |
| --- | --- | --- | --- | --- |
| Best for | No-code, reliable pipelines with transparent pricing | Batch or scheduled loads | Native Snowflake ingestion | Cross-platform ad-hoc queries |
| Ease of use | Easy | Difficult | Moderate | Difficult |
| Setup time | Within minutes | High | Moderate | High |
| Real-time sync | ✅ | ❌ | ✅ | ❌ |
| Automation | Fully automated | Manual | Fully automated | Query only |
| Maintenance needs | Minimal | High | Medium | High |
| Scalability | Auto-scaling capabilities | Requires manual configuration | Scales with Snowflake | Not for bulk data |
| Schema change handling | Auto-detects and adapts | Manual updates required | Tracks changes through journal tables | Manual reconfiguration needed |
What are the Methods to Connect SQL Server to Snowflake?
There are four methods to transfer data from Microsoft SQL Server to Snowflake:
- Method 1: Using an automated ETL/ELT tool like Hevo Data
- Method 2: Manual export using Snowflake’s COPY INTO or Snowpipe
- Method 3: Using a Snowflake-native connector like OpenFlow
- Method 4: Connect the linked server with ODBC
Method 1: Automated ETL/ELT tools like Hevo
Hevo Data provides a no-code, automated solution to move SQL Server data to Snowflake in real-time. Follow these steps to set up your pipeline.
Prerequisites and architectural considerations
- A running SQL Server instance, version 12 or higher.
- A database user with SELECT privileges. For Change Tracking or Table mode, the VIEW CHANGE TRACKING and ALTER DATABASE permissions are also required (see the sketch after this list).
- An active Snowflake account, with the ACCOUNTADMIN or SYSADMIN role if you need to create a warehouse.
- A Hevo user assigned the Team Collaborator role or any administrator role except Billing Administrator, to create the Destination.
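If the database user for Hevo still needs these privileges, here is a minimal T-SQL sketch of the grants. The user, database, and schema names are placeholders, and the exact requirements depend on your Pipeline mode, so treat this as a starting point rather than the definitive setup.

USE <database>;
-- Read access for the schema Hevo will replicate
GRANT SELECT ON SCHEMA::<schema> TO <hevo_user>;
-- Needed for Change Tracking / Table mode
GRANT VIEW CHANGE TRACKING ON SCHEMA::<schema> TO <hevo_user>;
-- ALTER DATABASE permission is also required for Change Tracking mode; grant it per your security policy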
Step 1: Configure SQL Server as a source
- Go to PIPELINES in the Navigation Bar.
- Click + Create Pipelines in the Pipelines List View.
- On the Select Source Type page, choose SQL Server.
- Select your Destination type, then choose the Pipeline Mode and Pipeline Type.
- On the Configure your SQL Server Source page, enter the following details:
- Pipeline name.
- Database Host, Port, User, Password, and Name.
- Schema Name.
- Connecting through SSH is optional but recommended for added security.
- Click Test Connection, then Test & Continue to save.
You have successfully created SQL Server as a source. Now, let’s set up your Snowflake destination.
Step 2: Configure Snowflake as a destination
- Go to DESTINATIONS in the Navigation Bar.
- Click + Create Standard Destination in the Destination List View.
- Select Snowflake as the Destination type.
- Provide the following connection details on the Configure your Snowflake Destination page:
- Destination Name.
- Choose an authentication method from Key Pair Authentication or Access Credentials.
- Account URL, Database User, Warehouse, Database Name, Database Schema.
- If using the Key Pair, provide Private Key and Passphrase.
- Enable Populate Loaded Timestamp or Create Transient Tables as needed.
- Click Test Connection, then Save & Continue.
Once configured, your SQL Server data will start flowing into Snowflake automatically.
You can modify the Snowflake Destination later through the Settings menu, which will apply changes to all connected Pipelines. Find out the steps here.
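If the warehouse, database, or role you plan to point Hevo at doesn’t exist yet, you can create them up front in a Snowflake worksheet. Below is a minimal sketch using hypothetical names (HEVO_WH, HEVO_DB, HEVO_ROLE) and a placeholder for the database user you enter in the Destination form:

-- Requires ACCOUNTADMIN or SYSADMIN, as noted in the prerequisites
CREATE WAREHOUSE IF NOT EXISTS HEVO_WH WITH WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60;
CREATE DATABASE IF NOT EXISTS HEVO_DB;
CREATE ROLE IF NOT EXISTS HEVO_ROLE;
GRANT USAGE ON WAREHOUSE HEVO_WH TO ROLE HEVO_ROLE;
GRANT USAGE, CREATE SCHEMA ON DATABASE HEVO_DB TO ROLE HEVO_ROLE;
GRANT ROLE HEVO_ROLE TO USER <database_user>;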
This is how simple it can be to migrate SQL Server to Snowflake.
Don’t take our word for it. Try Hevo for free today!
Method 2: Manual export with Snowflake’s COPY INTO command or Snowpipe
This method uses Amazon S3 as an intermediate staging layer to move exported SQL Server files into Snowflake using COPY INTO or Snowpipe.
Prerequisites and architectural considerations
- A private Amazon S3 bucket to host your data files.
- A Snowflake storage integration that allows secure, credential-less access to your S3 bucket through an AWS IAM role.
- SQL Server Management Studio (SSMS) to generate a CSV file.
Step 1: Export your SQL Server data to a CSV file
Use SQL Server Management Studio (SSMS) to export your table to a CSV file.
- Install and launch SQL Server Management Studio.
- Connect to your SQL Server instance.
- In Object Explorer, right-click your target database → Tasks → Export Data.
- The SQL Server Import and Export Wizard will open.
- Choose SQL Server Native Client as the source and select your database.
- Choose Windows or SQL Server Authentication based on your environment.
- Click Next, then choose Flat File Destination as the output.
- Browse to select the destination CSV file.
- Click Next, then choose tables or views to export.
- Configure column mapping if needed.
- Click Finish to export and verify the success message.
Ensure column names, data types, and date formats are consistent before export to avoid load errors later.
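One quick way to review column names and types before exporting is to query SQL Server’s INFORMATION_SCHEMA. The schema and table names below are placeholders:

-- List columns, data types, and nullability for the table you plan to export
SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'my_table'
ORDER BY ORDINAL_POSITION;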
Step 2: Upload files to an Amazon S3 bucket
Snowflake loads data from cloud storage, so your exported files must be staged in Amazon S3.
For this, you must first create an S3 bucket.
- Create a New S3 Bucket and Folder
- Go to the AWS S3 Console.
- Click Create Bucket, enter a unique name, and select your preferred region.
- After the bucket is created, click its name to open it.
- Click Actions → Create Folder and give it a name (e.g., sql-exports/).
Now, you can upload the exported file.
- Navigate into the folder you just created.
- Click Upload → select Add Files.
- In the dialog box that appears, choose the CSV file you exported from SQL Server.
- Click Open, and then Start Upload to complete the process.
Once the file is uploaded, it’s ready to be accessed from Snowflake.
Note: To allow Snowflake to read files from S3, you must create an AWS IAM Role and a Snowflake Storage Integration. This role grants Snowflake permission to access your S3 bucket securely without embedding credentials. Learn how to do this here.
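For reference, here is a minimal sketch of what that storage integration can look like. The IAM role ARN and bucket path are placeholders you would replace with your own values:

CREATE STORAGE INTEGRATION my_s3_integration
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::<account_id>:role/<snowflake_s3_role>'
  STORAGE_ALLOWED_LOCATIONS = ('s3://your-bucket/sql-exports/');

-- Returns the IAM user ARN and external ID to add to your IAM role's trust policy
DESC INTEGRATION my_s3_integration;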
Step 3: Load data into Snowflake using COPY INTO for batch ingestion
In Snowflake, you first need to create an external stage pointing to S3, then load data using COPY INTO.
Use the integration and file path to create a stage:
USE SCHEMA mydb.public;
CREATE STAGE my_s3_stage
  STORAGE_INTEGRATION = my_s3_integration
  URL = 's3://your-bucket/path/'
  FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = ',' SKIP_HEADER = 1);
Once the stage is ready, run the COPY INTO command to batch load data into your target Snowflake table:
COPY INTO my_table
FROM @my_s3_stage
FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = ',' SKIP_HEADER = 1);
This pulls the data from all files in the S3 folder into your Snowflake table. You can then validate the data to ensure a successful migration.
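A couple of quick validation checks, assuming the table name from the example above:

-- Row count in the target table
SELECT COUNT(*) FROM my_table;

-- Load history for the last hour, including any parsing errors
SELECT *
FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
  TABLE_NAME => 'MY_TABLE',
  START_TIME => DATEADD(hour, -1, CURRENT_TIMESTAMP())
));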
Step 4: Use Snowpipe for near-real-time ingestion
The COPY INTO method is designed to batch load your data. If you want to configure near-real-time loading, here’s how to use Snowpipe to make it happen.
Once you have created an external stage in Snowflake, create a pipe using the CREATE PIPE command. This tells Snowpipe how to load files from the ingestion queue into the target table using a COPY INTO statement.
Set AUTO_INGEST = TRUE so Snowpipe automatically detects new files via S3 event notifications and loads them as they arrive.
CREATE PIPE snowpipe_db.public.mypipe AUTO_INGEST = TRUE AS
COPY INTO snowpipe_db.public.mytable
FROM @snowpipe_db.public.mystage
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);
Note: For AUTO_INGEST to work, you must configure S3 Event Notifications to send object creation events to Snowpipe’s SQS queue. Run SHOW PIPES for the notification_channel (SQS ARN), then configure your S3 bucket to send events to that queue.
This sets up your pipeline for continuous loading rather than a one-time data ingestion.
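To confirm the pipe is wired up correctly, you can check its notification channel and status. This assumes the pipe name from the example above:

-- The notification_channel column holds the SQS ARN for S3 event notifications
SHOW PIPES;

-- Reports the pipe's execution state and any pending files
SELECT SYSTEM$PIPE_STATUS('snowpipe_db.public.mypipe');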
Limitations of this method:
- Not ideal for users unfamiliar with IAM policies and Snowflake configuration.
- Snowpipe reduces manual execution but does not eliminate export or schema management.
- Monitoring, retries, and error handling require custom implementation.
- Becomes difficult to maintain as data volume and table count grow.
Method 3: Using a Snowflake-native connector like OpenFlow
Snowflake also offers native connectors, like OpenFlow, that help you ingest data directly from SQL Server into Snowflake without third-party ETL tools. It handles incremental replication using Change Tracking at the source and supports limited schema evolution.
Prerequisites and architectural considerations
- Change Tracking is enabled on the SQL Server databases and tables you plan to replicate.
- Standard or BYOC Snowflake deployment with domains and connectivity configured.
- Administrative access to both SQL Server and Snowflake for user and role configuration.
Step 1: Prepare SQL Server
First, enable Change Tracking on each database and table using this code. It ensures OpenFlow can detect incremental changes.
ALTER DATABASE <database> SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
ALTER TABLE <schema>.<table> ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);
Now, create a login and user, then grant the SELECT and VIEW CHANGE TRACKING permissions to a new SQL Server login.
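If the login and user don’t exist yet, here is a minimal T-SQL sketch that uses the same placeholder naming as the grants that follow; the password is also a placeholder:

-- Server-level login
CREATE LOGIN <user_name> WITH PASSWORD = '<strong_password>';
-- Database user mapped to that login, created in the source database
USE <database>;
CREATE USER <user_name> FOR LOGIN <user_name>;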
GRANT SELECT ON <database>.<schema>.<table> TO <user_name>;
GRANT VIEW CHANGE TRACKING ON <database>.<schema>.<table> TO <user_name>;
Step 2: Set up your Snowflake environment
Perform the following steps as a Snowflake administrator.
Create a destination database in Snowflake for the replicated data. Then, create a service user and a role that has usage and object creation privileges.
CREATE ROLE <openflow_role>;
GRANT ROLE <openflow_role> TO USER <openflow_user>;
GRANT USAGE ON DATABASE <destination_database> TO ROLE <openflow_role>;
GRANT CREATE SCHEMA ON DATABASE <destination_database> TO ROLE <openflow_role>;
Lastly, if you’re using key-pair authentication, assign the public key to the service user (the private key stays with OpenFlow).
ALTER USER <openflow_user> SET RSA_PUBLIC_KEY = '<public_key>';
This provides OpenFlow with the necessary access to write and manage replicated objects.
Step 3: Install and configure the OpenFlow connector
- Go to the OpenFlow overview page and select View more connectors.
- Choose Add to runtime for the SQL Server connector.
- Select your OpenFlow runtime and authenticate with Snowflake credentials.
- Once installed, the connector appears in the OpenFlow canvas.
- Right-click the connector process group, select Parameters, and add the following details:
- SQL Server JDBC URL, username, and password.
- Snowflake destination database, authentication strategy, warehouse, role, and user.
- Any ingestion parameters, including tables to include in replication.
- After saving parameters, enable and start the connector to begin replication.
OpenFlow only begins replicating data into Snowflake once you explicitly add tables to its ingestion configuration.
Limitations of this method:
- Tables must have a primary key or a supported unique key for incremental replication.
- Certain schema changes, like changing primary keys, are not supported natively.
- The connector does not support truncated tables or re-adding dropped columns in all cases.
Method 4: Linked server with ODBC driver
This method helps you connect SQL Server to Snowflake using an ODBC driver and SQL Server’s Linked Server feature. It enables querying Snowflake data directly from SQL Server for ad-hoc access and small-scale operations.
Important: This method is designed for querying Snowflake FROM SQL Server, not for bulk data migration INTO Snowflake. Performance is significantly slower than native methods and is not recommended for loading large data volumes.
Prerequisites and architectural considerations
- Snowflake ODBC driver (64-bit) installed on the SQL Server machine
- SQL Server instance with sysadmin privileges
- Snowflake account credentials (username, password, account identifier)
- Network connectivity between SQL Server and Snowflake
Step 1: Install the Snowflake ODBC Driver
- Download the Snowflake ODBC Driver (64-bit) from the Snowflake Developer Downloads page.
- Run the installer and follow the setup wizard.
- Verify installation:
- Open ODBC Data Sources (64-bit) from the Windows Start menu.
- Navigate to the Drivers tab.
- Confirm SnowflakeDSIIDriver appears in the list.
Step 2: Create a system DSN for Snowflake
- Open ODBC Data Sources (64-bit).
- Select the System DSN tab.
- Click Add.
- Select SnowflakeDSIIDriver from the list.
- In the Snowflake Configuration dialog, enter:
- Data Source: Name for this DSN
- User: Your Snowflake username
- Server: Your full account URL
- Database
- Schema
- Warehouse
- Authenticator: Leave as snowflake (default) for username and password authentication
- Click OK to save the DSN. You don’t need to enter the password here.
Step 3: Configure the MSDASQL provider in SQL Server
- Open SQL Server Management Studio (SSMS).
- Connect to your SQL Server instance.
- In Object Explorer, navigate to Server Objects → Providers.
- Right-click MSDASQL and select Properties.
- In the Provider Options dialog, enable:
- Allow in-process.
- Dynamic parameter.
- Click OK.
Step 4: Create the Linked Server
- In SSMS, go to Object Explorer, then Server Objects.
- Right-click Linked Servers and select New Linked Server.
- On the General page, fill in these fields:
- Linked server: Enter a name.
- Server type: Select Other data source.
- Provider: Microsoft OLE DB Provider for ODBC Drivers.
- Provider string: DSN=<your_system_dsn_name>
- Data source: Leave blank.
- On the Security tab:
- Select ‘Be made using this security context’.
- Remote login: Your Snowflake username.
- Password: Your Snowflake password.
- Click OK to create the linked server.
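If you’d rather script this step than use the SSMS dialog, the same linked server can be created with SQL Server’s system stored procedures. The sketch below assumes the linked server name SNOWFLAKE_LINK used in the test query that follows, plus placeholder DSN and credentials:

-- Register the linked server against the ODBC DSN via the MSDASQL provider
EXEC master.dbo.sp_addlinkedserver
    @server = N'SNOWFLAKE_LINK',
    @srvproduct = N'',
    @provider = N'MSDASQL',
    @provstr = N'DSN=<your_system_dsn_name>';

-- Map local logins to your Snowflake credentials
EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'SNOWFLAKE_LINK',
    @useself = N'False',
    @locallogin = NULL,
    @rmtuser = N'<snowflake_username>',
    @rmtpassword = N'<snowflake_password>';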
Step 5: Test the linked server connection
Run a simple test query to verify connectivity.
SELECT * FROM OPENQUERY(SNOWFLAKE_LINK, 'SELECT * FROM MY_DATABASE.PUBLIC.MY_TABLE LIMIT 10');
If successful, you should see results from Snowflake.
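For small result sets, you can also materialize the query output into a local SQL Server table; dbo.snowflake_sample is a hypothetical name:

-- Copies a small sample from Snowflake into a new local table
SELECT *
INTO dbo.snowflake_sample
FROM OPENQUERY(SNOWFLAKE_LINK, 'SELECT * FROM MY_DATABASE.PUBLIC.MY_TABLE LIMIT 100');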
Limitations of this method:
- Queries can be significantly slower than native Snowflake methods due to network overhead and row-by-row data transfer.
- Schema changes require manual updates to the linked server configuration.
- Cannot track incremental changes.
- Consumes SQL Server memory and Snowflake compute credits during queries.
What is MS SQL Server?
Microsoft SQL Server is a relational database management system (RDBMS) used to store and manage operational data for applications, reporting, and internal processes. It uses Transact-SQL (T-SQL) for data manipulation and defines how applications interact with stored information.
Modern versions support hybrid deployments, which allow databases to run across on-premises infrastructure and cloud platforms such as Microsoft Azure. As data volumes increase, SQL Server typically serves as a source system for analytics platforms rather than the primary engine for large-scale analytical workloads.
What is Snowflake?
Snowflake is a cloud-native data platform built for large-scale analytics, reporting, and data sharing. It uses a multi-cluster architecture that separates storage from compute, allowing you to scale resources independently without disrupting active workloads.
Snowflake runs fully managed on public cloud infrastructure such as Amazon Web Services, Microsoft Azure, and Google Cloud. You can store and analyze structured and semi-structured data in a single environment, which makes it well-suited for complex analytics and machine learning use cases.
Why Move Data from SQL Server to Snowflake?
Organizations often migrate to Snowflake to turn operational SQL Server data into a dynamic analytical asset. Here’s why this migration is beneficial.
- Centralized analytics and data warehousing: SQL Server handles transactions well, but analytics often become fragmented across replicas or exports. Snowflake provides a centralized warehouse purpose-built for analytics, reporting, and downstream consumption.
- Scalability and performance: SQL Server often requires upfront sizing and manual scaling as workloads grow. Snowflake enables you to scale analytics workloads without impacting source systems or other users.
- Combining SQL Server data with other sources: Modern analytics rarely rely on a single system. Snowflake makes it easier to connect SQL Server data to SaaS tools, event streams, and semi-structured sources, such as JSON or Parquet.
- Cloud cost efficiency for analytical workloads: Unlike SQL Server, Snowflake uses predictable usage-based pricing and workload isolation to reduce costs for large reporting, exploration, and batch analytics workloads.
- Support for modern BI and ML tools: Snowflake integrates with modern BI platforms and machine learning workflows. This helps you run advanced analytics, dashboards, and models directly on fresh data without complex data movement.
How to Choose the Right Method
Data volume and frequency
High-volume environments with millions of rows require automated streaming or high-throughput batch loads. Manual exports are sufficient for small datasets or one-time migrations, but become labor-intensive as data grows.
Assess whether your data requires incremental updates or periodic full loads and choose a method accordingly.
Need for real-time vs batch
Operational dashboards and AI models often require real-time data synchronization. If you need up-to-the-minute insights, choose an automated ETL tool that supports Change Data Capture (CDC), or configure CDC-based replication manually.
Batch processing is more appropriate for end-of-day financial reporting, where a slight delay in data availability does not impact decision-making.
Engineering resources and expertise
Custom-built pipelines and Snowflake-native migrations require deep engineering knowledge of T-SQL and cloud architecture. If your team lacks these specialized skills, a no-code automated tool provides the fastest time-to-value.
Managed services remove the need to configure storage integrations, drivers, and retry logic manually.
Monitoring and observability requirements
Production-grade pipelines require clear visibility into data health. If you go with manual scripts and ODBC connections, you are responsible for configuring custom monitoring solutions to detect data gaps or pipeline failures.
However, automated platforms offer these features through built-in dashboards and real-time alerts to track every record.
Budget and long-term maintenance
Manual methods have low licensing fees but high long-term maintenance costs in developer hours. Ensure that you evaluate the total cost of ownership beyond the initial setup.
Snowflake-native tools and automated platforms involve upfront costs but reduce the future burden on your IT staff by automating schema changes and performance tuning.
Additional Resources on SQL Server to Snowflake Migration
- Explore more about Loading Data to Snowflake
- Bucket Data in SQL
- Java Connect to SQL Server
- MS Access to SQL Server
Find the Right Migration Path
This article walked you through how to migrate data from SQL Server to Snowflake, covering four step-by-step methods to help you easily connect the two platforms.
Each method comes with clear trade-offs across scalability, complexity, maintenance effort, and reliability. While manual and semi-automated approaches offer control, they also increase operational overhead through scripting, monitoring, and error handling.
Automation makes a meaningful difference here. A fully managed platform like Hevo eliminates brittle pipelines, handles schema changes automatically, and provides transparent pricing and monitoring with real-time syncs.
If you’re looking for a faster and more straightforward way to migrate, Hevo offers a simple, reliable, and transparent solution.
Want to consult with an expert? Book a free demo today!
FAQs
Q1. What is the best way to connect SQL Server to Snowflake?
One of the most practical ways to connect SQL Server to Snowflake is by using a no-code ETL tool, like Hevo Data. Hevo automates data extraction, transformation, and loading in real-time without a single line of code. It’s ideal for teams that want a reliable and scalable migration setup.
Q2. Can I manually migrate data from SQL Server to Snowflake?
Yes. You can export data from SQL Server, upload it to cloud storage, and then load it into Snowflake using COPY INTO or Snowpipe. Tools like SnowSQL are commonly used to run these commands, but the process requires manual setup and ongoing maintenance.
Q3. How long does it take to move data from SQL Server to Snowflake?
Migration time depends on data volume, network speed, and transfer method. With ETL tools like Hevo Data, most users can set up their SQL Server to Snowflake pipeline in under 15 minutes, with continuous real-time sync afterward.
Q4. Can I automate real-time data transfer from SQL Server to Snowflake?
Yes. Platforms like Hevo support real-time data replication using Change Data Capture (CDC). Every update in SQL Server is reflected in Snowflake within seconds, so your analytics and dashboards always run on the latest data.
Q5. What are the limitations of manual SQL Server to Snowflake migration?
Manual methods involve multiple steps: exporting CSVs, managing IAM roles, and running COPY commands. These approaches lack real-time sync, error recovery, and schema drift management, making them difficult to scale or maintain for production environments.