This guide shows three approaches to migrating data from Snowflake to Redshift:
- Method 1: Manual S3 staging
Use Snowflake’s COPY INTO to export data to S3, then Redshift’s COPY to load it. This leverages the native parallel processing of both tools.
- Method 2: AWS Glue automation
Deploy a CloudFormation template that provisions infrastructure and orchestrates the migration workflow automatically.
- Method 3: No-code ETL or ELT platform like Hevo
Sync data from 150+ sources into Redshift or Snowflake in minutes, with automatic schema mapping, scalability, and enterprise-grade reliability.
Businesses today are rapidly shifting to cloud-based data warehouses because they offer effortless scalability, flexible pricing, and the ability to handle peak workloads without manual intervention. Cloud data storage and warehousing solutions also simplify data availability and security through built-in replication, backups, and disaster recovery.
Among the most popular choices are Amazon Redshift and Snowflake. Redshift is highly valued for its deep integration with the AWS ecosystem and its cost-effectiveness for steady, long-term workloads.
Snowflake is recognized for its architecture, which enables independent scaling of compute and storage, as well as automatic maintenance. This helps teams optimize costs more granularly for variable workloads.
Because of these differences, many teams eventually decide to migrate between the two. This article provides an in-depth look at a smooth Snowflake to Redshift migration.
What Is Snowflake?
Snowflake is a cloud-based data warehouse platform that separates compute and storage for independent scaling of each component. It is built from the ground up for the cloud and runs on major cloud providers, such as AWS, Azure, and Google Cloud Platform.
Snowflake’s architecture enables multiple users and workloads to access the same data simultaneously while maintaining performance. This makes it particularly suitable for organizations with diverse data analysis needs and varying workload patterns.
Key features
- Zero-copy cloning: Offers instant creation of complete table, schema, or database copies without duplicating storage, which allows teams to test or develop ETL workflows safely.
- Semi-structured data support: Provides native ingestion and querying of formats, such as JSON, Parquet, Avro, and ORC.
- Data sharing: Enables immediate and governed sharing of live datasets across Snowflake accounts, even across regions or clouds, through secure direct access.
- Time travel: Gives access to historical versions of data for a defined period, helping you recover from integration errors or accidental overwrites with minimal effort.
- Automatic optimization: Applies performance enhancements, such as micro-partition maintenance and clustering updates behind the scenes, to reduce manual management during ingestion.
What Is Amazon Redshift?
Amazon Redshift is a fully managed petabyte-scale data warehouse service in the AWS cloud, which enables you to analyze large datasets using standard SQL and business intelligence tools. It delivers fast query performance through columnar storage and parallel query execution.
Redshift integrates with the broader AWS ecosystem, including services like Amazon S3, Amazon Athena, and AWS Lake Formation, which makes it a natural choice for organizations already invested in AWS infrastructure.
Key features
- Serverless option: Delivers automatic provisioning and scaling so you can run analytics instantly and avoid cluster setup or manual maintenance.
- AQUA acceleration: Boosts query performance through a hardware-accelerated caching layer that moves computation closer to the data.
- Redshift ML: Lets you build, train, and deploy machine learning models with simple SQL statements while keeping all data in Redshift.
- Concurrency scaling: Adds temporary processing capacity during peak workloads to keep dashboards and analytical queries responsive.
- Cross-region sharing: Grants direct access to live Redshift data across AWS regions without copying or recreating datasets.
Why Migrate From Snowflake to AWS Redshift?
While both platforms offer powerful capabilities, Redshift provides distinct advantages for specific use cases. Here are some key factors that motivate people to migrate from Snowflake to Amazon Redshift.
Cost optimization and predictability
Amazon Redshift delivers better cost control for stable and high-volume workloads through Reserved Instance (RI) commitments. These multi-year pricing plans offer significant discounts and provide finance teams with a predictable budgeting model.
While Snowflake’s consumption-based pricing offers flexibility for variable workloads, it can lead to unpredictable spend when compute clusters aren’t carefully monitored or when running long-duration queries. For organizations with large, predictable datasets and consistent usage patterns, Redshift’s pricing structure often proves more economical.
Deep AWS ecosystem integration
For AWS-native organizations, Redshift’s connectivity eliminates architectural complexity. You can query data stored in Amazon S3 using Redshift Spectrum without moving it. Its native integration with analytical tools like AWS Glue and AWS Lake Formation eliminates the security, latency, and egress costs associated with external connectors or multi-cloud architectures.
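For instance, once your S3 data is registered in the AWS Glue Data Catalog, an external schema lets you query it in place. The schema, database, and table names below are illustrative, a minimal sketch rather than a prescribed setup:
CREATE EXTERNAL SCHEMA spectrum_ext
FROM DATA CATALOG
DATABASE 'sales_db'
IAM_ROLE DEFAULT;

-- Query S3-backed data alongside local Redshift tables
SELECT event_date, COUNT(*)
FROM spectrum_ext.clickstream
GROUP BY event_date;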
Advanced analytics and ML capabilities
Redshift offers an integrated platform for data science and advanced analytics. You build, train, and deploy machine learning models directly using Redshift ML and standard SQL. This keeps data secure in the warehouse and removes the need for separate infrastructure. Its tight integration with Amazon SageMaker accelerates the data science lifecycle, eliminating engineering overhead and reducing the time between data preparation and model deployment.
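As a sketch of what this looks like in practice, training and invoking a model is plain SQL. The table, column, function, and bucket names below are hypothetical:
-- Train a model from query results; artifacts are staged in the named S3 bucket
CREATE MODEL customer_churn
FROM (SELECT age, tenure_months, monthly_charges, churned FROM customer_history)
TARGET churned
FUNCTION predict_churn
IAM_ROLE DEFAULT
SETTINGS (S3_BUCKET 'your-ml-artifacts-bucket');

-- Once trained, invoke the prediction function like any SQL function
SELECT customer_id, predict_churn(age, tenure_months, monthly_charges) AS churn_risk
FROM customer_history;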
How to Migrate Data from Snowflake to Redshift: Step-by-Step
You can use three methods to migrate data from Snowflake to Redshift.
- Method 1: Manual S3 staging method with COPY INTO and COPY.
- Method 2: Automated migration using AWS Glue and AWS Schema Conversion Tool (AWS SCT).
- Method 3: No-code ETL or ELT tool like Hevo.
Method 1: Manual S3 staging method
This method uses Snowflake’s native COPY INTO command to export data to Amazon S3, followed by Redshift’s COPY command to load it. It uses native parallel processing for fast, large-scale migrations.
Prerequisites for migrating data from Snowflake to Redshift
- An Amazon S3 staging bucket to temporarily hold data during migration. For example, s3://snowflake-redshift-migration/.
- Redshift cluster with an attached IAM role for S3 access.
- Snowflake schema converted to Redshift-compatible format.
Step 1: Extract or unload data from Snowflake to S3
Configure an external stage that points to your S3 bucket using a storage integration with write access.
CREATE STAGE my_s3_stage
STORAGE_INTEGRATION = s3_int
URL = 's3://your-bucket-name/'
FILE_FORMAT = csv_format;
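The stage above references a storage integration (s3_int) and a named file format (csv_format). If you have not created them yet, minimal definitions might look like the following sketch; the AWS account ID and role name are placeholders:
-- Placeholder account ID and role name; creating an integration requires the CREATE INTEGRATION privilege.
-- After creating it, run DESC INTEGRATION s3_int and add the returned STORAGE_AWS_IAM_USER_ARN
-- and STORAGE_AWS_EXTERNAL_ID to the AWS role's trust policy.
CREATE STORAGE INTEGRATION s3_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake-unload-role'
  STORAGE_ALLOWED_LOCATIONS = ('s3://your-bucket-name/');

-- Matches the unload options used later: pipe-delimited, gzip-compressed CSV
CREATE OR REPLACE FILE FORMAT csv_format
  TYPE = CSV
  FIELD_DELIMITER = '|'
  COMPRESSION = GZIP;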
If you’re migrating many tables, the following stored procedure generates the COPY INTO commands automatically. Note that the stage_name argument must be the stage name (for example, my_s3_stage), not the S3 URL. For a small number of tables, you can simply run COPY INTO manually.
CREATE OR REPLACE PROCEDURE generate_copy(db_name VARCHAR, schema_name VARCHAR, stage_name VARCHAR)
RETURNS VARCHAR NOT NULL
LANGUAGE JAVASCRIPT
AS
$$
  // Snowflake exposes unquoted procedure arguments to JavaScript in uppercase,
  // so they are referenced here as DB_NAME, SCHEMA_NAME, and STAGE_NAME.
  var return_value = "";

  // List every table in the requested schema
  var sql_query = "select table_catalog, table_schema, lower(table_name) as table_name from "
    + DB_NAME + ".information_schema.tables where table_schema = '" + SCHEMA_NAME + "'";
  var sql_statement = snowflake.createStatement({ sqlText: sql_query });

  /* Creates result set */
  var result_scan = sql_statement.execute();

  // Build one COPY INTO command per table, unloading to <stage>/<table_name>/
  while (result_scan.next()) {
    return_value += "\n";
    return_value += "COPY INTO @" + STAGE_NAME + "/" + result_scan.getColumnValue(3) + "/";
    return_value += "\n";
    return_value += "FROM " + result_scan.getColumnValue(1) + "." + result_scan.getColumnValue(2) + "." + result_scan.getColumnValue(3);
    return_value += "\n";
    return_value += "FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = '|' COMPRESSION = GZIP)";
    return_value += "\n";
    return_value += "OVERWRITE = TRUE;";
    return_value += "\n";
  }
  return return_value;
$$;
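To use it, call the procedure with your database, schema, and stage names (the values below are placeholders), then run the COPY INTO statements it returns in a worksheet:
-- Hypothetical names; unquoted schema names are stored in uppercase in INFORMATION_SCHEMA
CALL generate_copy('ANALYTICS_DB', 'PUBLIC', 'my_s3_stage');

-- Each returned command looks roughly like:
-- COPY INTO @my_s3_stage/orders/
-- FROM ANALYTICS_DB.PUBLIC.orders
-- FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = '|' COMPRESSION = GZIP)
-- OVERWRITE = TRUE;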
Snowflake automatically parallelizes COPY INTO by scanning micro-partitions in parallel, so each unload command scales efficiently even when executed as a single statement.
Step 2: Import data from S3 to Redshift
Before loading the data, ensure the schema is compatible with Redshift. Pay special attention to complex data types such as VARIANT, ARRAY, and GEOGRAPHY.
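For example, Snowflake’s VARIANT type has no direct Redshift equivalent; a common approach is to map it to Redshift’s SUPER type or flatten it into typed columns. The table below is purely illustrative:
-- Snowflake (source)
CREATE TABLE events (event_id NUMBER, payload VARIANT);

-- Redshift (target): SUPER stores semi-structured values
CREATE TABLE events (event_id BIGINT, payload SUPER);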
Now you can load your Snowflake data into Redshift.
The following query generates COPY command templates that include placeholders for your IAM role and S3 path.
CREATE OR REPLACE VIEW v_generate_copy
AS
SELECT
schemaname ,
tablename ,
seq ,
ddl
FROM
(
SELECT
table_id ,
schemaname ,
tablename ,
seq ,
ddl
FROM
(
--COPY TABLE
SELECT
c.oid::bigint as table_id ,
n.nspname AS schemaname ,
c.relname AS tablename ,
0 AS seq ,
'COPY ' || n.nspname || '.' || c.relname || ' FROM ' AS ddl
FROM
pg_namespace AS n
INNER JOIN
pg_class AS c
ON
n.oid = c.relnamespace
WHERE
c.relkind = 'r'
--COPY TABLE continued
UNION
SELECT
c.oid::bigint as table_id ,
n.nspname AS schemaname ,
c.relname AS tablename ,
2 AS seq ,
'''${2}' || c.relname || '/'' iam_role ''${1}'' gzip delimiter ''|'' EMPTYASNULL REGION ''${3}''' AS ddl
FROM
pg_namespace AS n
INNER JOIN
pg_class AS c
ON
n.oid = c.relnamespace
WHERE
c.relkind = 'r'
--END SEMICOLON
UNION
SELECT
c.oid::bigint as table_id ,
n.nspname AS schemaname,
c.relname AS tablename ,
600000005 AS seq ,
';' AS ddl
FROM
pg_namespace AS n
INNER JOIN
pg_class AS c
ON
n.oid = c.relnamespace
WHERE
c.relkind = 'r'
)
ORDER BY
table_id ,
schemaname,
tablename ,
seq
);
SELECT ddl
FROM v_generate_copy
WHERE schemaname = 'your_schema';
This queries Redshift’s system catalogs to get all tables in your database, then constructs COPY commands for each table.
The COPY commands generated by this view include placeholders that you need to replace: ${1} for your IAM role ARN, ${2} for your S3 bucket path (with a trailing slash), and ${3} for your AWS Region. If you have configured a default IAM role on the cluster, you can use IAM_ROLE DEFAULT instead of the full role ARN.
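For reference, a fully substituted command for a hypothetical public.orders table might look like this; the role ARN and Region below are placeholders:
COPY public.orders FROM
's3://snowflake-redshift-migration/orders/'
iam_role 'arn:aws:iam::123456789012:role/redshift-s3-access-role'
gzip delimiter '|' EMPTYASNULL REGION 'us-east-1';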
After loading data, verify the migration by comparing row counts between Snowflake and Redshift.
In Amazon Redshift, go to the console, launch the Query Editor v2, and run the following query:
SELECT
tab.table_schema,
tab.table_name,
NVL(tinf.tbl_rows, 0) as tbl_rows,
NVL(tinf.size, 0) as size
FROM svv_tables tab
LEFT JOIN svv_table_info tinf
ON tab.table_schema = tinf.schema
AND tab.table_name = tinf."table"
WHERE tab.table_type = 'BASE TABLE'
AND tab.table_schema IN ('your_schema')
ORDER BY tbl_rows;
Run the following query to compare and validate the data in Snowflake.
USE DATABASE your_database;
SELECT
TABLE_CATALOG,
TABLE_SCHEMA,
TABLE_NAME,
ROW_COUNT,
BYTES AS SIZE,
COMMENT
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'YOUR_SCHEMA'
ORDER BY ROW_COUNT;
Once you have confirmed that the row counts match, your data has been successfully migrated from Snowflake to Redshift.
Method 2: Automated migration using AWS Glue and AWS SCT
This method automates the entire migration process using AWS Schema Conversion Tool (AWS SCT) for schema conversion and AWS Glue workflows for data transfer.
The migration is driven by a CloudFormation template that provisions all required resources, including S3 buckets, IAM roles, AWS Glue jobs, and supporting Redshift resources.
Prerequisites
- AWS account with admin permissions and AWS SCT installed.
- Snowflake account with database credentials.
- JDBC drivers for Snowflake and Redshift.
Step 1: Convert schema using AWS SCT
AWS SCT automatically converts Snowflake schemas, tables, views, and stored procedures to Redshift format.
- Launch the AWS SCT application.
- Click File and go to New project wizard.
- Enter your project name and location.
- For Source engine, select Snowflake and click Next.
- Enter your Snowflake connection details.
- Click Test Connection and Next.
- Expand SNOWFLAKE_SAMPLE_DATA and Schemas.
- Select the schemas you want to migrate and click Next.
- Review the assessment report generated by AWS SCT, which identifies objects that can be converted automatically and those that require manual intervention.
- Click Next.
Next, connect AWS SCT to your target Redshift cluster.
- Enter your Redshift connection details like server name, database name, etc.
- Deselect Use AWS Glue for now, as you’ll configure it separately in Step 2 for data migration.
- Click Test Connection and then Finish.
AWS SCT converts the schema and displays it in the right pane. Objects are marked with indicators:
- Check mark: Successfully converted.
- Red warning sign: Converted with automatic adjustments.
Review the converted schema, optimize it if needed, and select Apply to database. Your Redshift cluster now has the converted schema ready for data loading.
Step 2: Migrate data using AWS Glue
AWS provides a ready-to-use CloudFormation template, which runs lightweight Python scripts to orchestrate Snowflake COPY INTO and Redshift COPY commands through Amazon S3.
- Sign in to the AWS Management Console and select your target Region.
- Upload a new stack using the Snowflake-to-Redshift migration template available in the AWS documentation, and choose Launch Stack.
- Click Next.
- Enter a stack name.
- In the Parameters section:
- Provide Snowflake configuration details for account, warehouse, database, and schema.
- Provide Redshift configuration details like database name, node type, number of nodes, port, and password.
- Provide values for S3 staging and network settings with bucket prefix, VPC, and subnet CIDR block.
- Click Next.
- Review the stack details and click Create stack.
After the stack is created, Snowflake must be granted access to the S3 bucket provisioned for staging data.
Create an IAM role and an external Amazon S3 stage to allow Snowflake to unload data into the S3 bucket created by the CloudFormation stack. Use the TargetDataS3Bucket value from the stack’s Outputs tab as the S3 URL, and name the stage unload_to_s3 if you’re using the default scripts.
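Assuming you create the storage integration the same way as in Method 1 (the integration name below is a placeholder), the stage definition might look like the following sketch, with the URL taken from the stack’s TargetDataS3Bucket output:
CREATE STAGE unload_to_s3
  STORAGE_INTEGRATION = s3_unload_int
  URL = 's3://your-target-data-bucket/';  -- replace with the TargetDataS3Bucket value from the stack Outputs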
Once the infrastructure and schema are in place, you can run the data migration.
- Click on Workflows in the AWS Glue console.
- Select the workflow created by the stack.
- Select Run from the Actions menu.
After the workflow completes, verify the data as you did at the end of the manual method, using the Amazon Redshift Query Editor v2.
Want a no-code way to migrate data to cloud data warehouses like Snowflake and Redshift? Sign up for Hevo and get started in minutes!
Method 3: No-code ETL or ELT tool like Hevo
ETL and ELT tools simplify Snowflake to Redshift migrations by removing the need to build and maintain custom pipelines.
Some platforms support direct Snowflake-to-Redshift migration, while others, like Hevo Data, connect to your upstream data sources and stream data directly into Redshift, which replaces your Snowflake pipelines entirely.
Hevo is an ideal solution for complex cloud data warehouse migrations. With 150+ fully managed integrations, it enables real-time data syncing from databases, SaaS tools, and event streams to destinations like Snowflake and Amazon Redshift.
Its fault-tolerant architecture ensures minimal data loss, auto-retries, and guaranteed delivery, even during high-volume migrations. Hevo also offers SQL and Python support for code flexibility. Its auto-scalable pipelines help you expand easily as your data grows.
Starting at just $239/month, it’s a reliable solution for scalable and high-performing pipelines regardless of your technical expertise or business size.
Try Hevo’s 14-day free trial now!
Next Steps for Snowflake and Redshift Data Migration
Now you know how to migrate from Snowflake to Redshift both manually and with AWS’s automated tooling.
While effective, both these approaches require close attention to file formats, IAM roles, performance tuning, and validation. Maintaining these pipelines can be resource-intensive and error-prone for teams without deep AWS or Snowflake expertise.
Hence, many businesses opt for no-code options like Hevo. Hevo lets you move data from your operational databases and SaaS tools to destinations such as Snowflake or Redshift in minutes.
With automatic schema mapping, enterprise-grade security, error handling, and data enrichment, your data is analysis-ready with far less operational overhead.
Want to explore possibilities with Hevo? Book a free demo with an expert today!
FAQs
Q1. Can Snowflake connect to Redshift?
No, Snowflake cannot directly connect to Redshift. To transfer data between Snowflake and Redshift, you would typically need an intermediary like S3 or an ETL tool like Hevo.
Q2. How to migrate data from Snowflake to AWS?
You can migrate data from Snowflake to Redshift using three methods.
1. Manually staging data in Amazon S3 with COPY INTO and COPY commands.
2. Automate migration with AWS Glue and CloudFormation.
3. Use a no-code ETL or ELT platform like Hevo to stream data with minimal setup.
Q3. Which is faster, Snowflake or Redshift?
Performance depends on workload type, data size, and configuration. Redshift excels at sustained and large-scale analytical queries with AWS integrations, while Snowflake performs well with variable concurrency and semi-structured data.