Quick Takeaway

There are three simple ways to move data from PostgreSQL to Redshift. The best method depends on your team’s needs and how often you plan to run the migration:

  1. Hevo Data (No-Code)

Best for teams looking for an easy, automated, real-time solution. No coding needed. Just connect your source and destination, and Hevo takes care of the rest.

  2. AWS DMS

Ideal if you’re already on AWS and want a managed solution that handles both full and real-time data loads with less manual work.

  3. Manual using CSV and the COPY command

Great for one-time or occasional migrations when you need control and don’t mind doing it manually. Be ready to invest time and effort.

Are you tired of locally storing and managing files on your Postgres server? You can move your precious data to a powerful destination such as Amazon Redshift within minutes.

Data engineers are often tasked with moving data between storage systems such as applications, databases, data warehouses, and data lakes. This can be exhausting and cumbersome. Follow this simple step-by-step approach to transfer your data from PostgreSQL to Redshift and avoid problems during your data migration journey.

What is PostgreSQL?

PostgreSQL Thumbnail

PostgreSQL is a powerful open-source relational database known for its flexibility, reliability, and strong support for modern data types. It’s widely used across industries and works seamlessly on Linux, Windows, and macOS.

Key Features of PostgreSQL

  • Security-First Design: Includes advanced access controls and supports major security standards like LDAP and GSSAPI.
  • Flexible & Developer-Friendly: Supports complex data types and offers full control for custom database setups.
  • Open-Source & Cross-Platform: Free to use and runs smoothly on all major operating systems.
  • Trusted by Top Companies: Used by Apple, Spotify, Facebook, and more for everyday data operations.
  • Reliable & Fault-Tolerant: Features like write-ahead logging ensure data integrity and high availability.

What is Amazon Redshift?

PostgreSQL to Redshift: Redshift logo

Amazon Redshift is a fully managed cloud data warehouse built for fast and scalable analytics. It uses powerful parallel processing to handle massive data sets and complex queries with ease.

Key Features of Amazon Redshift

  • Managed & Cost-Efficient: Redshift handles setup, scaling, and maintenance automatically, with flexible pricing so you only pay for what you use.
  • Highly Scalable: Easily scales from gigabytes to petabytes, keeping performance fast as your data grows.
  • Optimized for Analytics: Built for OLAP workloads, Redshift runs complex queries quickly on large datasets.
  • AWS Ecosystem Integration: Seamlessly connects with services like S3, Glue, and EC2 for streamlined data workflows.
  • Serverless, Sharing & Security: Supports serverless queries, secure data sharing, and includes built-in encryption, compression, and high availability.

Why Replicate Data from Postgres to Redshift?

  • Analytics: Postgres is a powerful and flexible database, but it’s probably not the best choice for analyzing large volumes of data quickly. Redshift is a columnar database that supports massive analytics workloads.
  • Scalability: Redshift can quickly scale without any performance problems, whereas Postgres may not efficiently handle massive datasets.
  • OLTP and OLAP: Redshift is designed for Online Analytical Processing (OLAP), making it ideal for complex queries and data analysis, whereas Postgres is an Online Transactional Processing (OLTP) database optimized for transactional data and real-time operations.

Methods to Connect or Move PostgreSQL to Redshift

Skip the Manual Hassle – Move Postgres Data to Redshift in Minutes with Hevo

Still exporting CSVs, uploading to S3, and writing custom scripts? Hevo’s no-code platform automates your Postgres to Redshift migration: no manual steps, no data delays.

  • 150+ Integrations: Bring in data from Postgres and beyond – quickly and reliably.
  • Analytics-Ready Data: Hevo transforms your data to fit Redshift’s schema perfectly.
  • Real-Time Pipelines: Keep your Redshift updated without ever writing a script.

Join 2,000+ customers that trust Hevo for hassle-free data flows. See how Meru saved 70% on costs and made insights 4x faster with Hevo.

Move PostgreSQL Data for Free with Hevo

Method 1: Using Hevo’s Automated Data Pipelines to Connect PostgreSQL to Redshift

1. Prerequisites

Make sure you have:

  • IP address/hostname and credentials for your PostgreSQL server (version 9.5 or above).
  • An active Amazon Redshift account with database credentials.
  • A registered Hevo Data account with the appropriate admin rights.
  • The necessary PostgreSQL privileges (SELECT, USAGE, CONNECT) granted to the user Hevo will connect with.
  • For real-time sync, log-based incremental replication (WAL/logical replication) enabled in PostgreSQL.

2. Prepare PostgreSQL for Hevo Integration

  • Enable log-based incremental replication by updating your PostgreSQL config files: set wal_level to logical, configure max_replication_slots, and set the right permissions in pg_hba.conf (see the sketch after the GRANT statements below).
  • Whitelist Hevo’s IP addresses in the PostgreSQL server to allow Hevo connections.
  • (Optional) Create a dedicated database user for Hevo, then grant privileges with:

GRANT CONNECT ON DATABASE <database_name> TO <user>;
GRANT USAGE ON SCHEMA <schema_name> TO <user>;
GRANT SELECT ON ALL TABLES IN SCHEMA <schema_name> TO <user>;
ALTER DEFAULT PRIVILEGES IN SCHEMA <schema_name> GRANT SELECT ON TABLES TO <user>;
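
For reference, here is a minimal sketch of the replication settings mentioned above. The values are illustrative, and the user name and IP range are placeholders that should be replaced with your dedicated Hevo user and Hevo's published IP addresses for your region:

# postgresql.conf
wal_level = logical
max_replication_slots = 4    # keep at least one slot free for Hevo
max_wal_senders = 4

# pg_hba.conf: allow the Hevo user to connect from Hevo's IP addresses
host    <database_name>    <user>    <hevo_ip_address>/32    md5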

3. Set Up PostgreSQL as a Source in Hevo

  • Log in to Hevo, go to Pipelines, and click Create Pipeline.
  • Select PostgreSQL as the source and fill in all required connection details (host, port, user, password, database, schema).
Set Up PostgreSQL as a Source in Hevo
  • Choose your preferred ingestion mode (logical replication recommended for most cases) and test the connection.
Choose your preferred ingestion mode
  • Select the tables or objects to replicate.

4. Prepare Amazon Redshift for Hevo Integration

  • In the AWS Redshift console, go to your cluster’s VPC Security Group settings.
Prepare Amazon Redshift for Hevo Integration
  • Add inbound rules for each required Hevo IP:
  • Choose Type: Redshift
Add inbound rules for each required Hevo IP
  • Source: Custom, and enter the required Hevo IP addresses for your region
  • (Optional) Create a new Redshift user for Hevo (a CREATE USER sketch follows the statements below), and then grant the required privileges (run as a superuser):

GRANT CREATE ON DATABASE <database_name> TO hevo;
GRANT CREATE, USAGE ON SCHEMA <schema_name> TO hevo;
GRANT SELECT ON ALL TABLES IN SCHEMA <schema_name> TO hevo;
Add inbound rules for each required Hevo IP
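
If the hevo user referenced in these GRANT statements does not exist yet, it can be created first. This is a minimal sketch, and the password is a placeholder:

CREATE USER hevo PASSWORD '<strong_password>';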

5. Configure Amazon Redshift as Destination and Start Migration

  • In Hevo, click Destinations and then + Create Destination.
  • Select Amazon Redshift and enter your cluster details:
  • Connection string (or hostname, port, database, user, password, schema)
Configure Amazon Redshift as Destination
  • Configure optional settings such as table name prefixes or schema mapping.
  • Test the connection to confirm.
  • Finalize your pipeline settings in Hevo and activate it.

Data will now be automatically and securely replicated from PostgreSQL to Amazon Redshift, with Hevo handling schema mapping, transformations, and real-time sync.

For an AWS DMS-based approach, check out this blog on migrating data from PostgreSQL to Amazon Redshift via AWS DMS.

Method 2: Connecting Postgres to Redshift Manually

Prerequisites:

  • A running PostgreSQL instance with psql access and the data you want to export.
  • An AWS account with permissions to use Amazon S3 and Amazon Redshift, plus an existing Redshift namespace (or cluster).

Step 1: Configure PostgreSQL to export data as CSV

Step 1. a) Go to the directory where PostgreSQL is installed.

Postgres Directory

Step 1. b) Open Command Prompt from that file location.

Command Prompt

Step 1. c) Now, we need to enter PostgreSQL. To do so, use the command:

psql -U postgres
Enter into Postgres

Step 1. d) To see the list of databases, you can use the command:

\l
List of Databases

I have already created a database named productsdb here. We will be exporting tables from this database.

This is the table I will be exporting.

Products Table

Step 1. e) To export as .csv, use the following command:

\copy products TO '<your_file_location><your_file_name>.csv' DELIMITER ',' CSV HEADER;

Note: This will create a new file at the mentioned location.
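
For example, if PostgreSQL runs on Windows and you want the file on your desktop, the command might look like this (the path and file name are illustrative):

\copy products TO 'C:/Users/<your_user>/Desktop/products.csv' DELIMITER ',' CSV HEADER;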

Go to your file location to see the saved CSV file.

products csv file

Step 2: Load CSV to S3 Bucket

Step 2. a) Log Into your AWS Console and select S3.

AWS Console

Step 2. b) Now, we need to create a new bucket and upload our local CSV file to it.

You can click Create Bucket to create a new bucket.

Bucket Creation AWS

Step 2. c) Fill in the bucket name and required details.


Note: Uncheck Block Public Access

Block Public Access setting

Step 2. d) To upload your CSV file, go to the bucket you created.

General Purpose Buckets

Click on upload to upload the file to this bucket.

Upload file to bucket

You can now see the file you uploaded inside your bucket.

Uploaded File
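
If you prefer the command line, the same upload can be done with the AWS CLI; the bucket name below is a placeholder:

aws s3 cp products.csv s3://<your-bucket-name>/products.csv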

Step 3: Move Data from S3 to Redshift

Step 3. a) Go to your AWS Console and select Amazon Redshift.

AWS Console

Step 3. b) For Redshift to load data from S3, it needs permission to read data from S3. To assign this permission, we can create an IAM role: go to Security and Encryption.

Security and Encryption Tab

Click on Manage IAM roles followed by Create IAM role.

Manage IAM Roles

Note: I will select All S3 buckets. You can also select specific buckets and grant access only to them.

Click Create.

Create IAM role

Step 3. c) Go back to your Namespace and click on Query Data.

Namespace Configuration Window

Step 3. d) Click on Load Data to load data in your Namespace.

Query data window

Click on Browse S3 and select the required Bucket.

Load data configurations page
Browse for bucket window

Note: I don’t have a table created, so I will click Create a new table, and Redshift will create it automatically.

Load Data Configuration

Note: Select the IAM role you just created and click on Create.

Step 3. e) Click on Load Data.

Load Data

A query will run that loads your data from S3 into Redshift.

Loading data process
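
Behind the scenes, this wizard runs a Redshift COPY command. If you prefer to run it yourself in the query editor, a minimal sketch looks like this (the bucket name and IAM role ARN are placeholders):

COPY products
FROM 's3://<your-bucket-name>/products.csv'
IAM_ROLE '<your-redshift-iam-role-arn>'
CSV
IGNOREHEADER 1;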

Step 3. f) Run a Select Query to view your table.
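
For example, a quick check in the query editor might look like this (using the table created above):

SELECT * FROM products LIMIT 10;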

Limitations of Using Custom ETL Scripts

The following challenges affect your ability to keep consistent and accurate data available in Redshift in near real time:

  • The custom ETL script method works well only if you have to move data once, or occasionally in batches, from PostgreSQL to Redshift.
  • The custom ETL script method also falls short when you have to move data in near real time from PostgreSQL to Redshift.
  • A more efficient approach is to move only the incremental data changed between two syncs from Postgres to Redshift instead of a full load; this method is called Change Data Capture (CDC).
  • When you write custom SQL scripts to extract a subset of data, those scripts often break as the source schema keeps changing or evolving.

Method 3: Using Hevo Data to connect PostgreSQL to Redshift

Prerequisites:

  • A registered Hevo Data account.
  • PostgreSQL connection details (host, port, user, password, database) with the privileges described in Method 1.
  • An active Amazon Redshift destination with database credentials, reachable from Hevo.

Step 1: Create a new Pipeline

Pipelines Overview

Step 2: Configure the Source details

Select Source Details

Step 2. a) Select the objects that you want to replicate.

Select Tables

Step 3: Configure the Destination details.

Select Destination

Step 3. a) Give your destination table a prefix name.

Note: Keep Schema mapping turned on. This feature by Hevo will automatically map your source table schema to your destination table.

Final Settings Hevo

Step 4: Your Pipeline is created, and your data will be replicated from PostgreSQL to Amazon Redshift.

Pipeline Overview

Amazon Redshift vs PostgreSQL: What’s the Difference?

Before moving your data, it’s important to understand how Amazon Redshift and PostgreSQL differ in terms of performance, scalability, and use cases. This comparison will help you choose the right tool for your analytics and storage needs.

For a detailed breakdown, check out this blog on Amazon Redshift vs PostgreSQL Comparison.

Additional Resources for PostgreSQL Integrations and Migrations

Conclusion

This article details multiple methods for migrating data from PostgreSQL to Redshift, providing comprehensive steps for each approach.

The manual ETL process described in the second method comes with various challenges and limitations. However, for those needing real-time data replication and a fully automated solution, Hevo stands out as the optimal choice. Let Hevo simplify your PostgreSQL to Redshift migration with a fully automated, no-code platform. Hevo lets you set up real-time data pipelines in just a few clicks. Start your 14-day free trial with Hevo today.

FAQ on PostgreSQL to Redshift

1. How to transfer data from Postgres to Redshift?

The following are the ways you can connect Postgres to Redshift:
1. Manually, with the help of the command line and an S3 bucket.
2. Using automated Data Integration Platforms like Hevo.

2. Is Redshift compatible with PostgreSQL?

The good news is that Redshift is compatible with PostgreSQL. The slightly bad news is that the two have several significant differences, which will affect how you design and develop your data warehouse and applications. For example, some features available in PostgreSQL 9.0 are not supported in Amazon Redshift.

3. Is Redshift faster than PostgreSQL?

Yes. For OLAP workloads, Redshift runs complex analytical queries and retrieves large volumes of data faster than PostgreSQL.

4. How to connect to Redshift with psql?

You can connect to Redshift with psql in the following steps
1. First, install psql on your machine.
2. Next, use this command to connect to Redshift:
psql -h your-redshift-cluster-endpoint -p 5439 -U your-username -d your-database
3. It will prompt for the password. Enter your password, and you will be connected to Redshift.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite firsthand. Check out our transparent pricing to make an informed decision!

Share your understanding of PostgreSQL to Redshift migration in the comments section below!

Aashish
Freelance Technical Content Writer, Hevo Data

Aashish loves learning about data science and helping businesses solve problems through his content on data, software architecture, and integration.