Integrating PostgreSQL on Google Cloud SQL with Amazon Redshift is an essential step in unlocking the power of data for modern businesses.

  1. By centralizing data in Redshift, a fully managed data warehousing service that provides high-performance analytical capabilities, you can expedite the analysis of voluminous datasets.
  2. This integration not only enhances data analysis capabilities but also streamlines data management, offering access to comprehensive, up-to-date datasets.
  3. It empowers you to extract actionable insights to improve your sales strategies and customer experiences.

Methods to Connect PostgreSQL on Google Cloud SQL to Redshift

Method 1: Manually Load Data from PostgreSQL on Google Cloud SQL to Redshift

Step 1: Export Data into CSV Files

  • Go to the Google Cloud Console and sign in to your Google Cloud account.
  • From the navigation menu, select SQL.
  • Click on the name of your Google Cloud SQL PostgreSQL instance from the list.
  • On the PostgreSQL instance overview page, click Export.
  • Configure the Export options:
  1. Select the Database that you want to export.
  2. Choose CSV as the export format.
  3. Specify the tables that you want to export. You can export all of them or select specific ones.
  4. For the Export location, choose a Cloud Storage bucket as the destination for your CSV files.
  • To initiate the export process, click on the Export button.
  • Google Cloud SQL will start exporting the data from your PostgreSQL database into CSV files and save them in the specified Cloud Storage bucket.
  • The CSV files are now temporarily stored in the Google Cloud Storage bucket before being transferred to an Amazon S3 bucket. The S3 bucket acts as a staging area where you can store and organize your data before loading it into the Redshift table.
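
If you prefer the command line, the same export can also be triggered with the gcloud CLI. The following is a minimal sketch; the instance name, database, bucket path, and query are placeholder values to replace with your own.

# Export the results of a query from a Cloud SQL instance to a CSV file in Cloud Storage
gcloud sql export csv my-postgres-instance gs://gcs_bucket_name/exports/orders.csv \
--database=my_database \
--query="SELECT * FROM orders"

Note that the Cloud SQL instance's service account needs write access to the target Cloud Storage bucket for the export to succeed.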

Step 2: Upload CSV Files to Amazon S3

  • Under the Cloud Storage section, open the Google Cloud Storage bucket where the CSV files are placed.
  • Select the checkboxes next to the CSV files and click the Download button. The files will be downloaded to your local machine. (Alternatively, you can script this step with the gsutil CLI, as sketched after this list.)
  • Once the files are downloaded, you can upload CSV files from your local machine to your S3 bucket using AWS CLI or AWS console.
  • Using AWS CLI:
  1. Configure the AWS CLI and use the aws s3 cp command to upload files from the local machine to the Amazon S3 bucket.
aws s3 cp /path_to_local_folder/ s3://s3_bucket_name/folder_name/ --recursive

Replace path_to_local_folder with the path to the folder on your local machine that you want to copy and s3_bucket_name/folder_name with your S3 bucket and folder name. The --recursive flag is required when copying a folder rather than a single file.

  • Using AWS Console:
  1. Select the S3 bucket and open the folder where you want to upload the CSV files.
  2. Click on the Upload button and choose Add Files.
  3. Browse your local machine to select the CSV files you want to upload and configure the necessary settings.
  4. Click on Start Upload.
  • Once the transfer is complete, verify that the CSV files were copied correctly to your S3 bucket using the AWS Management Console.
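
If you would rather script the download step, the gsutil CLI can pull the exported files to your local machine before you push them to S3. A minimal sketch, with placeholder bucket and folder names:

# Download all exported CSV files from the Cloud Storage bucket to a local folder
gsutil cp gs://gcs_bucket_name/exports/*.csv /path_to_local_folder/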

Step 3: Load Data into Redshift

  • In your Amazon Redshift cluster, create a new table or use an existing one. Ensure that the column names, data types, and constraints align with the source PostgreSQL tables (a minimal sketch follows below).
  • In Amazon Redshift, the COPY command is used to load data into Redshift tables from various sources, including Amazon S3, Amazon DynamoDB, Amazon EMR, and remote hosts over SSH. The command supports various data formats, including JSON, CSV, Avro, Parquet, and more.
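
As an illustration, the following is a minimal sketch of a Redshift table definition. The table and column names here are hypothetical; yours should mirror the actual PostgreSQL schema.

CREATE TABLE orders (
    order_id BIGINT NOT NULL,
    customer_id BIGINT,
    order_date DATE,
    total_amount DECIMAL(12,2),
    PRIMARY KEY (order_id) -- note: Redshift treats primary keys as informational and does not enforce them
);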

Use the COPY command to load data from the S3 bucket to Amazon Redshift.

COPY redshift_table_name
FROM 's3://s3-bucket_path/data-prefix/' 
CREDENTIALS 'aws_access_key_id=ACCESS_KEY_ID;aws_secret_access_key=SECRET_ACCESS_KEY' 
CSV;

Replace redshift_table_name, the S3 bucket path, and the AWS credentials with your actual values. The CSV keyword indicates that the source data is in CSV format.
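
Note that passing access keys inline with CREDENTIALS is an older authentication style; AWS recommends attaching an IAM role to the cluster instead. A minimal sketch of the same load using a placeholder role ARN:

COPY redshift_table_name
FROM 's3://s3-bucket_path/data-prefix/'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
CSV
IGNOREHEADER 1; -- skips a header row; drop this line if your CSV files have no header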

These steps complete the process of transferring data from PostgreSQL on Google Cloud SQL to Redshift using CSV files, the Google Cloud Console, Amazon S3, and the COPY command.

The above approach offers several advantages:

  • Ease of Use: The Google Cloud Console provides a user-friendly interface to export data in CSV format from Cloud SQL. It also eliminates the need for coding, simplifying the export process for both technical and non-technical users.
  • Infrequent Backups: The manual approach using CSV files is best suited for creating backups of small to moderate datasets, particularly in scenarios where you don’t need continuous data replication.

Limitations of using CSV Files, Google Cloud Console, and AWS S3 for PostgreSQL on Google Cloud SQL to Redshift Data Migration

  • Manual Intervention: Exporting Google Cloud SQL PostgreSQL data into CSV files and then transferring them via S3 to Redshift requires manual intervention at several stages. This can be time-consuming, especially for larger datasets, and requires continuous monitoring and management at each step of the data transfer process.
  • File Size Limitations: Google Cloud Storage supports a maximum single-object size of 5 TB. If you try to upload files larger than this limit, the transfer for those files will fail. In such situations, you need to divide larger files into smaller segments to ensure successful uploads.
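
For example, an oversized CSV file can be divided with the standard split utility before upload; the line count and file names below are illustrative.

# Split a large CSV into chunks of 1,000,000 lines each, named orders_part_aa, orders_part_ab, ...
split -l 1000000 large_export.csv orders_part_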

Method 2: Using a No-Code Tool like Hevo Data to Build PostgreSQL on Google Cloud SQL to Redshift ETL Pipeline

  • Using a no-code tool enables quick and accurate data integration from various sources, reducing the time required to set up data pipelines.
  • In addition, such tools avoid the limitations of the manual method described above, significantly reducing manual tasks and effort. With streamlined, no-code data integration, you can make informed decisions quickly.

Step 1: Specify Google Cloud PostgreSQL Connection Settings 

Step 2: Configure Redshift as a Destination 

Some of the notable features of Hevo Data:

  1. Intuitive Interface
  2. Drag-and-Drop Transformation
  3. Automated Schema Mapper
  4. Monitoring and Alerts

What Can You Achieve with PostgreSQL on Google Cloud SQL to Redshift Integration?

  • What are the overall sales trends over time, and which products or services generate the highest revenue?
  • Which marketing campaigns correlate with increased sales?
  • Is there any correlation between customer demographics and purchasing habits?
  • Does customer experience or feedback influence buying patterns?
  • What are the Key Performance Indicators (KPIs) for each team, and how have they improved over time?
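
Once the data is in Redshift, questions like these translate into straightforward SQL. Below is a minimal sketch of a monthly revenue-trend query, assuming a hypothetical sales table with order_date, product_category, and revenue columns.

-- Monthly revenue by product category (hypothetical table and columns)
SELECT DATE_TRUNC('month', order_date) AS month,
       product_category,
       SUM(revenue) AS total_revenue
FROM sales
GROUP BY 1, 2
ORDER BY 1, 2;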

Conclusion

  1. This integration can strengthen your business’s data management and analytics capabilities.
  2. Although the manual approach offers certain benefits, such as simplicity for infrequent backups and security, it does come with limitations. One of these is the lack of real-time synchronization due to manual intervention.
