Is your MySQL server getting too slow for analytical queries? Or do you want to join data from another Database while running queries? Whatever your use case, moving data from MySQL to Redshift for analytics is a great decision.

This post covers the detailed steps you need to follow to migrate data from MySQL to Redshift. You will also get a brief overview of MySQL and Amazon Redshift, and explore the challenges involved in connecting MySQL to Redshift using custom ETL scripts. Let's get started.

Prerequisites

You will have a much easier time understanding the ways of connecting MySQL to Redshift if you have the following in place:

  • An active MySQL instance.
  • An active Amazon Redshift account.
  • Working knowledge of Databases and Data Warehouses.
  • Working knowledge of Database Management Systems.
  • Working knowledge of CSV files.

What is MySQL?


MySQL is one of the world's best-known open-source Relational Database Management Systems. With strong security, reliability, and ease of use, MySQL has emerged as a leading choice for OLTP systems. MySQL allows multiple users to access your Databases and run queries using SQL (Structured Query Language). MySQL is frequently used in Linux distributions in conjunction with the Apache web server and PHP.

Today, there are a plethora of MySQL versions on the market, the majority of which have identical functionality and syntax. MySQL runs on a variety of operating systems, including Linux, macOS, Windows, FreeBSD, and Solaris.

To know more about MySQL, visit this link.

Solve your data replication problems with Hevo’s reliable, no-code, automated pipelines with 150+ connectors.
Get your free trial right away!

What is Amazon Redshift?


Amazon Redshift (based on PostgreSQL 8.0.2) is a columnar Database that supports a scalable architecture and multi-node processing. Due to its ability to handle and process a huge influx of data, along with its easy setup options, it is emerging as the engineer's favorite choice for OLAP systems.

The Redshift architecture is composed of several computing resources known as Nodes, which are organized into Clusters. The key benefits of Redshift are its great scalability and quick query processing, which have made it one of the most popular Data Warehouses today. Moreover, you can use the AWS Console or Cluster APIs to add Nodes in just a few clicks and smoothly scale up your storage and processing capacity as your performance requirements grow.

To know more about Amazon Redshift, visit this link.

Why Do We Need to Move Data from MySQL to Redshift?

Every business needs to analyze its data to get deeper insights and make smarter business decisions. However, performing Data Analytics on huge volumes of historical and real-time data is not achievable using traditional Databases such as MySQL. MySQL cannot provide the high computation power that quick Data Analysis requires. Companies need Analytical Data Warehouses to boost their productivity and process all of their data at a faster, more efficient rate.

Amazon Redshift is a fully managed Cloud Data Warehouse that can provide the vast computing power needed to maintain performance and quickly retrieve data and results. Moving data from MySQL to Redshift allows companies to run Data Analytics operations efficiently, and Redshift's columnar storage increases query processing speed.

Methods to Set up MySQL to Redshift Integration

Method 1: Manually Set up MySQL to Redshift Integration

MySQL provides the mysqldump utility, which allows you to extract data programmatically as SQL files. This data needs to be converted into CSV format because the SQL format is not supported by Redshift. Next, you would need to prepare this data, load it into Amazon S3, and then load it into Redshift. This requires you to invest in dev resources who understand both MySQL and Redshift infrastructures and can set up the data migration from scratch. It is also a time-consuming approach.

Method 2: Using Hevo Data to Set up MySQL to Redshift Integration

Hevo Data provides a hassle-free solution and helps you directly transfer data from MySQL to Redshift and numerous other Databases/Data Warehouses or destinations of your choice instantly, without having to write any code. Hevo comes with a graphical interface that allows you to configure your MySQL source and load data in real-time. Hevo is fully managed and completely automates the process of not only loading data from your desired source but also enriching the data and transforming it into an analysis-ready form, all without a single line of code. Hevo's pre-built integrations with 100+ data sources (including 40+ free data sources) will take full charge of the data transfer process, allowing you to focus on key business activities.

Get Started with Hevo for Free


Method 1: Manually Set up MySQL to Redshift Integration

Now that you have an idea about MySQL and Amazon Redshift, it's time to move on to the method for connecting MySQL to Redshift using custom ETL scripts. You can follow the steps mentioned below.

Step 1. Dump the Data into Files

The most efficient way of loading data into Amazon Redshift is through the COPY command, which loads CSV/JSON files into Amazon Redshift. So, the first step is to bring the data in your MySQL database into CSV/JSON files.

There are essentially two ways of achieving this:

1) Using the mysqldump command.
mysqldump -h mysql_host -u user database_name table_name --result-file table_name_data.sql

The above command will dump data from the table table_name to the file table_name_data.sql. However, the file will not be in the CSV/JSON format required for loading into Amazon Redshift. This is how a typical row may look in the output file:

INSERT INTO `users` (`id`, `first_name`, `last_name`, `gender`) VALUES (3562, 'Kelly', 'Johnson', 'F'), (3563, 'Tommy', 'King', 'M');

The above rows will need to be converted to the following format:

"3562","Kelly","Johnson", "F"
"3563","Tommy","King","M"
2) Query the data into a file.
mysql -B -u user database_name -h mysql_host \
-e "SELECT * FROM table_name;" | \
sed "s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" \
> table_name_data.csv

You will have to do this for all tables:

for tb in $(mysql -u user -ppassword database_name -sN -e "SHOW TABLES;"); do
     echo .....;
done
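
A filled-in version of that loop might look like the following. This is a minimal sketch that repeats a slightly simplified form of the SELECT-plus-sed recipe shown above for each table; the user, password, and host values are placeholders, and GNU sed is assumed:

for tb in $(mysql -u user -ppassword database_name -sN -e "SHOW TABLES;"); do
    # export each table as CSV: tabs become ",", each line is wrapped in quotes
    mysql -B -u user -ppassword database_name -h mysql_host \
    -e "SELECT * FROM $tb;" | \
    sed "s/\t/\",\"/g;s/^/\"/;s/$/\"/" > "${tb}_data.csv"
done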

Step 2. Clean and Transform

There might be several transformations required before you load this data into Amazon Redshift. For example, '0000-00-00' is a valid DATE value in MySQL, but Redshift does not accept it (Redshift does accept '0001-01-01'). Apart from this, you may want to clean up some data according to your business logic, make time zone adjustments, concatenate two fields, or split a field into two. All these operations will have to be done over the files and will be error-prone.
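
As a concrete illustration, the zero-date fix alone can be done with a one-line stream edit over each dumped CSV file. This is a minimal sketch; it assumes the zero date appears only in DATE columns and that your sed supports in-place editing (the -i flag, available in GNU sed):

sed -i 's/"0000-00-00"/"0001-01-01"/g' table_name_data.csv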

Step 3. Upload to S3 and Import into Amazon Redshift

Once you have the files ready to be imported, upload them to an S3 bucket and then run the COPY command (the CSV option tells Redshift to parse the files as CSV rather than the default pipe-delimited text):

COPY table_name FROM 's3://my_redshift_bucket/some-path/table_name/' credentials
'aws_access_key_id=my_access_key;aws_secret_access_key=my_secret_key' CSV;

Again, the above operation has to be done for every table.

Once the COPY has been run, you can check the stl_load_errors table for any copy failures.
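
For example, a query along the following lines surfaces the most recent load failures. stl_load_errors is a standard Redshift system table; the columns selected here are just a small, commonly useful subset:

SELECT starttime, filename, line_number, colname, err_reason
FROM stl_load_errors
ORDER BY starttime DESC
LIMIT 10;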

In a happy scenario, the above steps should work just fine. However, in real-life scenarios, you may encounter errors in each of these steps. For example:

  • Network failures or timeouts while dumping MySQL data into files.
  • Errors encountered while transforming data, due to an unexpected entry or a new column that has been added.
  • Network failures during the S3 upload.
  • Timeouts or data compatibility issues during the Redshift COPY. COPY might fail for various reasons; many of them will have to be manually investigated and retried.

Challenges of Connecting MySQL to Redshift using Custom ETL Scripts


The custom ETL method to connect MySQL to Redshift is effective. However, there are certain challenges associated with it. Below are some of the challenges that you might face while connecting MySQL to Redshift:

  1. In cases where data needs to be moved once or in batches only, the custom script method works. However, this approach fails if you have to move data from MySQL to Redshift in real-time.
  2. Incremental load (change data capture) becomes tedious as there will be additional steps that you need to follow to achieve the connection.
  3. Often, when you write code to extract a subset of data, those scripts break as the source schema keeps changing or evolving. This can result in data loss.

The process mentioned above is brittle, error-prone, and often frustrating. These challenges impact the consistency and accuracy of the data available in your Amazon Redshift in near real-time. These are the common challenges that most users face while connecting MySQL to Redshift.

Method 2: Using Hevo Data to Set up MySQL to Redshift Integration


Hevo Data, a No-code Data Pipeline, helps you directly transfer data from MySQL and 100+ other data sources to Data Warehouses such as Redshift, Databases, BI tools, or a destination of your choice in a completely hassle-free & automated manner. Hevo is fully managed and completely automates the process of not only loading data from your desired source but also enriching the data and transforming it into an analysis-ready form without having to write a single line of code. Its fault-tolerant architecture ensures that the data is handled in a secure, consistent manner with zero data loss.

Hevo Data takes care of all your Data Preprocessing needs and lets you focus on key business activities and draw much more powerful insights on how to generate more leads, retain customers, and take your business to new heights of profitability. It provides a consistent & reliable solution to manage data in real-time and always have analysis-ready data in your desired destination.

The following steps can be implemented to set up MySQL to Redshift Migration using Hevo:

  • Configure Source: Connect Hevo Data with MySQL by providing a unique name for your Pipeline along with information about your MySQL database, such as its name, IP Address, Port Number, Username, and Password.
  • Integrate Data: Complete the MySQL to Redshift Migration by providing your Redshift credentials, such as your authorized Username and Password, along with information about your Host IP Address and Port Number value. You will also need to provide a name for your database and a unique name for this destination.
Sign up here for a 14-Day Free Trial!

Advantages of Using Hevo

There are a couple of reasons why you should opt for Hevo over building your own solution to migrate data from MySQL to Redshift.

  • Minimal Setup – The complexity, as well as the time spent on setup with Hevo, is much less compared to building the setup yourself, giving you a hassle-free experience.
  • Automatic Schema Detection and Mapping – Hevo scans the schema of incoming MySQL data automatically. In case of any change, Hevo seamlessly incorporates the change in Redshift.
  • Fully Managed Solution – Hevo is a completely managed data solution, relieving users from any maintenance aspect.
  • Automatic Real-time Recurring Data Transfer – With Hevo, data is recurrently transferred to Redshift in real-time, or on a daily or weekly schedule, just the way you need.
  • No Data Loss – The fault-tolerant architecture of Hevo makes sure that there is zero loss of your precious data while moving from MySQL to Redshift.
  • Added Integration Options – Hevo ensures that you get the same power when moving data from other sources to Redshift with ease. Hevo natively integrates with several Databases, Sales and Marketing Applications, Analytics platforms, etc.
  • Strong Customer Support – The Hevo team guarantees you have 24×7 support over Call, E-Mail, and Chat.
  • Ability to Transform Data – Hevo allows you to transform data both before and after moving it to the Data Warehouse. This ensures that you always have analysis-ready data in your Redshift Data Warehouse.

Simplify your Data Analysis with Hevo today!

Incremental Load to Amazon Redshift

If your MySQL tables are big (say, larger than 50 GB), you may not want to upload the complete tables every time you run the jobs. Instead, you will want to upload only the data that has been newly added or changed.

The following steps in your process will change for incremental load:

1. Dump the data to files

mysqldump can no longer be used here. Instead, you will have to query the data into a file, selecting only the rows that have changed since the last run:

mysql -B -u user database_name -h mysql_host \
-e "SELECT * FROM table_name WHERE updated_ts < now() AND ((updated_ts = '#max_updated_ts_in_last_run#' AND id > #max_id_in_last_run#) OR updated_ts > '#max_updated_ts_in_last_run#') ORDER BY updated_ts, id ASC;" | \
sed "s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" > table_name_data.csv

2. Import MySQL Data into Amazon Redshift

You cannot simply COPY the files into Amazon Redshift. Doing that might result in duplicate Primary Keys, since Amazon Redshift doesn't enforce Primary Key constraints. The process to import these MySQL data files into Redshift is as follows:

A. Create a temporary table to copy data

CREATE TEMP TABLE temp_table_name (LIKE table_name);

B. COPY the files into the temporary table

COPY temp_table_name FROM 's3://my_redshift_bucket/some-path/table_name/' credentials 'aws_access_key_id=my_access_key;aws_secret_access_key=my_secret_key' CSV;

C. DELETE rows that have been updated from the original table

BEGIN transaction;
DELETE FROM table_name USING temp_table_name WHERE table_name.primarykey = temp_table_name.primarykey;

D. Insert all of the rows from the temporary table.

INSERT INTO table_name SELECT * FROM temp_table_name;
END transaction;

E. Drop the temporary table.

DROP TABLE temp_table_name;

Now you are in a position to connect MySQL to Redshift and replicate the data with ease.

Conclusion

This article provided you with a detailed approach that you can use to successfully connect MySQL to Redshift. You also learned about the limitations of connecting MySQL to Redshift using the custom ETL method. Big organizations can employ this method to replicate the data and get better insights by visualizing it. Thus, connecting MySQL to Redshift can significantly help organizations make effective decisions and stay ahead of their competitors.

Visit our Website to Explore Hevo

Businesses can use automated platforms like Hevo Data to set up this integration and handle the ETL process. Hevo helps you directly transfer data from a source of your choice to a Data Warehouse, Business Intelligence tool, or any other desired destination in a fully automated and secure manner, without having to write any code, and provides you a hassle-free experience.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You can also have a look at the unbeatable pricing that will help you choose the right plan for your business needs.

Share your experience of connecting MySQL to Redshift in the comments section below!

Founder and CTO, Hevo Data

Sourabh has more than a decade of experience building scalable real-time analytics and has worked for companies like Flipkart, tBits Global, and Unbxd. He is experienced in technologies like MySQL, Hibernate, Spring, CXF, PHP, ExtJS, and Shell.
