In this article, we’ll go over two methods you can use to connect Amazon RDS to Amazon Aurora: using an Aurora read replica, and using a third-party, no-code data replication tool.

Point to note: We’ll be migrating data to Amazon Aurora MySQL, a MySQL-compatible, drop-in replacement that makes it simple and cost-effective to set up, operate, and scale your new and existing MySQL deployments.

Replicating Data from Amazon RDS to Amazon Aurora

Amazon RDS lets you use simple migration tools to convert your existing Amazon RDS for MySQL applications to Aurora MySQL.

There are two different types of migration when moving data to Aurora MySQL DB clusters: logical and physical. Logical migration means that the migration is executed by applying logical database changes like deletes, inserts, and updates. 

During physical migration, the database is moved using physical copies of its files.

Physical migration is better suited to large databases since it’s faster than logical migration. Because it works from physical copies of backups, it doesn’t affect the performance of the source database. It can also move everything from the original database, including complex components that might slow down or even block logical migration.

Logical migration, in turn, comes in handy when you want to replicate only specific parts of the database, such as individual tables or portions of a table. It also lets you migrate data irrespective of the physical storage structure.

If you’re migrating data from an RDS for MySQL DB instance, you can kickstart the integration by creating an Amazon Aurora MySQL read replica. 

Your client applications can read from the Aurora read replica once replication has fully caught up, that is, when the replica lag between the MySQL DB instance and the Aurora read replica is 0.

You can then stop replication to promote the Aurora MySQL read replica to a standalone Aurora MySQL DB cluster for reading and writing.
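The cutover logic above can be sketched in a few lines of Python. This is a hedged illustration, not a full implementation: the cluster identifier is hypothetical, and in practice the lag value would come from the CloudWatch AuroraReplicaLag metric or from SHOW SLAVE STATUS on the replica.

```python
# Sketch of the cutover decision: promote the Aurora read replica
# only once replica lag reaches 0. Identifiers are hypothetical.

def cutover_ready(replica_lag_seconds: float) -> bool:
    """Return True when the replica has fully caught up with the source."""
    return replica_lag_seconds == 0

def promote_params(cluster_id: str) -> dict:
    """Build the arguments for rds.promote_read_replica_db_cluster()."""
    return {"DBClusterIdentifier": cluster_id}

if cutover_ready(replica_lag_seconds=0):
    params = promote_params("my-aurora-replica-cluster")
    # With boto3, the actual promotion call would look like:
    #   import boto3
    #   rds = boto3.client("rds")
    #   rds.promote_read_replica_db_cluster(**params)
    print(params)
```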

Using Aurora Read Replica to Load Data from Amazon RDS to Amazon Aurora

A few prerequisites to keep in mind for the Amazon RDS to Amazon Aurora migration process:

  • An active AWS account.
  • Amazon EC2 Key Pairs are used to connect securely to your EC2 Linux-based instances through SSH. If you already have a key pair, you can use it for the following steps. If you don’t have a key pair, you’ll need to create a new key pair in your preferred region.

Here are the steps involved in creating an Aurora read replica by using the console:

  1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
  2. Choose Databases from the navigation pane.
  3. Choose the MySQL DB instance that you want to use as the source for your Aurora read replica. Then, under Actions, choose Create Aurora read replica.
  4. Next, choose the DB cluster specifications that you want to use for the Aurora read replica. 
  5. Choose Create read replica.
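The console steps above can also be scripted. Here’s a minimal sketch of the equivalent API parameters, assuming boto3 and hypothetical identifiers/ARNs: the replica cluster is created with create_db_cluster pointing at the source instance’s ARN, then given a primary instance with create_db_instance.

```python
# Sketch of creating an Aurora read replica of an RDS MySQL instance
# via the API rather than the console. Identifiers are hypothetical.

def replica_cluster_params(cluster_id: str, source_arn: str) -> dict:
    """Arguments for rds.create_db_cluster(): ReplicationSourceIdentifier
    makes the new Aurora cluster a read replica of the RDS MySQL source."""
    return {
        "DBClusterIdentifier": cluster_id,
        "Engine": "aurora-mysql",
        "ReplicationSourceIdentifier": source_arn,
    }

def replica_instance_params(instance_id: str, cluster_id: str) -> dict:
    """Arguments for rds.create_db_instance(): attaches an instance to
    the replica cluster so it can serve reads."""
    return {
        "DBInstanceIdentifier": instance_id,
        "DBInstanceClass": "db.r6g.large",  # example class; size per workload
        "Engine": "aurora-mysql",
        "DBClusterIdentifier": cluster_id,
    }

cluster = replica_cluster_params(
    "my-aurora-replica-cluster",
    "arn:aws:rds:us-east-1:123456789012:db:my-mysql-source",
)
instance = replica_instance_params(
    "my-aurora-replica-1", "my-aurora-replica-cluster"
)
# With boto3: boto3.client("rds").create_db_cluster(**cluster), then
# boto3.client("rds").create_db_instance(**instance).
```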

When you create an Aurora read replica of a MySQL DB instance, Amazon RDS creates a DB snapshot of your source MySQL DB instance and transfers the data from that snapshot to the Aurora read replica.

Once the data has been transferred to the new Aurora MySQL DB cluster, Amazon RDS starts replicating ongoing changes between the MySQL DB instance and the Aurora MySQL DB cluster.

Note: If your MySQL DB instance contains tables that use storage engines other than InnoDB, or that use compressed row format, you can ramp up the process of creating an Aurora read replica by changing those tables to use the InnoDB storage engine and dynamic row format before you create your Aurora read replica.
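One way to find such tables is to query information_schema and generate the conversion statements. The sketch below (table names and sample rows are hypothetical) mimics the result of querying information_schema.TABLES:

```python
# Sketch: generate ALTER TABLE statements for tables that would slow
# down Aurora read replica creation, i.e. anything not using InnoDB
# with a dynamic row format. Input rows mimic a query against
# information_schema.TABLES (TABLE_NAME, ENGINE, ROW_FORMAT).

def conversion_statements(tables):
    """Return ALTER TABLE statements for non-InnoDB or compressed tables."""
    stmts = []
    for name, engine, row_format in tables:
        if engine != "InnoDB" or row_format == "Compressed":
            stmts.append(
                f"ALTER TABLE `{name}` ENGINE=InnoDB ROW_FORMAT=DYNAMIC;"
            )
    return stmts

# Hypothetical sample rows, as they might come back from information_schema:
sample = [
    ("orders", "MyISAM", "Fixed"),
    ("customers", "InnoDB", "Dynamic"),
    ("archive_2020", "InnoDB", "Compressed"),
]
for stmt in conversion_statements(sample):
    print(stmt)
```

Running the generated statements against the source before creating the replica is what speeds up the snapshot-and-transfer step.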

A few limitations of this Amazon RDS to Amazon Aurora migration process to keep in mind:

  • The innodb_log_files_in_group parameter needs to be set to its default value (2).
  • The innodb_page_size parameter needs to be set to its default value (16 KB).
  • The innodb_data_file_path parameter needs to be configured with a data file named “ibdata1:12M:autoextend”. If a database has two data files or a data file with a different name, you can’t migrate them using Aurora read replicas.
  • The migration can take a while: depending on the volume of data, expect several hours per tebibyte (TiB).
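The parameter-related limitations above can be checked up front. Here’s a small sketch that validates a snapshot of the relevant parameters (the sample values are hypothetical; in practice you’d read them from the instance’s parameter group or SHOW VARIABLES):

```python
# Sketch: pre-flight check of the InnoDB parameters that the Aurora
# read replica migration requires to be at their defaults.

REQUIRED = {
    "innodb_log_files_in_group": "2",
    "innodb_page_size": "16384",  # 16 KB, expressed in bytes
    "innodb_data_file_path": "ibdata1:12M:autoextend",
}

def migration_blockers(params: dict) -> list:
    """Return messages for parameters whose values would block migration."""
    return [
        f"{key} is {params.get(key)!r}, expected {expected!r}"
        for key, expected in REQUIRED.items()
        if params.get(key) != expected
    ]

# Hypothetical snapshot with one non-default value:
snapshot = {
    "innodb_log_files_in_group": "2",
    "innodb_page_size": "8192",
    "innodb_data_file_path": "ibdata1:12M:autoextend",
}
print(migration_blockers(snapshot))
```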

Using a No-Code Data Replication Tool to Replicate Data from Amazon RDS to Amazon Aurora 

For scenarios where you need to migrate data from sources that might not be compatible with AWS, you can opt for a no-code data replication tool. 

You can streamline the Amazon RDS MySQL Amazon Aurora MySQL integration process by opting for an automated tool to:

  • Focus on pressing engineering goals and free up the resources needed for them.
  • Save time spent on data preparation thanks to a user-friendly UI.
  • Enable business teams to quickly create accurate reports with no-code data replication tools.
  • Access near real-time data without sacrificing accuracy or consistency.
  • Consolidate analytics-ready data for performance measurement and opportunity exploration.

Let’s take a look at the simplicity a cloud-based ELT tool like Hevo provides to your workflow:

Step 1: Configure Amazon RDS MySQL as a Source


Step 2: Configure Amazon Aurora MySQL as a Destination


And that’s it! Based on your inputs, Hevo will start replicating data from Amazon RDS MySQL to Amazon Aurora MySQL. 

Note: Your Hevo pipeline can replicate up to 4090 columns for every table.

If you’d like to dive deeper into how your pipelines are configured for this use case, you can read the official documentation for configuring Amazon RDS MySQL as a source and Amazon Aurora MySQL as a destination.

However, Amazon Aurora MySQL is not a recommended destination for production pipelines, since you might run into the following issues:

  • Slow handling of high-frequency inserts into the database.
  • Issues handling large query traffic for analytical workloads.
  • The need for frequent, regular maintenance.

What can you hope to achieve by replicating data from Amazon RDS to Amazon Aurora?

Through Amazon RDS to Amazon Aurora integration, you’ll be able to help your business stakeholders with the following:

  • Aggregating data on individual product interactions for any event.
  • Finding the customer journey within the product (website/application).
  • Integrating transactional data from different functional groups (sales, marketing, product, human resources) to answer questions such as:
    • Which development features were responsible for an app outage in a given duration?
    • Which product categories on your website were most profitable?
    • How does the failure rate in individual assembly units affect inventory turnover?

Summary

In this article, we’ve talked about two ways that you can use to replicate data from Amazon RDS to Amazon Aurora MySQL: via Aurora read replicas and through a no-code data replication tool, Hevo.

Hevo allows you to replicate data in near real-time from 150+ sources like Amazon RDS MySQL to the destination of your choice, including Amazon Aurora MySQL, BigQuery, Snowflake, Redshift, Databricks, and Firebolt, without writing a single line of code. We’d suggest using this data replication tool for real-time demands that require you to pull data from SaaS sources. This’ll free up your engineering bandwidth, allowing you to focus on more productive tasks.

For the rare times when things go wrong, Hevo ensures zero data loss. Hevo also lets you monitor your workflow so that you can find the root cause of an issue and address it before it derails the entire workflow. Add 24/7 customer support to the list, and you get a reliable tool that puts you at the wheel with greater visibility.

If you don’t want SaaS tools with unclear pricing that burn a hole in your pocket, opt for a tool that offers a simple, transparent pricing model. Hevo has three usage-based pricing plans, starting with a free tier where you can ingest up to 1 million records.

Schedule a demo to see if Hevo would be a good fit for you, today!

Content Marketing Manager, Hevo Data

Amit is a Content Marketing Manager at Hevo Data. He enjoys writing about SaaS products and modern data platforms, having authored over 200 articles on these subjects.
