Companies are continually seeking efficient solutions to improve their data management and analytics. Migrating to a newer architecture such as Amazon Redshift RA3 Nodes helps companies share data between Clusters and run cross-database queries more effectively.

Amazon Redshift is a fully managed, Cloud-based Data Warehouse service. It scales from gigabytes to petabytes of data and uses Massively Parallel Processing (MPP) to run analytics across that data. The Redshift RA3 Cluster enables you to store and analyze your data to acquire new insights for your business and customers.

In this article, you will learn about Amazon Redshift, its key features, and the Node types it supports. You will also gain a holistic understanding of Redshift RA3 Nodes and the benefits of migrating to them. Read along to find in-depth information about Amazon Redshift RA3 Nodes.

Prerequisites

  • Working knowledge of Amazon Redshift.
  • An Amazon Redshift account.

Introduction to Amazon Redshift


Amazon Web Services (AWS) is a subsidiary of Amazon that provides cloud computing platforms and APIs to individuals, corporations, and enterprises. AWS offers high computing power, efficient content delivery, and flexible database storage, with scalability, reliability, and relatively inexpensive cloud computing services.

Amazon Redshift is built on industry-standard SQL with functionality to manage large datasets, support high-performance analysis, provide reports, and perform large-scale database migrations. An Amazon Redshift Cluster contains a Leader Node and a set of Compute Nodes that perform analytics on data. The figure below depicts the Amazon Redshift architecture.

Redshift RA3: Data Warehouse Cluster

For further information on Amazon Redshift, you can follow the Official Documentation.

Key Features of Amazon Redshift

The key features of Amazon Redshift are as follows:

1) Massively Parallel Processing (MPP)

Massively Parallel Processing (MPP) is a distributed design approach in which the divide and conquer strategy is applied by several processors to large data jobs. A large processing job is broken down into smaller jobs which are then distributed among a Cluster of Compute Nodes.
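
To illustrate how MPP relates to table design, the sketch below creates a hypothetical table with an explicit distribution key via the Redshift Data API in Python. The table name, columns, Cluster identifier, database, and user are placeholders, not values from this article; this is a minimal sketch rather than a prescribed setup.

```python
import boto3

# Redshift Data API client (assumes AWS credentials are configured).
data_api = boto3.client("redshift-data", region_name="us-east-1")

# DISTKEY tells Redshift which column to hash when spreading rows across
# Compute Node slices, so joins and aggregations on that column stay local.
ddl = """
CREATE TABLE sales (
    sale_id     BIGINT,
    customer_id BIGINT,
    amount      DECIMAL(12, 2)
)
DISTSTYLE KEY
DISTKEY (customer_id)
SORTKEY (sale_id);
"""

data_api.execute_statement(
    ClusterIdentifier="my-redshift-cluster",  # hypothetical Cluster name
    Database="dev",
    DbUser="awsuser",
    Sql=ddl,
)
```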

2) Fault Tolerance

Data Accessibility and Reliability are of paramount importance for any user of a database or a Data Warehouse. Amazon Redshift monitors its Clusters and Nodes around the clock. When any Node or Cluster fails, Amazon Redshift automatically replicates all data to healthy Nodes or Clusters.

3) Redshift ML

Amazon Redshift houses a functionality called Redshift ML that gives data analysts and database developers the ability to create, train, and deploy Amazon SageMaker models using SQL seamlessly.
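
As a rough illustration of this SQL-driven workflow, the sketch below submits a hypothetical CREATE MODEL statement through the Redshift Data API. The table, columns, IAM role ARN, and S3 bucket are placeholders; substitute your own resources.

```python
import boto3

data_api = boto3.client("redshift-data", region_name="us-east-1")

# CREATE MODEL hands the SELECT result to Amazon SageMaker for training and
# registers a SQL inference function (predict_churn) in the database.
create_model_sql = """
CREATE MODEL customer_churn_model
FROM (SELECT age, plan_type, monthly_spend, churned
      FROM customer_activity)
TARGET churned
FUNCTION predict_churn
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'my-redshift-ml-bucket');
"""

data_api.execute_statement(
    ClusterIdentifier="my-redshift-cluster",  # hypothetical Cluster name
    Database="dev",
    DbUser="awsuser",
    Sql=create_model_sql,
)
```

Once training finishes, the generated predict_churn function can be called from ordinary SELECT statements.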

4) Column-Oriented Design

Amazon Redshift is a Column-oriented Data Warehouse. This makes it a simple and cost-effective solution for businesses to analyze all their data using their existing Business Intelligence tools. Amazon Redshift achieves strong query performance and efficient storage by combining Massively Parallel Processing (MPP), Columnar Data Storage, and targeted Data Compression Encoding schemes.

Understanding Clusters and Nodes in Redshift

An Amazon Redshift Data Warehouse runs on Nodes, computing resources that are organized into a group called a Cluster. Each Cluster hosts one or more Databases and has a single Leader Node that receives queries from client applications, builds execution plans, and distributes the work across the Cluster's Compute Nodes.

Amazon Redshift offers three Node types to accommodate different workloads and requirements:

1) Redshift RA3 Node

With Redshift RA3 Nodes, users can optimize and scale their Data Warehouse by paying for managed storage and compute independently. Users choose and pay for the number of Nodes their business requires based on performance, and pay only for the managed storage they actually use. It's best to size your Redshift RA3 Cluster based on the amount of data your company processes on a day-to-day basis.
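
For illustration, here is a minimal sketch of provisioning an RA3 Cluster with boto3. The Cluster identifier, database name, and credentials are hypothetical placeholders.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# With RA3 you size the Cluster for compute; managed storage scales separately.
redshift.create_cluster(
    ClusterIdentifier="analytics-ra3",    # hypothetical Cluster name
    NodeType="ra3.4xlarge",
    ClusterType="multi-node",
    NumberOfNodes=2,
    DBName="dev",
    MasterUsername="awsuser",
    MasterUserPassword="ChangeMe123!",    # placeholder; prefer AWS Secrets Manager
)

# Block until the Cluster is ready to accept queries.
redshift.get_waiter("cluster_available").wait(ClusterIdentifier="analytics-ra3")
```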

2) Redshift DC2 Node

Unlike Redshift RA3 Nodes, Redshift DC2 Nodes don't offload excess data to Amazon S3. DC2 users choose the number of Nodes they need based on performance and data size, and as data grows they must add more Compute Nodes to increase the Cluster's storage capacity. DC2 Nodes store your data on local SSD storage for high performance.

If your dataset is under 1 TB uncompressed, the DC2 Node offers the best performance at the lowest price. However, businesses that expect to scale and have growing data storage requirements should use Redshift RA3 Nodes, as they allow an independent choice of size, compute, and storage.

3) Redshift DS2 Node

Redshift DS2 Nodes run on Hard Disk Drives (HDDs). AWS recommends using Redshift RA3 Nodes instead for better performance and storage.

Simplify Redshift ETL and Analysis with Hevo’s No-code Data Pipeline

Hevo is the only real-time ELT No-code Data Pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integrations for 150+ Data Sources (40+ free sources), we help you not only export data from sources and load it into destinations, but also transform and enrich your data to make it analysis-ready.

Start for free now!


Understanding Redshift RA3 Node

Redshift RA3 Node is the latest addition to the Redshift Cluster Node types after Dense Storage (DS2) and Dense Compute (DC2). It was launched at re:Invent 2019. The principal difference with the Redshift RA3 Node is that it comes with Redshift Managed Storage (RMS), a separate storage layer. Redshift Managed Storage uses high-performance local SSDs for hot data and Amazon S3 for cold data. In addition, RMS's high-bandwidth networking is built on the AWS Nitro System, which reduces the time needed to offload data to and retrieve it from Amazon S3.

In managed storage, each Redshift RA3 Node combines large, high-performance local SSDs with Amazon S3, providing both fast local storage and durable long-term storage. Whenever a Node's data grows beyond the capacity of its local SSDs, Redshift automatically offloads the excess data to Amazon S3. Users pay the same rate whether the data sits in S3 or on the high-performance SSDs. If a company's workload requires ever-growing storage, RA3 managed storage automatically scales the Data Warehouse storage capacity, and you don't have to pay for additional Nodes.

Redshift RA3 Node Pricing Structure

Amazon Redshift RA3 Node Pricing Structure

Amazon Redshift has on-demand pricing for Redshift RA3 Nodes, with no commitments and no upfront costs. It allows users to pay for capacity by the hour. You simply pay an hourly rate for the type and number of Redshift RA3 Nodes in your Cluster. Partial hours are billed in one-second increments following a billable status change such as creating, deleting, pausing, or resuming the Cluster.

With the Pause and Resume feature, you can suspend on-demand billing while a Redshift RA3 Cluster is paused; during that time you only pay for backup storage. This makes Redshift cost-effective and means companies don't have to plan and purchase Data Warehouse capacity upfront. Instead, they can focus on managing their development environments.
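
A minimal sketch of pausing and resuming an RA3 Cluster with boto3 follows; the Cluster identifier is a hypothetical placeholder.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Pausing stops on-demand compute billing; only backup storage is charged.
redshift.pause_cluster(ClusterIdentifier="analytics-ra3")

# Later, resume the Cluster when the next workload window starts.
redshift.resume_cluster(ClusterIdentifier="analytics-ra3")
redshift.get_waiter("cluster_available").wait(ClusterIdentifier="analytics-ra3")
```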

Benefits of Redshift RA3 Node

The benefits of using Redshift RA3 Node are as follows:

1) Decoupled Storage

One of the benefits of migrating to Redshift RA3 Nodes is the increased storage-to-compute ratio. Each ra3.4xlarge or ra3.16xlarge Node added to the Cluster brings 128 TB of managed storage capacity. In addition, Redshift RA3 Nodes have on-demand pricing, where you pay only for what you consume and can pause the Cluster when it's not in use. By migrating one ds2.8xlarge Node to two ra3.4xlarge Nodes, you can increase the storage capacity from 16 TB to 256 TB.
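
To check how much managed storage a Cluster reports, here is a quick sketch using describe_clusters. The Cluster identifier is hypothetical, and the snippet assumes the TotalStorageCapacityInMegaBytes field is present in the response (it is reported for RA3 Node types).

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

cluster = redshift.describe_clusters(ClusterIdentifier="analytics-ra3")["Clusters"][0]

# RA3 Clusters report managed-storage capacity separately from compute sizing.
capacity_mb = cluster.get("TotalStorageCapacityInMegaBytes", 0)
capacity_tb = capacity_mb / (1024 * 1024)
print(f"{cluster['NodeType']} x {cluster['NumberOfNodes']}: {capacity_tb:.0f} TB managed storage")
```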

2) Access to New features

Several new features are only available on the Redshift RA3 Node type, such as AQUA, Cluster Relocation, and Data Sharing; they are not supported on the DC2 and DS2 Node types.

  • AQUA: RMS stores all data on S3 and moves it to a local SSD layer when the compute layer requests access. AQUA (Advanced Query Accelerator) reduces data movement between the storage and compute layers, improving query performance. For example, AQUA accelerates scan and aggregation operations when queries contain a predicate with a LIKE or SIMILAR TO expression.
  • Data Sharing: With multiple Redshift RA3 Clusters, you can share live data stored in RMS between Clusters. For example, one Cluster can write data to RMS while another Cluster consumes it.
  • Cross-AZ Cluster Relocation: Before Redshift RA3 Nodes, a Redshift Cluster was hosted entirely in a single Availability Zone (AZ). If a failure occurred in that AZ, you had to restore a snapshot to a new Cluster in a different AZ. Because RMS is not bound to a single AZ, only the Compute Nodes need to be recreated in another AZ after a failure; this is called Cluster Relocation. Once the feature is enabled, Redshift can automatically relocate your Cluster if issues arise at the AZ level (a combined sketch of relocation and data sharing follows this list).
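
As referenced above, here is a hedged sketch of enabling Cluster Relocation with boto3 and sharing live data from a producer RA3 Cluster via data-sharing SQL. All Cluster identifiers, schema and table names, and the consumer namespace GUID are placeholders.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")
data_api = boto3.client("redshift-data", region_name="us-east-1")

# 1) Cluster Relocation: allow Redshift to move the Cluster to another AZ on failure.
redshift.modify_cluster(
    ClusterIdentifier="producer-ra3",     # hypothetical Cluster name
    AvailabilityZoneRelocation=True,
)

# 2) Data Sharing: expose a schema and table from the producer to a consumer namespace.
share_statements = [
    "CREATE DATASHARE sales_share;",
    "ALTER DATASHARE sales_share ADD SCHEMA public;",
    "ALTER DATASHARE sales_share ADD TABLE public.sales;",
    # The consumer namespace GUID below is a placeholder.
    "GRANT USAGE ON DATASHARE sales_share TO NAMESPACE '11111111-2222-3333-4444-555555555555';",
]
data_api.batch_execute_statement(
    ClusterIdentifier="producer-ra3",
    Database="dev",
    DbUser="awsuser",
    Sqls=share_statements,
)
```

On the consumer Cluster, a database can then be created from the datashare so the live data is queryable without copying it.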

3) Storage Tiering

Redshift DS2 and DC2 Nodes have restricted storage capacity. However, with Redshift RA3 Nodes, you can offload older data from the Cluster to S3 at no additional cost and expose it in Redshift using the Spectrum service. It costs $23.55 per month to store 1 TB of data in the S3 Standard class, and slightly more, $24.58 per month, to hold it in RMS.

However, using Spectrum becomes more cost-effective if you choose a different S3 storage class. For example, the price is almost halved, to $12.80 per TB per month, with Standard-Infrequent Access. But if you need to access the data frequently, Standard-IA costs more, and your operational costs could quickly escalate, since a 'GET' request against Standard-IA costs 2.5 times more than one against Standard.
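
As a rough sketch of the Spectrum approach described above, the snippet below registers an external schema backed by the AWS Glue Data Catalog and queries data that lives in S3. The Glue database name, IAM role ARN, external table, and Cluster identifier are placeholders.

```python
import boto3

data_api = boto3.client("redshift-data", region_name="us-east-1")

statements = [
    # The external schema points Redshift Spectrum at a Glue Data Catalog database.
    """CREATE EXTERNAL SCHEMA archive
       FROM DATA CATALOG
       DATABASE 'archive_db'
       IAM_ROLE 'arn:aws:iam::123456789012:role/SpectrumRole'
       CREATE EXTERNAL DATABASE IF NOT EXISTS;""",
    # Older data stored in S3 can now be queried alongside local RA3 tables.
    "SELECT COUNT(*) FROM archive.sales_2018;",
]

data_api.batch_execute_statement(
    ClusterIdentifier="analytics-ra3",  # hypothetical Cluster name
    Database="dev",
    DbUser="awsuser",
    Sqls=statements,
)
```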

Ways to Migrate a Cluster to Redshift RA3 Nodes

You can migrate from Redshift DS2 and DC2 Nodes to Redshift RA3 Nodes in one of the following three ways:

1) Snapshot and Restore

You can use the most recent snapshot of your Redshift DS2 or DC2 Cluster to migrate to a new Redshift RA3 Cluster. Cluster creation usually takes a few minutes, and once it's complete, the Redshift RA3 Nodes are ready to take your workload. Since storage and compute are independent in RA3 Clusters, your hot data is migrated to a local cache: the snapshot preserves hot-block information and populates the new Cluster's local cache with the hottest blocks first. The snapshot also retains information such as the number of Nodes, the Node type, and the master user name.

The Cluster will be restored in the same AWS Region. Unless you specify an Availability Zone, the system chooses one for you. While migrating a Cluster from a snapshot, you can choose a compatible maintenance track for the new Cluster. You can keep the same endpoint for your users and applications by renaming the new Redshift RA3 Cluster to the name of your original Cluster.
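
A minimal sketch of the snapshot route with boto3, assuming a hypothetical DC2 snapshot and target RA3 configuration:

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Restore the latest DC2/DS2 snapshot into a new RA3 Cluster.
redshift.restore_from_cluster_snapshot(
    ClusterIdentifier="analytics-ra3",            # new RA3 Cluster (hypothetical)
    SnapshotIdentifier="dc2-cluster-final-snap",  # snapshot of the old Cluster (hypothetical)
    NodeType="ra3.4xlarge",
    NumberOfNodes=2,
)

# Wait until the restore completes before repointing users and applications.
redshift.get_waiter("cluster_restored").wait(ClusterIdentifier="analytics-ra3")
```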

2) Elastic Resize

Among the three methods, Elastic Resize is the fastest way to resize and migrate a Redshift DS2 or DC2 Cluster to Redshift RA3 Nodes. Elastic Resize allows you to change Node types for an existing Cluster and add or remove Nodes.

If you keep the same Node type, Elastic Resize automatically redistributes the data to the new Nodes during the resize. However, there will be a short increase in query execution time while the data is redistributed in the background.
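
A hedged sketch of requesting an elastic resize to RA3 with boto3; the Cluster identifier, Node type, and Node count are placeholders.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Elastic resize: change the Node type and count on an existing Cluster in place.
redshift.resize_cluster(
    ClusterIdentifier="analytics-dc2",  # existing Cluster (hypothetical)
    ClusterType="multi-node",
    NodeType="ra3.4xlarge",
    NumberOfNodes=2,
    Classic=False,  # False requests an elastic resize
)
```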

3) Classic Resize

The Classic Resize operation copies data in parallel from the Compute Node(s) in your source Cluster to the Node(s) in the target Redshift RA3 Cluster. Migration can take anywhere from a couple of hours to several days, depending on the amount of data and the number of Nodes in the smaller Cluster. Factors affecting migration time include the workload on the source Cluster, the number and size of the tables being transferred, how evenly data is distributed across the Compute Nodes, and the Node configuration of the source and target Clusters.

During the resize operation, Redshift puts the existing Cluster into read-only mode. Queries that write to the database, including read-write queries, can't be run until the resize completes; only queries that read from the database are allowed.
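
The same boto3 call requests a classic resize when Classic is set to True; the identifiers below are placeholders, and the copy can take hours, so plan around the read-only window.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Classic resize: provision the target configuration and copy data over in parallel.
# The Cluster stays read-only until the copy completes.
redshift.resize_cluster(
    ClusterIdentifier="analytics-dc2",  # source Cluster (hypothetical)
    ClusterType="multi-node",
    NodeType="ra3.4xlarge",
    NumberOfNodes=2,
    Classic=True,
)
```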

Conclusion

In this article, you have learned about Amazon Redshift, its key features, and the Node types it supports. You also gained detailed information on Redshift RA3 Nodes and their benefits.

Hevo Data, a No-code Data Pipeline, provides you with a consistent and reliable solution to manage data transfer between a variety of sources and a wide range of destinations with a few clicks.

Hevo Data with its strong integration with 150+ data sources (including 40+ free sources) allows you to not only export data from your desired data sources & load it to the destination of your choice such as Amazon Redshift, but also transform & enrich your data to make it analysis-ready so that you can focus on your key business needs and perform insightful analysis using BI tools. 

Want to take Hevo for a spin?

Sign Up for a 14-day free trial and experience the feature-rich Hevo suite firsthand. You may also have a look at the pricing, which will assist you in selecting the best plan for your requirements.

Share your experience of understanding Amazon Redshift RA3 Node in the comment section below! We would love to hear your thoughts.

Osheen Jain
Technical Content Writer, Hevo Data

Osheen is a seasoned technical writer with over a decade of experience in the data industry. She specializes in writing about B2B, technology, finance, and SaaS domains. Her passion for simplifying intricate technical concepts has established her as a respected expert in the field, making her an invaluable resource for those looking to deepen their understanding of data science.
