AWS Redshift Best Practices for 2023

Sarad Mohanan • Last Modified: January 25th, 2023

Are you setting up AWS Redshift as your organization’s Data Warehouse? Or are you wondering what the guidelines to make the most out of your Redshift Data Warehouse are? Do you wish to know the best practices to ensure optimal performance on Amazon Redshift? If the answer to any of these questions is yes, then you are in the right place.

This article explores the various components that make up Redshift, walks you through the key AWS Redshift Best Practices, and provides guidelines that can help you get the most out of real-world Redshift cluster operations on the AWS Public Cloud. These AWS Redshift Best Practices will make your data warehousing operations a lot smoother and better.

What is AWS Redshift?

AWS Redshift is a very cost-effective Cloud Data Warehouse that gives you access to high-performance and high-quality analytical services that can help you turn your organization into a Data-driven Enterprise.

It costs less than $1,000/TB/year, which is roughly 10% of the Total Cost of Ownership (TCO) of a traditional Data Warehouse. It is capable of very high performance on datasets ranging from a gigabyte to a petabyte and beyond.

It uses modern Data Warehousing architectural approaches such as Columnar Storage, Data Compression, and Zone Maps to reduce the amount of I/O needed to perform queries, and can offer significantly better performance than a row-oriented OLTP database applied to the same analytical use case.

More information on AWS Redshift can be found here.

Replicate Data into AWS Redshift in Minutes Using Hevo’s No-Code Data Pipeline

Hevo Data, a Fully-managed Data Pipeline platform, can help you automate, simplify & enrich your data replication process in a few clicks. With Hevo’s wide variety of connectors and blazing-fast Data Pipelines, you can extract & load data from 100+ Data Sources straight into your Data Warehouse or any Database of your choice.

To further streamline and prepare your data for analysis, you can process and enrich raw granular data using Hevo’s robust & built-in Transformation Layer without writing a single line of code!

Get Started with Hevo for Free

Hevo is the fastest, easiest, and most reliable data replication platform that will save your engineering bandwidth and time multifold. Try our 14-day full-access free trial today to experience entirely automated, hassle-free Data Replication!

History of AWS Redshift

Since the launch of an initial Preview version in November 2012 followed by a full release in February 2013, Redshift has become one of the fastest-growing services in the AWS ecosystem and has one of the largest Cloud Data Warehouse Deployments.

AWS took an older version of PostgreSQL and combined it with OLAP, MPP, and Columnar Storage capabilities. Redshift also integrates with other AWS services such as:

  • Amazon S3
  • AWS KMS
  • Amazon VPC
  • Amazon SWF
  • Amazon Route 53
  • Amazon CloudWatch
  • Amazon EC2
  • IAM

AWS Redshift Cluster Architecture

The cluster is split into two parts:

Figure: AWS Redshift Cluster Architecture (Image Source: Amazon)

Leader Node

There are two or more Compute Nodes attached to a Leader Node. The Leader Node:

  • Is the SQL endpoint: clients such as Tableau servers and SQL Workbench all connect to the Leader Node.
  • Exposes PostgreSQL catalog tables.
  • Stores Metadata.
  • Coordinates parallel SQL processing.
  • Parses and coordinates the execution of your queries. When a query is submitted to the Leader Node, it compiles the query into C++ code and pushes it down to the Compute Nodes for them to execute.

Compute Nodes

Parallel query execution takes place on the Compute Nodes. Backup and restore processes run against Amazon S3: AWS continuously backs up your data into S3 for you. Replication between the Compute Nodes also happens simultaneously.

Local storage is attached to the Compute Nodes and is organized into:

  • Disks.
  • Slices.
  • Tables (columns, blocks).

Best Practices for Amazon Redshift

Having understood the basics of the AWS Redshift architecture, you will now learn some of the top Redshift Best Practices, grouped into the following areas:

1) Amazon Redshift Best Practices: Data Distribution

Performance in Redshift comes from its MPP (Massively Parallel Processing) architecture, which enables the system to parallelize and distribute SQL operations, taking advantage of all the resources available in your cluster.

To leverage the parallel processing system, ideally, you would want to distribute your data across the cluster.

There are four ways of distributing data in Redshift:

KEY Distribution

With KEY distribution, you define a distribution KEY that determines how rows are spread across your Compute Nodes. The rows are distributed according to the values in the joining column, so that matching values from the common columns are physically stored together.

ALL Distribution

With ALL distribution, a copy of the entire table is distributed to every node in your Redshift cluster.

EVEN Distribution

In EVEN distribution, the leader node distributes the rows across the slices using a round-robin algorithm. EVEN distribution is appropriate when a table does not participate in joins or when there is no clear choice between KEY distribution and ALL distribution.

AUTO Distribution

Redshift assigns an optimal distribution style based on the size of the table data. If you don’t specify a distribution style with the CREATE TABLE statement, Amazon Redshift applies AUTO distribution.

Ensure that your data is evenly distributed and that your KEY is able to distribute data across your Compute Nodes appropriately. If the distribution is lopsided (skewed), your cluster will not perform optimally.
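
For illustration, here is a minimal DDL sketch of how the four distribution styles are declared; the table names, column names, and data types are hypothetical examples, not part of any specific schema:

  -- KEY distribution: rows with the same customer_id land on the same slice
  CREATE TABLE sales (
      sale_id     BIGINT,
      customer_id INTEGER,
      sale_date   DATE,
      amount      DECIMAL(12,2)
  )
  DISTSTYLE KEY
  DISTKEY (customer_id);

  -- ALL distribution: a full copy of a small dimension table on every node
  CREATE TABLE customer_dim (
      customer_id   INTEGER,
      customer_name VARCHAR(100)
  )
  DISTSTYLE ALL;

  -- EVEN distribution: round-robin across slices, for tables that rarely join
  CREATE TABLE raw_events (
      event_id BIGINT,
      payload  VARCHAR(4096)
  )
  DISTSTYLE EVEN;

  -- AUTO distribution: the default when no style is specified
  CREATE TABLE staging_orders (
      order_id BIGINT,
      order_ts TIMESTAMP
  )
  DISTSTYLE AUTO;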

2) Amazon Redshift Best Practices: Disks

The underlying hardware is designed for high-performance data processing, using features such as locally attached storage to maximize throughput between the CPUs and drives. Each Compute Node contains ephemeral storage disks, with each node having 1, 3, 8, or 24 disks depending on the node type.

Each disk is split into two partitions:

  • Local data storage, accessed by the local Compute Node.
  • Mirrored/remote data, accessed by remote Compute Nodes.

AWS exposes internal system views that describe these disks. You can query these views to understand how much of the remote (mirrored) data is stored on your local Compute Nodes.
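
For example, the STV_PARTITIONS system view reports per-disk capacity and usage. The query below is only a sketch (the exact columns available can vary by Redshift version) showing how full each disk partition is:

  -- Disk usage per partition; "used" and "capacity" are counts of 1 MB blocks
  SELECT owner AS node,
         host,
         diskno,
         used,
         capacity,
         (used::FLOAT / capacity) * 100 AS pct_used
  FROM stv_partitions
  ORDER BY node, diskno;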

3) Amazon Redshift Best Practices: Blocks

Blocks are immutable 1 MB chunks that are used to persist columnar data in Redshift.

Each Block carries in-memory Metadata, including:

  • Zone Maps Information
  • MVCC Data
  • Location of previous/next Block

A full Block contains between 16 and 8.4 million values, depending on the column’s data type and compression.

Because Blocks are immutable, an update never modifies a Block in place: Redshift marks the old row as deleted and writes the new version of the row into a new Block. You therefore need to be careful when making many updates. In this case, run a vacuum (or do a deep copy) right after the updates to reclaim space and restore the sort order. By doing this, you avoid leaving ghost (deleted but not yet removed) rows inside Blocks.
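
As a rough sketch of the deep-copy pattern mentioned above (the sales table name is hypothetical), a deep copy rebuilds the table without ghost rows and then swaps it into place:

  -- Deep copy: rebuild the table, dropping ghost rows and restoring sort order
  CREATE TABLE sales_copy (LIKE sales);

  INSERT INTO sales_copy
  SELECT * FROM sales;

  DROP TABLE sales;
  ALTER TABLE sales_copy RENAME TO sales;

  -- Alternative: reclaim space and re-sort in place
  VACUUM FULL sales;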

What Makes Hevo’s ETL Process Best-In-Class

Providing a high-quality ETL solution can be a difficult task if you have a large volume of data. Hevo’s automated, No-code platform empowers you with everything you need for a smooth data replication experience.

Check out what makes Hevo amazing:

  • Fully Managed: Hevo requires no management and maintenance as it is a fully automated platform.
  • Data Transformation: Hevo provides a simple interface to perfect, modify, and enrich the data you want to transfer.
  • Faster Insight Generation: Hevo offers near real-time data replication so you have access to real-time insight generation and faster decision making. 
  • Schema Management: Hevo can automatically detect the schema of the incoming data and map it to the destination schema.
  • Scalable Infrastructure: Hevo has in-built integrations for 100+ sources (with 40+ free sources) that can help you scale your data infrastructure as required.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
Sign up here for a 14-Day Free Trial!

4) Amazon Redshift Best Practices: Queries

I/O in your database is the biggest concern for query performance. Redshift was designed to reduce I/O but you have to use these components wisely to achieve maximum throughput.

You can have multiple Redshift clusters querying the same data in S3 through a Spectrum cluster, which means you are able to increase the concurrency available to your Redshift workloads.

Using a Spectrum cluster allows you to run SQL queries directly against data stored in your Amazon S3 Data Lake using thousands of Compute Nodes without having to load the data into your Redshift cluster.

Spectrum supports SQL and is ANSI compliant. You should push SQL predicates into the Spectrum layer so that your Redshift cluster is not overburdened.
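
As a hedged sketch of this setup, the statements below register an external schema and table over data in S3 and run a query whose filter is evaluated in the Spectrum layer. The IAM role ARN, Glue database name, S3 paths, and table definition are placeholders:

  -- External schema backed by the AWS Glue Data Catalog
  CREATE EXTERNAL SCHEMA spectrum
  FROM DATA CATALOG
  DATABASE 'spectrum_db'
  IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole'
  CREATE EXTERNAL DATABASE IF NOT EXISTS;

  -- External table over Parquet files in the S3 Data Lake
  CREATE EXTERNAL TABLE spectrum.clickstream (
      event_ts TIMESTAMP,
      user_id  BIGINT,
      page     VARCHAR(256)
  )
  STORED AS PARQUET
  LOCATION 's3://my-data-lake/clickstream/';

  -- The WHERE predicate is processed in the Spectrum layer before
  -- results reach the Redshift cluster
  SELECT user_id, COUNT(*) AS page_views
  FROM spectrum.clickstream
  WHERE event_ts >= '2023-01-01'
  GROUP BY user_id;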

The user also has to ensure the queries are written properly. The execution time for each query will increase as the volume of data being processed by that query and the amount of data moving between nodes increases.

Queries can be optimized by filtering rows with a WHERE clause and by limiting the number of columns scanned, i.e. specifying the names of the required columns instead of scanning everything with “SELECT *”. This restricts the volume of data being processed.
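
For instance, using a hypothetical sales table, prefer an explicit column list plus a predicate over an unrestricted SELECT *:

  -- Avoid: scans every column of every row
  -- SELECT * FROM sales;

  -- Prefer: only the required columns, restricted by a WHERE clause
  SELECT sale_date, customer_id, amount
  FROM sales
  WHERE sale_date >= '2023-01-01';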

5) Amazon Redshift Best Practices: Query Monitoring Rules

Redshift exposes the QMR (Query Monitoring Rules) feature, which allows you to set metrics-based performance boundaries for Workload Management (WLM) queues and their concurrency, and to specify what action to take when a query goes beyond those boundaries.

Monitor your in-flight queries and check whether a query is taking up too much memory, too much CPU, or is returning more rows than needed. QMR allows you to kill such queries, which means your data scientists cannot run runaway queries on your Redshift cluster.
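
One simple way to watch in-flight queries is the STV_RECENTS system view. The sketch below (assuming the standard columns of that view) lists currently running queries, longest first:

  -- Currently running queries; long-running ones are candidates for QMR rules
  SELECT pid,
         user_name,
         duration,   -- microseconds since the query started
         query       -- query text
  FROM stv_recents
  WHERE status = 'Running'
  ORDER BY duration DESC;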

6) Amazon Redshift Best Practices: Distribution Keys

Distribution keys determine where data is stored in Redshift.

You would want to look for the columns that are most commonly joined on by your most frequently run queries. You may also want to look at the queries that consume the most CPU and use that as a mechanism for deciding how to distribute your data.
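
One way to check whether an existing Distribution Key is working well is to look at row skew in the SVV_TABLE_INFO system view; this is a sketch assuming the standard skew_rows column is available:

  -- Tables with the most skew between slices; a high skew_rows value
  -- indicates a lopsided distribution
  SELECT "table",
         diststyle,
         tbl_rows,
         skew_rows
  FROM svv_table_info
  ORDER BY skew_rows DESC
  LIMIT 20;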

7) Amazon Redshift Best Practices: Sorting Data

In the slices on a disk, the data is sorted by a sort key. If there is no Sort Key, it defaults to using the insertion order.

When working with Sort Keys, you would want to use a Sort Key that is frequently used in your queries as a:

  • Query predicate (date range, identifier)
  • Join parameter with a hash key

The Sort Key allows Redshift to use zone maps to avoid reading entire Blocks based on query predicates; e.g. a table with a timestamp Sort Key will skip Blocks containing old data when you filter on recent dates.
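
A minimal sketch of this idea, with a hypothetical events table sorted on its timestamp column:

  CREATE TABLE events (
      event_ts   TIMESTAMP,
      user_id    BIGINT,
      event_type VARCHAR(64)
  )
  DISTSTYLE KEY
  DISTKEY (user_id)
  SORTKEY (event_ts);

  -- Zone maps let Redshift skip Blocks whose event_ts range falls
  -- entirely outside the filter below
  SELECT COUNT(*)
  FROM events
  WHERE event_ts BETWEEN '2023-01-01' AND '2023-01-31';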

8) Amazon Redshift Best Practices: Schema Design

To optimize your data for querying, you need to co-locate frequently joined tables by using Distribution Keys, avoiding data transfers between nodes.

Use Sort Keys on joined columns to allow fast merge joins and the use of zone maps. Columnar compression allows you to denormalize data without a heavy overhead or penalty in incremental storage, letting you simplify queries and limit joins. Always denormalize commonly used join attributes onto large tables.
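
As an illustrative sketch (the customers and orders tables are hypothetical), distributing and sorting both tables on the join column co-locates matching rows on the same slice and enables fast merge joins:

  CREATE TABLE customers (
      customer_id   BIGINT,
      customer_name VARCHAR(100)
  )
  DISTKEY (customer_id)
  SORTKEY (customer_id);

  CREATE TABLE orders (
      order_id    BIGINT,
      customer_id BIGINT,
      order_date  DATE,
      amount      DECIMAL(12,2)
  )
  DISTKEY (customer_id)
  SORTKEY (customer_id);

  -- Joining on the shared key avoids redistributing data between nodes
  SELECT c.customer_name, SUM(o.amount) AS total_spend
  FROM orders o
  JOIN customers c ON c.customer_id = o.customer_id
  GROUP BY c.customer_name;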

9) Amazon Redshift ETL Best Practices: Vacuuming Tables 

AWS does not automatically reclaim the space that is freed when you delete or update rows. To perform an update, Redshift deletes the original row and appends the updated row, so every update is effectively a DELETE followed by an INSERT.

Rows are marked for deletion but are not physically removed, and the query processor will scan these deleted rows as well as the live rows.

Vacuuming is therefore an important aspect of operating clusters if you intend to run them for a long time and make significant updates.

Vacuuming needs to be carried out regularly, using the VACUUM command either against the entire database or against individual tables, to clean up after a bulk delete, a load, or a series of incremental updates. This reclaims space, purges the deleted rows, and improves query performance.
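
A few typical forms of the command, using a hypothetical sales table:

  -- Reclaim space and re-sort a single table
  VACUUM FULL sales;

  -- Only reclaim space from deleted rows, without re-sorting
  VACUUM DELETE ONLY sales;

  -- Only restore the sort order
  VACUUM SORT ONLY sales;

  -- Vacuum every table in the current database
  VACUUM;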

10) Amazon Redshift ETL Best Practices: COPY Command

Amazon Redshift, being a Data Warehouse, requires data to be integrated from various sources. The COPY command is used to bulk-load data from these sources into your Amazon Redshift Data Warehouse. This command ensures the maximum possible throughput due to its support for parallel processing. More information on the COPY command can be found here.

Using the COPY command on multiple small source files rather than one big file is a good Amazon Redshift ETL practice, because loading multiple files allows Amazon Redshift to process them in parallel across its slices.
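
A minimal sketch of such a load; the table, bucket, prefix, and IAM role ARN are placeholders, and the S3 prefix is assumed to contain several gzipped CSV files that the slices load in parallel:

  COPY sales
  FROM 's3://my-etl-bucket/sales/2023-01-25/'
  IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftCopyRole'
  FORMAT AS CSV
  GZIP;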

11) Amazon Redshift ETL Best Practices: UNLOAD Command

The UNLOAD command performs the reverse operation of the COPY command. While the COPY command is used to load data into your Amazon Redshift Data Warehouse, the UNLOAD command is used to export data from the Amazon Redshift Data Warehouse and save it in your Amazon S3 bucket. More information on the UNLOAD command can be found here.

Using the UNLOAD command instead of a plain SELECT to extract large result sets can be considered a good AWS Redshift ETL practice, since UNLOAD writes the data from all Compute Nodes in parallel rather than funneling it through the Leader Node.
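
A hedged example of exporting a query result; again, the table, bucket, prefix, and IAM role ARN are placeholders:

  -- Each slice writes its own gzipped file under the given S3 prefix
  UNLOAD ('SELECT sale_date, customer_id, amount
           FROM sales
           WHERE sale_date >= ''2023-01-01''')
  TO 's3://my-etl-bucket/exports/sales_'
  IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftUnloadRole'
  FORMAT AS CSV
  GZIP
  PARALLEL ON;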

12) Amazon Redshift Security Best Practices: Defining User Groups

Like most Databases and Data Warehousing solutions, AWS Redshift also allows administrators to control access of users over the data stored in the Data Warehouse.

It is a good Amazon Redshift security practice to define groups of users that have access to a particular part of the data in the Data Warehouse. AWS Redshift allows administrators to grant and revoke privileges at the group level. Hence, defining a user group and controlling its access is an easier and more secure process than defining access for each person individually.
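
A minimal sketch of this pattern; the group, user, password, and schema names are hypothetical:

  -- Create a group, add a user to it, and grant read-only access to one schema
  CREATE GROUP analysts;

  CREATE USER jane_doe PASSWORD 'Str0ngPassw0rd9' IN GROUP analysts;

  GRANT USAGE ON SCHEMA sales_mart TO GROUP analysts;
  GRANT SELECT ON ALL TABLES IN SCHEMA sales_mart TO GROUP analysts;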

13) Amazon Redshift Security Best Practices: Encryption

If your AWS Redshift Data Warehouse stores sensitive or highly confidential data, it can be considered a good security practice to enable encryption in the cluster. Upon enabling encryption, all Data Blocks, Metadata, and backups are encrypted. Encryption keys can be managed using AWS Key Management Service (AWS KMS) or AWS CloudHSM (Cloud Hardware Security Module).

14) Amazon Redshift Security Best Practices: Public Access

You should ensure that your Amazon Redshift clusters are not publicly accessible. A publicly accessible Redshift cluster exposes its endpoint to the entire internet, significantly increasing the risk of unauthorized access to your data.

Conclusion

In this post, we have looked at multiple Amazon Redshift Best Practices for optimizing your cluster to pull off high-performance data analytics. These practices will let you make the most of the many features that Redshift has to offer.

In order to utilize Redshift’s complete potential, you not only need to follow all the AWS Redshift Best Practices, you first need to ensure that the warehouse is set up correctly and that data from all your data sources is loaded in the right form. These data sources may include cloud applications, databases, or local files. To do this, you may use custom-built Data Pipelines or opt for a completely managed Data Pipeline solution that works out of the box. Custom development will require you to invest significantly in engineering resources, whereas a No-code Data Pipeline like Hevo can automate this process without any hassle.

Visit our Website to Explore Hevo

Businesses can use automated platforms like Hevo Data to set up the integration and handle the ETL process. It helps you directly transfer data from a variety of sources to Data Warehouses like Amazon Redshift, Business Intelligence tools, or any other desired destination in a fully automated and secure manner, without having to write any code, and will provide you with a hassle-free experience.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite firsthand. You can also have a look at the unbeatable pricing that will help you choose the right plan for your business needs.

Share your experience of learning about AWS Redshift Best Practices in the comments section below!
