Companies use Data Warehouses to store business data from multiple sources in one place so they can run analysis and generate valuable insights from it. Amazon Redshift, a widely used Cloud Data Warehouse service from AWS (Amazon Web Services), provides many tools and features that allow companies to easily query and analyze data according to their needs.

There are several instances where Data Scientists or Data Analysts need to analyze a subset of a large dataset outside the warehouse itself. The Amazon Redshift Unload command helps users save the result of a query to Amazon S3, where the data can then be analyzed in BI tools or consumed by other applications.

Amazon Redshift Unload can save the query result in Apache Parquet format, which AWS reports is up to 2x faster to unload and consumes up to 6x less storage in Amazon S3 than text formats. In this article, you will learn about the importance of the Amazon Redshift Unload command along with its syntax and some examples.

What is Amazon Redshift?

Amazon Redshift is a fast, fully managed cloud data warehouse that makes it simple & cost-effective to analyze all of your data using standard SQL and your existing Business Intelligence (BI) tools. 

It has the following features that make it pretty useful compared to other AWS storage and database services like Amazon S3, Amazon RDS, or Amazon DynamoDB.

  • Redshift gives you fast querying capability over structured data using familiar SQL-based clients & BI Tools. 
  • You can run complex queries against loads of structured data (in terabytes or petabytes) using very sophisticated query optimization & massively parallel query execution. 
  • Costing is minimal, starting from just $0.25 per hour, & can scale to around $1,000 per terabyte per year, which is actually less than the cost of traditional on-premise solutions. Plus, no extra cost is involved for additional security features. 
  • Amazon Redshift automatically takes backups of your data & stores them for a defined retention period. 
  • It also has an additional feature, Amazon Redshift Spectrum, that allows you to run SQL queries directly against large amounts of unstructured data in Amazon S3 data lakes. 

Overall, Amazon Redshift provides a relatively cost-effective & feature-rich solution to meet your data warehousing needs. 

Like all other AWS services, Redshift can be accessed through the AWS Console, where its dashboard lets you create and manage clusters. 

A few more features include: 

  • Columnar Data Storage
  • Advanced Compression 
  • Massively Parallel Processing (MPP)
  • Redshift Spectrum 
  • Materialized Views
  • Scalability 

More detail can be found on the official documentation page.

Hevo, A Simpler Alternative to Integrate your Data for Analysis

Hevo is the only real-time ELT No-code Data Pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integration with 150+ Data Sources (40+ free sources), we help you not only export data from sources & load data to the destinations but also transform & enrich your data, & make it analysis-ready.


Redshift Unload Command

Amazon Redshift is developed around industry-standard SQL, with added functionality to manage large datasets & support high-performance analysis & reporting of those datasets. The SQL language consists of commands that you use to create & manipulate database objects, run queries, load tables, and modify the data in the tables. 

The Redshift Unload command is a great tool that complements the Redshift Copy command by performing exactly the opposite functionality. 

While Copy grabs the data from an Amazon S3 bucket & puts it into a Redshift table, Unload takes the result of a query, and stores the data in Amazon S3. 
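The symmetry between the two commands is easiest to see side by side. The sketch below uses a hypothetical venue table; the bucket path and IAM role ARN are placeholders:

```sql
-- COPY: load CSV files from Amazon S3 into a Redshift table
copy venue
from 's3://mybucket/load/venue_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
csv;

-- UNLOAD: export the result of a query from Redshift back to Amazon S3
unload ('select * from venue')
to 's3://mybucket/unload/venue_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
csv;
```

Note that COPY takes a table name and a source path, while UNLOAD takes an arbitrary SELECT statement and a target prefix.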

The Need for Redshift Unload Command

If you ever need your data to be consumed by different applications, or if you want to analyze your data in ways that you can’t do in Redshift (e.g. machine learning), then it makes sense to export it. 

How to Use the Redshift Unload Command? 

Firstly, you need to determine the data you want to unload into S3. Play around with your SELECT statement until you get it right. Once you have that, here is the basic syntax:

UNLOAD ('select-statement')
TO 's3://object-path/name-prefix'
authorization
[ option [ ... ] ]

where option is
{ [ FORMAT [ AS ] ] CSV | PARQUET
| PARTITION BY ( column_name [, ... ] ) [ INCLUDE ]
| MANIFEST [ VERBOSE ] 
| HEADER           
| DELIMITER [ AS ] 'delimiter-char' 
| FIXEDWIDTH [ AS ] 'fixedwidth-spec'   
| ENCRYPTED [ AUTO ]
| BZIP2  
| GZIP 
| ZSTD
| ADDQUOTES 
| NULL [ AS ] 'null-string'
| ESCAPE
| ALLOWOVERWRITE
| PARALLEL [ { ON | TRUE } | { OFF | FALSE } ]
| MAXFILESIZE [AS] max-size [ MB | GB ] 
| REGION [AS] 'aws-region' }

Standard Redshift Unload Command Parameters

Select Statement

A standard SELECT query whose results are unloaded. The query must be enclosed in single quotation marks, as shown in the syntax above. 

Authorization 

The UNLOAD command needs authorization to write data to Amazon S3. More details on authorization can be found in the official documentation.
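The most common form of authorization is an IAM role attached to the cluster, referenced with the IAM_ROLE clause. A minimal sketch, assuming a hypothetical bucket and a placeholder role ARN:

```sql
unload ('select * from venue')
to 's3://mybucket/unload/venue_'
-- the role must grant write access (s3:PutObject) on the target bucket
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```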

PARTITION BY ( column_name [, … ] ) [INCLUDE]

This option specifies the partition keys for the unload operation. The UNLOAD command automatically partitions output files into partition folders based on the partition-key values, following the Apache Hive convention.
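For example, partitioning a hypothetical sales table by a region column would produce one Hive-style folder per distinct value (the table, bucket, and role names below are placeholders):

```sql
unload ('select * from sales')
to 's3://mybucket/sales_partitioned/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
format as parquet
partition by (region);

-- Output files land in folders such as:
--   s3://mybucket/sales_partitioned/region=east/<files>
--   s3://mybucket/sales_partitioned/region=west/<files>
```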

MANIFEST [ VERBOSE ]

Creates a manifest file that explicitly lists details for the data files that are created by the UNLOAD process. The manifest is a text file in JSON format that lists the URL of each file that was written to Amazon S3.
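A sketch of an unload with MANIFEST (bucket and role names are placeholders); the manifest is written to the same prefix with a manifest suffix:

```sql
unload ('select * from venue')
to 's3://mybucket/venue_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
manifest;

-- s3://mybucket/venue_manifest would then contain JSON along the lines of:
-- {"entries": [
--   {"url": "s3://mybucket/venue_0000_part_00"},
--   {"url": "s3://mybucket/venue_0001_part_00"}
-- ]}
```

The manifest is useful when a downstream COPY or ETL job needs to pick up exactly the files this unload produced.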

HEADER

Adds a header line containing column names at the top of each output file.

DELIMITER [ AS ] 'delimiter-char' 

Specifies a single ASCII character that is used to separate fields in the output file, such as a pipe character ( | ), a comma ( , ), or a tab character ( \t ).
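HEADER and DELIMITER are often combined when the files are destined for spreadsheet or BI tools. A sketch with placeholder names, producing pipe-delimited files with a column-name header row:

```sql
unload ('select * from venue')
to 's3://mybucket/venue_pipe_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
header
delimiter as '|';
```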

REGION [AS] 'aws-region'

Specifies the AWS Region where the target Amazon S3 bucket is located. REGION is required for Redshift UNLOAD to an Amazon S3 bucket that isn’t in the same AWS Region as the Amazon Redshift cluster.
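For instance, if the cluster runs in one Region but the target bucket lives in eu-west-1, the REGION clause makes the cross-Region unload explicit (bucket and role names are placeholders):

```sql
unload ('select * from venue')
to 's3://mybucket-eu/venue_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
csv
region as 'eu-west-1';
```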

These are just a few parameters. Details about all the parameters can be found in the official documentation.

Redshift Unload Command Examples 

There are many use cases for the Redshift Unload command depending upon the scenario. A couple of examples are mentioned below: 

First Example: Unload a Table to CSV Files

The following example unloads the VENUE table and writes the data in CSV format to s3://mybucket/unload/.

Here is the Unload statement: 

unload ('select * from venue')
to 's3://mybucket/unload/' 
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
CSV;

Let’s say the table looks like this: 

id | Name  | City  | Venues
1  | Name1 | City1 | Venue1
2  | Name2 | City2 | Venue2

The unloaded file will look like this: 

1,Name1,City1,Venue1

2,Name2,City2,Venue2
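By default, UNLOAD writes data in parallel across the cluster's slices, which is why the rows above may end up spread over several files. If a downstream consumer needs a single CSV file, parallelism can be turned off (a sketch; names are placeholders, and each serially written file is capped at 6.2 GB):

```sql
unload ('select * from venue')
to 's3://mybucket/unload_single_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
csv
parallel off;
```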

Second Example: Unload a Table to Encrypted Files

The following example unloads the VENUE table to a set of encrypted files using a KMS key. If you specify a manifest file with the ENCRYPTED option, the manifest file is also encrypted. For more information, see Unloading encrypted data files.

The Unload statement is: 

unload ('select * from venue')
to 's3://mybucket/venue_encrypt_kms' 
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
kms_key_id '1234abcd-12ab-34cd-56ef-1234567890ab' 
manifest
encrypted;

The following example unloads the VENUE table to a set of encrypted files using a master symmetric key.

unload ('select * from venue')
to 's3://bucket/venue_encrypt_cmk' 
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
master_symmetric_key 'EXAMPLEMASTERKEYtkbjk/OpCwtYSx/M4/t7DMCDIK722' 
encrypted;

Conclusion

Overall, the Unload command in Amazon Redshift is as important as the Copy command. There are also helpful tools available, like Hevo Data, which make the process of extracting data from Redshift a matter of a few clicks. More details on Hevo Data & Redshift integration can be found on the Hevo website.

Visit our Website to Explore Hevo

Hevo Data provides its users with a simpler platform for integrating data for analysis. It is a no-code data pipeline that can help you combine data from multiple sources. 

It provides you with a consistent and reliable solution to managing data in real-time, ensuring that you always have analysis-ready data in your desired destination. Your job will be to focus on key business needs and perform insightful analysis using BI tools. 

Want to take Hevo for a spin? Sign up here for a 14-day free trial and experience the feature-rich Hevo suite firsthand.

Share your learning experience about Amazon Redshift Unload Commands in the comments section below! 

Sarthak Bhardwaj
Customer Experience Engineer, Hevo

Sarthak is a skilled professional with over 2 years of hands-on experience in JDBC, MongoDB, REST API, and AWS. His expertise has been instrumental in driving Hevo's success, where he excels in adept problem-solving and superior issue management. Sarthak's technical proficiency and strategic approach have consistently contributed to optimizing operations and ensuring seamless performance, making him a vital asset to the team.
