Redshift SELECT INTO 101: Syntax, Usage, and Examples Simplified

on Amazon Redshift, AWS, Data Warehouses, ETL Tutorials • February 21st, 2022


A relational database that supports procedural language allows you to assign a value to a local variable within stored procedures by using the SELECT statement. Teradata and Oracle databases, for example, support the SELECT INTO clause for assigning a value to a local variable.

In this article, we’ll look at how to use the Redshift SELECT INTO clause within Stored Procedures to assign a subquery value to a local variable.

In Redshift, the SELECT INTO statement retrieves data from one or more database tables and assigns the values to variables. To assign a previously declared variable within a stored procedure or a RECORD type variable, use the Redshift SELECT INTO.


Introduction to Amazon Redshift


Amazon Redshift is a petabyte-scale data warehouse solution powered by Amazon Web Services. It is also used for large database migrations because it simplifies data management.

Amazon Redshift’s architecture is based on massively parallel processing (MPP). Amazon Redshift Databases are based on Column-Oriented Databases and are designed to connect to SQL-based clients and BI tools. This enables users to have constant access to data (structured and unstructured) and aids in the execution of Complex Analytic queries.

Amazon Redshift also supports standard ODBC and JDBC connections.
Because Amazon Redshift is a fully-managed Data Warehouse, users can automate administrative tasks to focus on Data Optimization and Data-driven Business decisions rather than performing repetitive tasks.

Each Cluster in an Amazon Redshift Data Warehouse has its own set of computing resources and runs its own Amazon Redshift Engine with at least one Database.

Key Features of Redshift

  • Massively Parallel Processing (MPP): A large processing job is divided into smaller jobs that are then distributed across a cluster of Compute Nodes. These Nodes process data in parallel rather than sequentially.
  • Integrated Analytics Ecosystem: AWS’s built-in ecosystem services make End-to-End Analytics Workflows easier to manage while avoiding compliance and operational stumbling blocks. Some well-known examples include AWS Lake Formation, AWS Glue, AWS EMR, AWS DMS, AWS Schema Conversion Tool, and others.
  • SageMaker Support: A must-have for today’s Data Professionals, it enables users to build and train Amazon SageMaker models for Predictive Analytics using data from their Amazon Redshift Warehouse.
  • ML For Optimal Performance: Amazon Redshift has robust Machine Learning (ML) capabilities that enable high throughput and speed. Its sophisticated algorithms predict incoming queries based on specific factors, allowing important jobs to be prioritized.
  • Fault Tolerance: Amazon Redshift continuously monitors its Clusters and Nodes. When a Node or Cluster fails, Amazon Redshift replicates all data to healthy Nodes or Clusters automatically.

Simplify Amazon Redshift ETL using Hevo’s No-code Data Pipeline

Hevo Data, a No-code Data Pipeline, helps you load data from any data source, such as Databases, SaaS applications, Cloud Storage, SDKs, and Streaming Services, and simplifies the ETL process. It supports 100+ Data Sources (including 40+ free data sources), and setting up a pipeline is a simple 3-step process: select the data source, provide valid credentials, and choose the destination. Hevo loads the data onto the desired Data Warehouse, such as Amazon Redshift, enriches the data, and transforms it into an analysis-ready form without writing a single line of code.

Its completely automated pipeline delivers data in real time without any loss from source to destination. Its fault-tolerant and scalable architecture ensures that data is handled in a secure, consistent manner with zero data loss and supports different forms of data. The solutions provided are consistent and work with different Business Intelligence (BI) tools as well.

Get Started with Hevo for free

Check out why Hevo is the Best:

  • Secure: Hevo has a fault-tolerant architecture that ensures that the data is handled in a secure, consistent manner with zero data loss.
  • Schema Management: Hevo takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the destination schema.
  • Minimal Learning: Hevo, with its simple and interactive UI, is extremely simple for new customers to work on and perform operations.
  • Hevo Is Built To Scale: As the number of sources and the volume of your data grows, Hevo scales horizontally, handling millions of records per minute with very little latency.
  • Incremental Data Load: Hevo allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
  • Live Monitoring: Hevo allows you to monitor the data flow and check where your data is at a particular point in time.
Sign up here for a 14-day Free Trial!

SELECT INTO Variable in Redshift

In Redshift, the SELECT INTO statement retrieves data from one or more database tables and assigns the values to variables. Use it inside a stored procedure to assign a query result to a previously declared variable or to a RECORD type variable.

SELECT INTO can also select rows from any query and insert them into a new table, which can be either temporary or persistent. This usage is similar to the T-SQL SELECT INTO syntax used in Microsoft SQL Server.
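As a sketch of the variable-assignment form, the stored procedure below declares a local variable and fills it with a subquery result via SELECT INTO. The EVENT table is from the Redshift TICKIT sample database used elsewhere in this article; the procedure name is made up for illustration.

```sql
-- Illustrative procedure name; EVENT is the TICKIT sample table.
CREATE OR REPLACE PROCEDURE get_event_count()
AS $$
DECLARE
  event_count INTEGER;
BEGIN
  -- SELECT INTO assigns the single-row result to the declared variable.
  SELECT INTO event_count COUNT(*) FROM event;
  RAISE INFO 'Number of events: %', event_count;
END;
$$ LANGUAGE plpgsql;

CALL get_event_count();
```

If the query returns more than one row, only the first row is assigned, so queries used this way should be written to return a single row.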

Redshift SELECT INTO Syntax

Rows returned by any query can be selected and inserted into a new table. You can choose between creating a temporary or a persistent table.

[ WITH with_subquery [, ...] ]
SELECT
[ TOP number ] [ ALL | DISTINCT ]
* | expression [ AS output_name ] [, ...]
INTO [ TEMPORARY | TEMP ] [ TABLE ] new_table
[ FROM table_reference [, ...] ]
[ WHERE condition ]
[ GROUP BY expression [, ...] ]
[ HAVING condition [, ...] ]
[ { UNION | INTERSECT | { EXCEPT | MINUS } } [ ALL ] query ]
[ ORDER BY expression [ ASC | DESC ] ]
[ LIMIT { number | ALL } ]
[ OFFSET start ]

See below for more information on the parameters of this command.

  • WITH: A WITH clause is an optional clause that comes before a query’s SELECT list. WITH specifies one or more common table expressions.
  • SELECT: The SELECT list specifies the columns, functions, and expressions that the query should return. The query’s output is represented by the list.
  • FROM: A query’s FROM clause lists the table references (tables, views, and subqueries) from which data is selected.
  • WHERE: The WHERE clause includes conditions that either join tables or apply predicates to table columns.
  • GROUP BY: The GROUP BY clause specifies the query’s grouping columns.
  • HAVING: The HAVING clause adds a condition to the intermediate grouped result set returned by a query.
  • UNION, INTERSECT & EXCEPT: The set operators UNION, INTERSECT, and EXCEPT are used to compare and merge the results of two separate query expressions.
  • ORDER BY: The ORDER BY clause sorts a query’s result set.
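To show how several of these clauses combine with INTO in one statement, here is a sketch that builds a temporary table from a WITH subquery, using the TICKIT sample tables (SALES, EVENT) referenced later in this article; the table and alias names are made up for illustration.

```sql
-- WITH subquery feeding SELECT ... INTO a temporary table.
-- SALES and EVENT are TICKIT sample tables; top_events is illustrative.
WITH event_sales AS (
  SELECT eventid, SUM(qtysold) AS total_qty
  FROM sales
  GROUP BY eventid
)
SELECT e.eventname, es.total_qty
INTO temp table top_events
FROM event e, event_sales es
WHERE e.eventid = es.eventid
ORDER BY es.total_qty DESC
LIMIT 10;
```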

Redshift SELECT INTO Example

Create a new table named NEWEVENT by selecting all of the rows from the EVENT table:

select * into newevent from event;

Insert the result of an aggregate query into a temporary table called PROFITS:

select username, lastname, sum(pricepaid-commission) as profit
into temp table profits
from sales, users
where sales.sellerid=users.userid
group by 1, 2
order by 3 desc;
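The RECORD type variable form mentioned earlier can be sketched as follows: inside a stored procedure, SELECT INTO assigns an entire result row to a RECORD variable, whose fields are then accessed with dot notation. USERS is the TICKIT sample table used in the query above; the procedure name is made up for illustration.

```sql
-- Assigning a whole row to a RECORD variable inside a stored procedure.
CREATE OR REPLACE PROCEDURE show_first_user()
AS $$
DECLARE
  user_row RECORD;
BEGIN
  SELECT INTO user_row username, lastname
  FROM users
  ORDER BY userid
  LIMIT 1;
  RAISE INFO 'First user: % %', user_row.username, user_row.lastname;
END;
$$ LANGUAGE plpgsql;
```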


Conclusion

This blog goes into great detail about the Redshift SELECT INTO statement. It also provides an overview of Amazon Redshift before delving into the Redshift SELECT INTO statement.

The Redshift SELECT INTO command is simple to use and follows PostgreSQL syntax conventions. However, the user should be aware of some limitations. Most of the time, query validation will not return an error for a type mismatch; instead, Redshift may carry out its own implicit conversions.
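Because the new table's column types are inferred from the SELECT list, one way to stay in control of the implicit conversions mentioned above is to cast explicitly. A sketch, reusing the TICKIT SALES columns from the earlier example (the target table name is made up):

```sql
-- Cast the computed column explicitly so the new table gets a known type
-- instead of one inferred by implicit conversion. profits_typed is illustrative.
SELECT sellerid,
       CAST(SUM(pricepaid - commission) AS DECIMAL(12, 2)) AS profit
INTO temp table profits_typed
FROM sales
GROUP BY sellerid;
```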

To become more efficient in handling your Databases, it is preferable to integrate them with a solution that can carry out Data Integration and Management procedures for you without much ado and that is where Hevo Data, a Cloud-based ETL Tool, comes in. Hevo Data supports 100+ Data Sources and helps you transfer your data from these sources to Data Warehouses like Amazon Redshift in a matter of minutes, all without writing any code!

Visit our Website to Explore Hevo

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand.

Share your experience of understanding the Redshift SELECT INTO variable in the comments section below!
