Amazon Redshift is a fully managed data warehouse offered as a service. Redshift is based on a cluster architecture and allows users to select from multiple kinds of instances depending on their use cases. Read more about Redshift's features here.

Our intention in this post is to look in detail at the Amazon Redshift INSERT INTO statement.


Introduction to Amazon Redshift


Amazon Redshift is a columnar database with a PostgreSQL standard querying layer. Redshift's biggest selling point is flexibility, which it offers customers through its 'pay as you go' pricing model. Redshift is tailor-made for executing lightning-fast complex queries over millions of rows of data. The customer is also relieved of all the maintenance and infrastructure management activities involved in keeping a highly available data warehouse running.

Key Features of Amazon Redshift

  • Enhanced Scalability: Amazon Redshift is known for providing consistently fast performance, even in the face of thousands of concurrent queries. Amazon Redshift Concurrency Scaling supports nearly unlimited concurrent queries and users. By leveraging Redshift's managed storage, capacity can be added to support workloads of up to 8 PB of compressed data. Scaling requires just a simple API call or a few clicks in the console.
  • Easy Management: Amazon Redshift automates oft-repeated maintenance tasks so that you can focus on gathering actionable insights from your data. It is fairly simple to set up and operate. A new Data Warehouse can be deployed with just a few clicks in the AWS console. Key administrative tasks like backup and replication are automated. Data in Amazon Redshift is automatically backed up to Amazon S3. Amazon Redshift can replicate your snapshots to Amazon S3 in a different region asynchronously for disaster recovery. Automatic Table Optimization selects the best distribution and sort keys to enhance performance for the cluster's workload. Amazon Redshift also gives you the flexibility to run queries in the console or with Business Intelligence tools, libraries, and SQL client tools. Also check out our Redshift Sort Keys article.
  • Robust Security: Amazon Redshift is known for providing robust data security features at no extra cost. Amazon Redshift allows you to configure firewall rules to take control of network access to a specific Data Warehouse Cluster. Amazon Redshift also specializes in granular column and row-level security controls that ensure that users can only view data with the right type of access. Apart from these, Amazon Redshift also delivers on its promise of reliability and compliance through tokenization, end-to-end encryption, network isolation, and auditing.  
  • Flexible Performance: Amazon Redshift distinguishes itself by offering swift, industry-leading performance with a keen focus on flexibility. This is made possible through result caching, materialized views, efficient storage, RA3 instances, and high-performance query processing, to name a few. Result Caching delivers sub-second response times for repeat queries, so Business Intelligence tools, dashboards, and visualizations that leverage repeat queries experience a significant performance boost.

Understanding the Need for Redshift INSERT INTO Statement 


Being a data warehouse offered as a service, Redshift is typically used as part of an extract-transform-load pipeline. This means there is an obvious need to insert rows of data into Redshift continuously depending on the incoming data rate.

These continuous inserts may not be a typical use case for bulk insert, since the data as a whole may not be at rest and may be part of intermittent streams. In such cases, the available options are to either write data to Redshift as individual rows or to create batches of rows for a multi-row insert. In both cases, the Redshift INSERT INTO statement is the one to use. Let's look into the details of how to use the Redshift INSERT INTO statement.

Simplify your Data Analysis with Hevo’s No-code Data Pipeline

A fully managed No-code Data Pipeline platform like Hevo Data helps you integrate and load data from 100+ sources (including 30 Free Data Sources) to a destination of your choice, such as Redshift, in real-time in an effortless manner.

Get Started with Hevo for Free

Hevo, with its minimal learning curve, can be set up in just a few minutes, allowing users to load data without having to compromise performance. Its strong integration with umpteen sources allows users to bring in data of different kinds smoothly without having to write a single line of code.

Check out some of the cool features of Hevo:

  • Completely Automated: The Hevo platform can be set up in just a few minutes and requires minimal maintenance.
  • Transformations: Hevo provides preload transformations through Python code. It also allows you to run transformation code for each event in the Data Pipelines you set up. To carry out a transformation, you edit the properties of the event object received as a parameter by the transform method. Hevo also offers drag-and-drop transformations like Date and Control Functions, JSON, and Event Manipulation, to name a few. These can be configured and tested before being put to use.
  • Connectors: Hevo supports 100+ integrations to SaaS platforms such as Trello, files, databases, analytics, and BI tools. It supports various destinations including Amazon Redshift, Snowflake Data Warehouses; Amazon S3 Data Lakes; and MySQL, SQL Server, TokuDB, DynamoDB, PostgreSQL databases to name a few.  
  • Real-Time Data Transfer: Hevo provides real-time data migration, so you always have analysis-ready data.
  • 100% Complete & Accurate Data Transfer: Hevo’s robust infrastructure ensures reliable data transfer with zero data loss.
  • Scalable Infrastructure: Hevo has in-built integrations for 100+ sources that can help you scale your data infrastructure as required.
  • 24/7 Live Support: The Hevo team is available round the clock to extend exceptional support to you through chat, email, and support calls.
  • Schema Management: Hevo takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the destination schema.
  • Live Monitoring: Hevo allows you to monitor the data flow so you can check where your data is at a particular point in time.
Sign up here for a 14-Day Free Trial!

Understanding the Usage and Examples of Redshift INSERT INTO Statement

Redshift INSERT INTO follows the template below.

INSERT INTO table_name [ ( column [, ...] ) ]
{ DEFAULT VALUES |
VALUES ( { expression | DEFAULT } [, ...] )
[, ( { expression | DEFAULT } [, ...] ) [, ...] ] |
query }

In the above template, let’s understand the purpose of each of the parameters. 

Redshift allows the user to specify an optional list of columns. If columns are specified, the values must be supplied in the same order. If columns are not specified, the value list must contain an entry for every column of the table, in the same order as the table's columns.

Download the Cheatsheet on How to Set Up High-performance ETL to Redshift
Learn the best practices and considerations for setting up high-performance ETL to Redshift

DEFAULT VALUES is a clause that, when used with the Redshift INSERT INTO statement, inserts a row in which every column takes the default value that was set during table creation.
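As a minimal sketch, assuming a hypothetical audit_log table whose columns all declare defaults at creation time (the table and its defaults are illustrative, not part of the original example):

CREATE TABLE audit_log (
    log_id     INT         DEFAULT 0,
    created_at TIMESTAMP   DEFAULT GETDATE(),
    status     VARCHAR(20) DEFAULT 'pending'
);

-- Inserts one row made up entirely of the defaults declared above
INSERT INTO audit_log DEFAULT VALUES;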

Redshift allows the use of an expression in place of a column value. Redshift will evaluate the expression and insert the result, after converting its data type to match the column data type.

The DEFAULT keyword can also be supplied in place of a value to insert the default value defined for that column.
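A sketch combining both ideas, reusing the hypothetical audit_log table from above:

-- The arithmetic expression is evaluated and cast to the column's type;
-- DEFAULT inserts the default declared for created_at at table creation
INSERT INTO audit_log (log_id, created_at, status)
VALUES (20 + 22, DEFAULT, UPPER('pending'));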

The Redshift INSERT INTO statement also allows the use of a query instead of VALUES. Redshift will execute the query and insert all of its resultant rows, provided the results are compatible with the table's column structure.

Let us now look at some specific examples of single and multi-row inserts. Consider the table below, called employee_records, with the fields emp_id, department, designation, and category.
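As a reference, here is one way such a table might be created (the data types are assumptions; the original example does not specify them):

CREATE TABLE employee_records (
    emp_id      INT,
    department  VARCHAR(50),
    designation VARCHAR(50),
    category    VARCHAR(50)
);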

Single row insert

INSERT INTO employee_records (emp_id, department, designation, category) VALUES (1, 'admin', 'assistant', 'contract');

Single row without column list

INSERT INTO employee_records VALUES (1, 'admin', 'assistant', 'contract');

The advantage of this kind of INSERT INTO is obviously the shorter query length. The disadvantage is that it is error-prone: validation will fail if there is a data type mismatch caused by a change in the order of columns. It is recommended to use INSERT INTO with a column list wherever possible.

Multi-row insert

INSERT INTO employee_records (emp_id, department, designation, category) VALUES
(1, 'admin', 'assistant', 'contract'),
(2, 'admin', 'senior assistant', 'permanent'),
(3, 'security', 'manager', 'permanent');

Even though it is possible to insert a large number of rows through this method, it is advised to use the COPY command instead if the use case is a bulk insert. The COPY command has inherent optimizations built in for a faster load of data.
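For reference, a minimal sketch of a COPY load from Amazon S3 (the bucket path and IAM role ARN are placeholders):

-- Bulk-load CSV files from S3; bucket and role ARN are placeholders
COPY employee_records
FROM 's3://example-bucket/employee_records/'
IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftRole'
FORMAT AS CSV;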

INSERT INTO using query

For this example, consider another table called employee_records_stg with the same structure. To insert all rows of employee_records into this table, use the statement below.

INSERT INTO employee_records_stg (SELECT * FROM employee_records);
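The query can also filter or transform rows on the way in; for instance (a sketch):

-- Copy only the permanent employees into the staging table
INSERT INTO employee_records_stg
SELECT * FROM employee_records
WHERE category = 'permanent';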

Understanding the Limitations of Redshift INSERT INTO Statement

Even though INSERT INTO is the recommended way of inserting rows when it comes to an intermittent stream of records, it does have its share of limitations.

  1. The user needs to be extremely careful about the data types of incoming data relative to the table structure, since Redshift will automatically convert data types to conform to the table structure wherever possible.
  2. It is not suitable for large amounts of data, and attempting bulk loads with it may result in painfully slow inserts.
  3. Using this statement without the column list is error-prone because of the default behavior when the value list does not match the column structure. If the value list is shorter than the column list, Redshift will try to insert the values into the first n columns and will not raise an error as long as the data types are compatible, as illustrated in the sketch below.
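A sketch of this silent-mismatch pitfall, using the employee_records table from earlier (this assumes the trailing category column is nullable or has a default):

-- Three values for a four-column table: they land in emp_id,
-- department, and designation; category falls back to its
-- default/NULL, and no error is raised if the types are compatible
INSERT INTO employee_records VALUES (4, 'finance', 'analyst');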

Conclusion

This blog talks about the Redshift INSERT INTO statement in great detail. It also gives a brief introduction to Amazon Redshift before diving into the INSERT INTO statement.

The Redshift INSERT INTO command is very easy to use and conforms to the standard PostgreSQL querying protocol. However, the user should be mindful of the above-mentioned limitations. Much of the time, query validation may not even raise an error, as Redshift may silently perform its own automatic conversions. When it comes to data warehouses with millions of rows, such discrepancies and errors are difficult to hunt down and fix. One of the best ways to steer clear of such issues is to use a fully managed Data Integration Platform such as Hevo.

Visit our Website to Explore Hevo

Integrating complex data from numerous sources into a destination of your choice, such as Amazon Redshift, can be challenging, especially for a beginner, and this is where Hevo saves the day.

Hevo Data is built to handle all the limitations posed by INSERT INTO. Hevo can help load data from any data source to Redshift through a very easy-to-use interface. Hevo provides a code-free, hassle-free platform to load data into Redshift in real-time with zero data loss.

Explore the complete features here or sign up for a 14-day free trial to experience the platform.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You can also have a look at the unbeatable pricing that will help you choose the right plan for your business needs.

How do you currently move data to Redshift? Let us know in the comments.

Founder and CTO, Hevo Data

Sourabh has more than a decade of experience building scalable real-time analytics systems and has worked for companies like Flipkart, tBits Global, and Unbxd. He is experienced in technologies like MySQL, Hibernate, Spring, CXF, PHP, ExtJS, and Shell.

No-code Data Pipeline for Redshift

Get Started with Hevo