Redshift Create User Command: Syntax, Parameters, and 5 Easy Examples



Amazon Redshift is one of the most trusted and popular Data Warehouse solutions currently on the market. It works seamlessly with SQL commands and provides a wide range of functionality, from storing data to analyzing it.

Amazon Redshift users are created by privileged users known as Superusers, using the Redshift CREATE USER command.

This article gives a comprehensive guide on the Redshift CREATE USER command, showing its syntax, parameters, and use cases.


What is Amazon Redshift?


Redshift is a columnar database that supports OLAP (Online Analytical Processing). It is based on PostgreSQL version 8.0.2, which means users can run regular SQL queries against Redshift. It uses Massively Parallel Processing (MPP), a technology originally developed by ParAccel, to deliver fast queries: multiple processors, even across multiple servers, work in parallel on heavy workloads. The main benefit of using AWS Redshift is the cost advantage for your business, as it is a fraction of the price of competitors such as Teradata and Oracle.

Benefits of Using Amazon Redshift

  • Speed: MPP technology makes it possible to process and execute a large number of queries over large volumes of data quickly, at a price point other cloud service providers struggle to match.
  • Data encryption: Amazon provides encryption for the data in your Redshift deployment. You decide which operations need to be encrypted and which do not. Data encryption provides an additional layer of security.
  • Familiarity: Redshift is based on PostgreSQL, so all standard SQL queries work with it. Additionally, you can choose the SQL, ETL (extract, transform, load), and business intelligence (BI) tools you are familiar with; you are not forced to use the tools provided by Amazon.
  • Smart optimization: For large datasets, there are often several ways to query the same data with the same parameters, and different commands have different resource costs. AWS Redshift provides tools and information to improve your queries, which you can use to make operations faster and more resource-efficient.
  • Automated repetitive tasks: Redshift can automate tasks that need to be repeated, such as generating daily, weekly, or monthly reports, running resource and cost reviews, or performing routine maintenance to clean up your data. You can automate all of this using the actions provided by Redshift.
  • Concurrency scaling: AWS Redshift automatically scales up to support the growth of concurrent workloads.
  • Query volume: This is where MPP technology shines. You can send thousands of queries to your dataset at any time without Redshift slowing down, as it dynamically allocates processing and memory resources to handle increasing demand.
  • AWS integration: Redshift works well with other AWS tools. You can set up integrations between services depending on your needs and optimal configuration.
  • Redshift API: Redshift has a robust API with extensive documentation. It can be used to send queries and fetch results from API tools, and it can also be used in Python programs to simplify coding.
  • Security: Cloud security is handled by Amazon, while application security within the cloud must be provided by the user. Amazon offers access control, encryption, and virtual private clouds to provide further levels of security.
  • Machine learning: Redshift uses machine learning concepts to predict and analyze queries. Together with MPP, this helps Redshift perform faster than most other solutions on the market.
  • Easy deployment: Redshift clusters can be deployed anywhere in the world, from anywhere, in minutes. Within minutes you can have a powerful data warehousing solution at a fraction of the cost of competitors.
  • Consistent backup: Amazon automatically backs up your data on a daily basis, and backups are distributed across several locations. They can be used for recovery in the event of a mistake, failure, or damage, eliminating the risk of data loss on your site.
  • AWS analytics: AWS offers many analytical tools, all of which work well with Redshift. As a native member of the AWS family, Redshift has built-in integration with AWS analytics services, and Amazon also supports integrating other analytics tools with Redshift.
  • Open formats: Redshift can consume and produce data in many open formats. The most commonly supported are the Apache Parquet and Optimized Row Columnar (ORC) file formats.
  • Partner ecosystem: AWS was one of the first cloud service providers in the Cloud Data Warehouse market, and many customers rely on Amazon for their infrastructure. Additionally, AWS has a strong network of partners that build third-party applications and provide implementation services. You can leverage this partner ecosystem to find the best implementation solution for your organization.

Learn more about Amazon Redshift.

Simplify Data Analysis with Hevo’s No-code Data Pipeline

Hevo Data, a No-code Data Pipeline, helps you load data from any data source such as Databases, SaaS applications, Cloud Storage, SDKs, and Streaming Services, and simplifies the ETL process. It supports 100+ data sources (including 30+ free data sources) such as Asana, and setup is a 3-step process: select the data source, provide valid credentials, and choose the destination. Hevo not only loads the data onto the desired Data Warehouse/destination but also enriches it and transforms it into an analysis-ready form, all without writing a single line of code.


Its completely automated pipeline delivers data in real time without any loss from source to destination. Its fault-tolerant and scalable architecture ensures that the data is handled in a secure, consistent manner with zero data loss, and supports different forms of data. The solutions provided are consistent and work with different BI tools as well.

Check out why Hevo is the Best:

  • Secure: Hevo has a fault-tolerant architecture that ensures that the data is handled in a secure, consistent manner with zero data loss.
  • Schema Management: Hevo takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the destination schema.
  • Minimal Learning: Hevo, with its simple and interactive UI, is extremely simple for new customers to work on and perform operations.
  • Hevo Is Built To Scale: As the number of sources and the volume of your data grows, Hevo scales horizontally, handling millions of records per minute with very little latency.
  • Incremental Data Load: Hevo allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
  • Live Monitoring: Hevo allows you to monitor the data flow and check where your data is at a particular point in time.

Redshift CREATE USER command

The Redshift CREATE USER command is used to create a new user account for the database. Only a superuser has the privilege to run this command and create a new user.

Redshift CREATE USER command: Syntax

CREATE USER name [ WITH ]
PASSWORD { 'password' | 'md5hash' | 'sha256hash' | DISABLE }
[ option [ ... ] ]

Redshift CREATE USER command: Basic Parameters

name: This parameter defines the name of the database user created with the Redshift CREATE USER command.

PASSWORD: Sets the user's password in the Redshift CREATE USER command. By default, users can change their own passwords, but this can be disabled.

  • DISABLE is used to prevent a user from having a password. When a user's password is disabled, the password is deleted from the system and the user can log on only with temporary AWS Identity and Access Management (IAM) credentials. Only a superuser can enable or disable passwords, and a superuser's password can't be disabled.
  • If the password is plain text, it needs to meet certain criteria:
    • It must be 8 to 64 characters in length.
    • It must include at least one numeric value, one lowercase letter, and one uppercase letter.
    • It may use any printable ASCII characters (codes 33-126) except the single quote ('), double quote ("), backslash (\), slash (/), and at sign (@).
  • The password in Redshift CREATE USER can also be an MD5 hash of a string, which is safer than plain text. To use an MD5 hash as the password:
    • Concatenate the password and username. For example, if the username is user1 and the password is ez, the concatenated string follows the structure PASSWORD+USERNAME, which results in ezuser1.
    • Convert the concatenated string into a 32-character MD5 hash string. There are many tools that can produce an MD5 hash; the following example uses Redshift's own MD5 function together with the concatenation operator (||) to return a 32-character hash string.
select md5('ez' || 'user1');
    • Prefix the resulting hash with md5 and pass the combined string as the md5hash argument.
create user user1 password 'md5153c434b4b77c89e6b94f12c5393af5b';
    • Log in to the database with the username and the original plain-text password (user1 and ez in this example).
  • A SHA-256 hash is another secure option that can be used instead of a plain-text password. A SHA-256 password consists of two parts, a salt and a digest:
    • A valid SHA-256 digest, together with the 256-bit salt that was used to create it, forms a valid SHA-256 hash password.
      • The digest is the output of the hashing function.
      • The salt is randomly generated data that is combined with the password to help reduce patterns in the hashing function's output.
  • The SHA-256 password is passed as a pipe-delimited string: either sha256| followed by the plain-text password (in which case Redshift generates the salt and digest for you), or a string containing a valid digest together with the salt used to create it, as the following examples show.
  • Examples of SHA-256 passwords:
    • In this example, Amazon Redshift manages the salt and digest itself.
CREATE USER admin PASSWORD 'sha256|Mypassword1';
    • In this example, a valid salt and digest were supplied to form the password.
CREATE USER admin PASSWORD 'sha256|fe95f2bc7c4a111b6f0f7d0b60bfedd1935fb295f8dce1d62708ab8d2f564baf|c721bff5d9042cf541ff7b9d48fa8a6e545c19a763e3710151f9513038b0f6c6';
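The salt-and-digest generation above can be sketched in Python. This is a minimal illustration, not a Redshift API: the exact digest formula (here assumed to be SHA-256 of the password concatenated with the raw salt bytes) and the field order of the pipe-delimited string are assumptions you should verify against the official Redshift documentation before use.

```python
import hashlib
import secrets

def make_sha256_password(password):
    """Build a pipe-delimited SHA-256 password string for CREATE USER.

    Hypothetical helper: the digest formula and field order are
    assumptions -- check them against the Redshift documentation.
    """
    salt = secrets.token_bytes(32)  # 256-bit random salt
    digest = hashlib.sha256(password.encode("utf-8") + salt).hexdigest()
    return "sha256|" + digest + "|" + salt.hex()

print(make_sha256_password("Mypassword1"))
```

Because the salt is random, every call produces a different digest for the same password, which is exactly the patterns-hiding property the salt exists to provide.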

NOTE: If you set a password in plain text without specifying the hashing function, an MD5 digest is generated using the username as the salt.
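The plain-text rules and the MD5 steps above can be sketched in Python. This is a minimal sketch; validate_plaintext_password and redshift_md5_password are illustrative helper names, not part of any Redshift library.

```python
import hashlib

# Printable ASCII 33-126, minus the characters the rules above disallow.
ALLOWED = set(map(chr, range(33, 127))) - set("'\"\\/@")

def validate_plaintext_password(password):
    """Check a plain-text password against the rules above: 8-64 chars,
    at least one digit, one lowercase and one uppercase letter, and only
    allowed ASCII characters."""
    return (
        8 <= len(password) <= 64
        and any(c.isdigit() for c in password)
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and all(c in ALLOWED for c in password)
    )

def redshift_md5_password(username, password):
    """Build the md5hash argument: 'md5' + MD5(password || username).
    Redshift stores user names in lowercase, so lowercase it first."""
    digest = hashlib.md5((password + username.lower()).encode("utf-8")).hexdigest()
    return "md5" + digest

print(validate_plaintext_password("abcD1234"))  # True
print(redshift_md5_password("user1", "ez"))
```

The second function reproduces the select md5('ez' || 'user1') step client-side, so the plain-text password never has to travel to the database.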

Redshift CREATE USER command: Options

There are multiple parameters or options that can be used with Redshift CREATE USER.

  • CREATEDB | NOCREATEDB: Gives the new user the privilege to create databases. The default is NOCREATEDB, so the permission must be granted explicitly.
  • CREATEUSER | NOCREATEUSER: The CREATEUSER option creates a superuser with all database privileges, including Redshift CREATE USER. The default is NOCREATEUSER.
  • SYSLOG ACCESS { RESTRICTED | UNRESTRICTED }: States the level of access a user has to Amazon Redshift system tables and views.
    • RESTRICTED: The user can see only their own rows in user-visible system tables and views.
    • UNRESTRICTED: The user can see all rows, including rows generated by other users, in user-visible system tables and views. UNRESTRICTED doesn't give a regular user access to superuser-visible tables; only superusers can see those.
  • IN GROUP groupname: Defines the name of the group the user will belong to. Multiple groups can be listed at once.
  • VALID UNTIL abstime: Defines the maximum time the password stays valid after creation. By default, there is no time limit.
  • CONNECTION LIMIT { limit | UNLIMITED }: Defines the maximum number of database connections the user can have open concurrently. Superusers have UNLIMITED database connections.
  • SESSION TIMEOUT limit: Defines the maximum time a session stays active, from 60 seconds (1 minute) to 1,728,000 seconds (20 days). By default, the timeout is decided by the cluster.
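The options above can be combined when composing a statement. Here is a minimal Python sketch; build_create_user is a hypothetical helper for illustration only, and its quoting is deliberately naive (it rejects single quotes rather than escaping them).

```python
def build_create_user(name, password, createdb=False,
                      connection_limit=None, session_timeout=None):
    """Assemble a CREATE USER statement from a subset of the options above.

    Hypothetical helper: it does not validate the user name and rejects
    single quotes in the password instead of escaping them.
    """
    if "'" in password:
        raise ValueError("single quotes are not allowed here")
    parts = ["CREATE USER {} PASSWORD '{}'".format(name, password)]
    if createdb:
        parts.append("CREATEDB")
    if connection_limit is not None:
        parts.append("CONNECTION LIMIT {}".format(connection_limit))
    if session_timeout is not None:
        parts.append("SESSION TIMEOUT {}".format(session_timeout))
    return " ".join(parts) + ";"

print(build_create_user("hevo", "abcD1234", createdb=True, connection_limit=30))
# CREATE USER hevo PASSWORD 'abcD1234' CREATEDB CONNECTION LIMIT 30;
```

The generated statement matches the shape of the examples in the next section; in real code, prefer your driver's parameter handling over string formatting.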

Redshift CREATE USER command: Examples

The following Redshift CREATE USER command creates a user named hevo with the password "abcD1234", database creation privileges, and a connection limit of 30.

CREATE USER hevo with password 'abcD1234' createdb connection limit 30;

Querying the PG_USER_INFO catalog table shows the details of all the database users.

select * from pg_user_info;
 usename   | usesysid | usecreatedb | usesuper | usecatupd | passwd   | valuntil | useconfig | useconnlimit
 rdsdb     |        1 | true        | true     | true      | ******** | infinity |           |
 adminuser |      100 | true        | true     | false     | ******** |          |           | UNLIMITED
 hevo      |      102 | true        | false    | false     | ******** |          |           | 30

The following command creates a user whose password is valid until December 10, 2022.

CREATE USER hevo with password 'abcD1234' valid until '2022-12-10';

This Redshift CREATE USER command creates a user named hevo with a case-sensitive password that contains special characters.

CREATE USER hevo with password '@AbC4321!';

To use a backslash (\) in your MD5 password in the Redshift CREATE USER command, escape the backslash with another backslash in your source string. The following example computes the hash for a user named slashpass with a single backslash (\) as the password, and then creates that user.

select md5('\\'||'slashpass');
CREATE USER slashpass password 'md50c983d1a624280812631c5389e60d48c';

This Redshift CREATE USER command creates a user named hevo with the session timeout set to 120 seconds.

CREATE USER hevo password 'abcD1234' SESSION TIMEOUT 120;


Conclusion

The Redshift CREATE USER command offers a wide range of options for granting privileges to a new user added to the database.

Redshift is a trusted data warehouse that many companies use to store their data because of the benefits it provides, but transferring data into it can be a hectic task. An automated data pipeline helps solve this issue, and this is where Hevo comes into the picture. Hevo Data is a No-code Data Pipeline with 100+ pre-built integrations that you can choose from.

Visit our website to explore Hevo.

Hevo can help you integrate your data from numerous sources and load it into a destination where you can analyze real-time data with a BI tool such as Tableau. It will make your life easier and data migration hassle-free. It is user-friendly, reliable, and secure.

SIGN UP for a 14-day free trial and see the difference!

Share your experience of learning about Redshift CREATE USER in the comments section below.

Former Research Analyst, Hevo Data

Arsalan is a data science enthusiast with a keen interest in data analysis and architecture, and in writing highly technical content. He has written around 100 articles on various topics related to the data industry.

No-code Data Pipeline For Your Data Warehouse