There are only two hard things in Computer Science: cache invalidation and naming things.

Phil Karlton

You have finally chosen Snowflake as your centralized data warehousing solution to bring in data from all your data sources. Now comes the “organizing Snowflake” part: choosing the types of accounts, databases, and schemas, and, following the Snowflake naming conventions, picking names for your objects that describe their contents and use. This is essential because it promotes consistency among users and lets you navigate straight to the right object when troubleshooting an error.

But what are these naming conventions for Snowflake objects? Well, look no further! We have compiled all the best practices you will need. Whether you want to rename objects in your existing data warehouse or are planning one for the very first time, this 7-minute read covers all the Snowflake naming convention best practices you need.

Snowflake Naming Conventions for Account-Level Objects

[Image: Snowflake objects]

Account-level objects are the objects that live outside any database: users, roles, warehouses, and the databases themselves. Commonly, you might run several environments in a single account. Hence, as good practice in Snowflake naming conventions, you can add an environment prefix, such as SIT (System Integration Testing), UAT (User Acceptance Testing), or QA (Quality Assurance), to the object name as a differentiator. For instance, you can use DEV, TEST, and PRD for development, testing, and production environments. Including these prefixes in your database names is especially recommended.
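As a minimal sketch, assuming a hypothetical project called HEVODATA (the name used in the examples throughout this article), the same staging database could be created once per environment:

    -- Hypothetical names: the environment prefix is the only differentiator
    CREATE DATABASE DEV_HEVODATA_STAGING_DB;    -- development
    CREATE DATABASE UAT_HEVODATA_STAGING_DB;    -- user acceptance testing
    CREATE DATABASE PRD_HEVODATA_STAGING_DB;    -- production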

[Image: Different environments]

Now that you have a general idea of adding environment prefixes to object names, let’s go deeper into the Snowflake naming conventions for the individual objects:

User Level

Different from the login name, a username is a unique identifier for a user. There are two types of accounts: the user account, used by a real person, and the service account, used by applications and system services.

| Account Type | Naming Convention | Description | Example | Permissions |
| --- | --- | --- | --- | --- |
| User | [Email] | Email address within the organization | user@company.com | USERADMIN |
| Service | [ENV]_[PROJECT/PROGRAM]_[APP_CODE]_USER | Environment, project/program short code, application name or short code, _USER postfix | TEST_HEVODATA_APP_USER | USERADMIN |
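For illustration, a service user following this convention could be created as below; the name comes from the example above, and the properties shown are just one reasonable configuration:

    USE ROLE USERADMIN;
    CREATE USER TEST_HEVODATA_APP_USER
      COMMENT = 'Service account for the app in the TEST environment'
      MUST_CHANGE_PASSWORD = FALSE;  -- service accounts usually authenticate with key pairs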

Role Level

[Image: Different roles]

Roles are the entities to which privileges on securable objects are granted; assigning roles to users allows them to execute the actions their business functions require. For user roles, environment prefixes are generally not used, because you can control access to multiple environments through grants. For service roles, however, you should use prefixes so that access is not shared between environments.

| Account-Level Object | Naming Convention | Description | Example | Permissions |
| --- | --- | --- | --- | --- |
| Role | [PROJECT/PROGRAM]_[ROLE_NAME]_ROLE | Project name, role of the user in the project, _ROLE postfix | HEVODATA_ANALYST_ROLE, HEVODATA_DEVELOPER_ROLE | USERADMIN |
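As a sketch of why user roles can skip the environment prefix, the same role can simply be granted access in each environment (the database names are hypothetical):

    USE ROLE USERADMIN;
    CREATE ROLE HEVODATA_ANALYST_ROLE;
    CREATE ROLE HEVODATA_DEVELOPER_ROLE;

    USE ROLE SECURITYADMIN;
    GRANT USAGE ON DATABASE DEV_HEVODATA_STAGING_DB  TO ROLE HEVODATA_DEVELOPER_ROLE;
    GRANT USAGE ON DATABASE TEST_HEVODATA_STAGING_DB TO ROLE HEVODATA_DEVELOPER_ROLE;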

Warehouse Level

When creating a virtual warehouse with the desired resources, such as CPU, memory, and temporary storage, you can follow this Snowflake naming convention:

| Account-Level Object | Naming Convention | Description | Example | Permissions |
| --- | --- | --- | --- | --- |
| Virtual Warehouse | [ENV]_[PROJECT/PROGRAM]_[ROLE_NAME]_[WH_SIZE] | Environment, project name, user role, warehouse size | DEV_HEVODATA_DBT_WH_XL | SYSADMIN |
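A minimal sketch of a warehouse created under this convention, using the example name from the table (the suspension settings are illustrative):

    USE ROLE SYSADMIN;
    CREATE WAREHOUSE DEV_HEVODATA_DBT_WH_XL
      WAREHOUSE_SIZE = 'XLARGE'
      AUTO_SUSPEND = 60            -- suspend after 60 idle seconds to save credits
      AUTO_RESUME = TRUE
      INITIALLY_SUSPENDED = TRUE;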

Database Level

A database in your Snowflake account can hold multiple schemas. You can follow this Snowflake naming convention for your databases:

| Account-Level Object | Naming Convention | Description | Example | Permissions |
| --- | --- | --- | --- | --- |
| Database | [ENV]_[PROJECT/PROGRAM]_[DATA_LAYER]_DB | Environment, project name, data layer name, _DB postfix | DEV_HEVODATA_STAGING_DB | SYSADMIN |
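For instance, the staging-layer database from the example could be created like this (the COMMENT is optional but helps document the data layer):

    USE ROLE SYSADMIN;
    CREATE DATABASE DEV_HEVODATA_STAGING_DB
      COMMENT = 'Staging data layer for the HEVODATA project, DEV environment';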

Snowflake Naming Conventions for Schema-Level Objects

When you name schema-level objects, you can leave out the environment prefix, since there usually aren’t multiple environments inside a single database. It is, however, good practice to add a prefix that describes the schema’s purpose:

  • LND: To indicate a landing schema used to hold newly ingested data.
  • RAW: To indicate a raw staging area.
  • INT: To indicate an integration area where raw data is combined and cleaned before analysis.
  • MRT: To indicate a data mart holding conformed and cleaned data ready for reporting.
  • WRK: To indicate workbench schemas, which provide a sandbox for data analysts.

As a standard, you can go with the following Snowflake Naming Conventions for schemas:

| Naming Convention | Description | Example | Permissions |
| --- | --- | --- | --- |
| [DATA_GROUPING] | Grouping by source name | DEV_HEVODATA_RAW.HUBSPOT | CREATE SCHEMA |
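Putting the database and schema conventions together, a raw-layer database could hold one schema per source, as in the example above (the SALESFORCE schema is an added hypothetical):

    CREATE SCHEMA DEV_HEVODATA_RAW.HUBSPOT;      -- raw data ingested from HubSpot
    CREATE SCHEMA DEV_HEVODATA_RAW.SALESFORCE;   -- raw data ingested from Salesforce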

The table below covers the naming convention for each schema-level object in detail:

| Object Type | Naming Convention | Example |
| --- | --- | --- |
| Table | [TABLE_NAME] | DEVICE |
| View | [VIEW_NAME] | DATE |
| Integration | [NAME]_[TYPE]_[SUB_TYPE]_INT | SALESFORCE_API_AWS_INT |
| File Format | [NAME]_[FORMAT]_FF | SALESFORCE_XML_FF |
| Stage | [NAME]_[TYPE]_STG | SALESFORCE_S3_STG |
| Snowpipe | [NAME]_PIPE | SALESFORCE_PIPE |
| Stored Procedure | [NAME]_SP | SALESFORCE_SP |
| Stream | [NAME]_STREAM | SALESFORCE_STREAM |
| Task | [NAME]_TASK | SALESFORCE_TASK |
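As a sketch of these postfixes in practice, a few of the objects above could be created as follows; the bucket URL is a placeholder, and DEVICE is the example table from this section:

    CREATE FILE FORMAT SALESFORCE_XML_FF TYPE = XML;

    CREATE STAGE SALESFORCE_S3_STG
      URL = 's3://my-bucket/salesforce/'              -- hypothetical bucket
      FILE_FORMAT = (FORMAT_NAME = 'SALESFORCE_XML_FF');

    CREATE PIPE SALESFORCE_PIPE AS
      COPY INTO DEVICE FROM @SALESFORCE_S3_STG;

    CREATE STREAM SALESFORCE_STREAM ON TABLE DEVICE;  -- tracks changes to DEVICE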

Snowflake Naming Conventions Constraints

When naming objects such as tables, views, and columns, you also have to check that the names satisfy Snowflake’s identifier constraints:

  • A name can contain up to 255 characters.
  • An unquoted name must begin with a letter (A through Z) or an underscore (_) and can contain only letters, digits, underscores, and dollar signs ($).
  • Unquoted names are stored in uppercase and matched case-insensitively. To make a name case-sensitive, or to use otherwise disallowed characters, enclose it in double quotes; all references to a quoted name must then also be enclosed in double quotes.
  • An unquoted name cannot be a reserved word in Snowflake such as WHERE or VIEW.
  • A name cannot be the same as that of another Snowflake object of the same type in the same container (for example, two tables in the same schema).
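The case-sensitivity rule trips up many users, so here is a quick sketch of the difference (the table and column names are arbitrary):

    CREATE TABLE device (id INT);     -- unquoted: stored and resolved as DEVICE
    SELECT * FROM DeViCe;             -- works; unquoted references are case-insensitive

    CREATE TABLE "Device" (id INT);   -- quoted: the exact case is preserved
    SELECT * FROM "Device";           -- quotes are required on every reference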

Bringing It All Together

Awesome, you have finally named all your Snowflake objects. These intuitive Snowflake naming conventions are easy to adopt and let you learn a lot about an object from its name alone. But one challenge still haunts every engineering team: you have the data store, but you still need to bring in data from multiple sources. Once new data connector requests start coming in, you will have to invest 40-60% of your bandwidth in creating and maintaining custom data pipelines.

On top of your primary engineering goals, you need to be on the lookout for any data leakage and fix it as a priority. This eventually becomes an expensive task in both time and resources.

No worries! There are already cloud-based solutions that completely automate data integration without requiring you to write any code. For instance, you can hop onto a smooth ride with a no-code ETL tool like Hevo Data and enjoy 150+ plug-and-play integrations.


There is no need to go to your Snowflake data warehouse for post-load transformations. You can simply run complex SQL transformations from the comfort of Hevo’s interface and get your data in its final, analysis-ready form.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and simplify your data integration process. Check out the pricing details to understand which plan fulfills all your business needs.

Sanchit Agarwal
Former Research Analyst, Hevo Data

Sanchit Agarwal is a data analyst at heart with a passion for data, software architecture, and writing technical content. He has experience writing more than 200 articles on data integration and infrastructure. He finds joy in breaking down complex concepts in simple and easy language, especially those related to database migration techniques and the challenges of data replication.
