Snowflake is a widely used cloud-based data warehouse that stores and analyzes massive amounts of data efficiently. It delivers high-performance data processing and is also cost-effective. However, as with any IT service, it is good practice to monitor how much you spend and where you can cut expenses to streamline your enterprise budget. If your organization uses Snowflake heavily, adopting Snowflake cost optimization techniques helps you use the platform in the most resource-efficient way.

This article explains Snowflake cost optimization in a comprehensive way and provides a detailed series of practices for efficiently utilizing Snowflake services at optimal prices.

Snowflake Pricing Structure

Snowflake cost optimization involves minimizing your expenses while leveraging the services of the platform. To do this, you should first understand Snowflake’s pricing model. You can break down the pricing structure depending on your usage of data transfer, compute, and storage resources as follows:

  • Compute Resources: Compute usage is billed in Snowflake credits. The cost is calculated by multiplying the number of credits consumed by the price of one credit.
  • Storage Resources: To store data in Snowflake, you are charged a monthly rate per terabyte (TB), based on the average number of bytes stored on disk daily in your Snowflake account. The current rates depend on your account type (Capacity or On Demand) and your region (US or EU).
  • Data Transfer Resources: Snowflake does not charge for data transferred into its system but charges for sending data out of the system. It charges according to the per-byte transfer of data from Snowflake to an external destination on the same cloud platform or a different cloud platform. 

To optimize Snowflake usage costs, identify the resources on which you are overspending and reduce that usage for effective budgeting.

Best Practices for Snowflake Cost Optimization

You can achieve Snowflake cost optimization through the following best practices:

Optimize Snowflake Warehouse Size

Snowflake Warehouse Sizes

Snowflake virtual warehouses are available in ten sizes. The larger the size, the higher the usage cost, because running queries consumes more credits. Warehouses are made up of compute nodes: an X-Small has a single node, a Small has two, a Medium has four, and so on, doubling with each size. Each node has eight threads for processing data.

For Snowflake cost optimization, you should start with a smaller warehouse and take a trial-and-error approach to finding a suitable size. To use Snowflake warehouses mindfully, follow the steps below.

  • Group Together Similar Workloads in Similar Virtual Warehouses
Consolidation of Data Workloads

You should group similar workloads in similar virtual warehouses to implement the same customized configuration settings, such as auto-suspend, auto-resume, scaling policy, clusters, and statement timeout. It helps to optimize costs, as you do not need to configure every setting for similar workloads separately. 

For example, Snowflake’s auto-suspend feature suspends a warehouse after 600 seconds of inactivity by default. You can reduce this to as low as 60 seconds so that an idle warehouse suspends quickly and stops consuming credits. For Snowflake cost optimization, group together the workloads whose warehouses you want suspended after 60 seconds, as sketched below.
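
As a minimal sketch, assuming a hypothetical warehouse name and settings, a consolidated warehouse for similar BI workloads could be created with its shared configuration in one statement:

```sql
-- Hypothetical consolidated warehouse for similar BI workloads; the shared
-- settings are configured once rather than per workload.
CREATE WAREHOUSE IF NOT EXISTS bi_reporting_wh
  WAREHOUSE_SIZE = 'XSMALL'  -- start small; resize only if metrics justify it
  AUTO_SUSPEND = 60          -- suspend after 60 seconds of inactivity
  AUTO_RESUME = TRUE;        -- wake automatically when a query arrives
```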

  • Leverage Data SLAs to Define Workloads and Value to Business

You should talk with your organization’s senior teams and other important stakeholders to set up clear data SLAs (Service Level Agreements) beforehand. The discussion helps you understand their expectations from the datasets and related parameters. Once you know the demands of all the stakeholders, you can design the workloads accordingly and select only those Snowflake services that you need for the setup.  

  • Set Resource Monitors
Setting Up Resource Monitor

After setting up workloads, you should start with small warehouses and adjust by trial and error. Once you know the warehouse size that suits your workload, set up resource monitors to keep consumption in check. A resource monitor automatically sends notifications or suspends activity when consumption reaches a defined threshold. As a good practice, set the threshold slightly below your budgeted limit so that you receive a warning before capacity constraints disrupt the warehouse’s operations.
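
A minimal sketch of a resource monitor, assuming a hypothetical 100-credit monthly quota and the warehouse name from the earlier sketch:

```sql
-- Hypothetical monitor: notify at 80%, suspend at 95%, hard-stop at 100%.
CREATE RESOURCE MONITOR IF NOT EXISTS monthly_wh_monitor
  WITH CREDIT_QUOTA = 100
  FREQUENCY = MONTHLY
  START_TIMESTAMP = IMMEDIATELY
  TRIGGERS
    ON 80 PERCENT DO NOTIFY
    ON 95 PERCENT DO SUSPEND
    ON 100 PERCENT DO SUSPEND_IMMEDIATE;

-- Attach the monitor to a warehouse.
ALTER WAREHOUSE bi_reporting_wh SET RESOURCE_MONITOR = monthly_wh_monitor;
```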

Optimize Snowflake Queries

Query optimization is essential for Snowflake cost optimization because you need credits to query data in Snowflake. You can achieve this by accurately structuring your queries with the help of the following steps:

  • Use Query Tags

Your team should consistently use the QUERY_TAG session parameter to tag SQL statements in Snowflake so you can quickly spot discrepancies in your data pipelines. You can then stop the execution of the offending queries, fix the issue, and prevent increased usage charges.
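
As a sketch (the tag value, table, and time window are hypothetical), tags are set per session and can later be aggregated from the query history:

```sql
-- Tag all statements in this session with a hypothetical pipeline label.
ALTER SESSION SET QUERY_TAG = 'pipeline:daily_orders_load';

SELECT COUNT(*) FROM orders;  -- this query is recorded with the tag

-- Attribute elapsed time by tag over the last 7 days.
SELECT query_tag,
       SUM(total_elapsed_time) / 1000 AS total_seconds
FROM snowflake.account_usage.query_history
WHERE start_time > DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY query_tag
ORDER BY total_seconds DESC;
```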

  • Identify Expensive and Inefficient Queries
Regulate Query Executions

You should identify queries that are expensive or slow to execute in the Snowflake environment. For this, you can use Snowflake’s Query Profile and QUERY_HISTORY features to understand how your queries behave and stop unwanted executions. You can also use statement timeout settings to prevent long-running queries: Snowflake automatically cancels a query that runs beyond the configured timeout.
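
A sketch of both ideas, using the ACCOUNT_USAGE schema and the hypothetical warehouse name from earlier:

```sql
-- Surface the longest-running queries of the past week.
SELECT query_id, user_name, warehouse_name,
       total_elapsed_time / 1000 AS elapsed_seconds
FROM snowflake.account_usage.query_history
WHERE start_time > DATEADD('day', -7, CURRENT_TIMESTAMP())
ORDER BY total_elapsed_time DESC
LIMIT 20;

-- Cancel anything on this warehouse that runs longer than 30 minutes.
ALTER WAREHOUSE bi_reporting_wh SET STATEMENT_TIMEOUT_IN_SECONDS = 1800;
```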

Optimize Snowflake Tables

Querying large tables in Snowflake can be expensive. You should eliminate unused tables from your Snowflake account to reduce costs. You can do this in the following ways:

  • Optimal Use of Native Snowflake Features for Efficient Table Querying

The native features of Snowflake, like table clustering and materialized views, can help you optimize the Snowflake table. 

Micro-partitions are small units of data storage in Snowflake tables. When data is ingested, it is automatically clustered within micro-partitions, but this natural clustering degrades over time as a table grows. To keep related data together, you can use Snowflake’s table clustering feature, which defines cluster keys to co-locate similar rows in the same micro-partitions. Cluster keys are a subset of a table’s columns or expressions. It is advisable to use cluster keys only when you need faster query performance on large tables and the benefit justifies the cost, or when the savings offset the credits consumed by reclustering.
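
A sketch under assumed table and column names:

```sql
-- Hypothetical: cluster a large sales table so date- and region-filtered
-- queries prune micro-partitions instead of scanning the whole table.
ALTER TABLE sales CLUSTER BY (sale_date, region);

-- Inspect how well the table is clustered on those keys.
SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(sale_date, region)');
```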

Snowflake’s Enterprise edition provides a materialized view feature for faster query execution. A materialized view is a pre-computed dataset derived from a query specification (the SELECT in the view definition) and stored for later use. It is especially useful for complex and frequently used queries. However, extensive usage can increase the overall cost of Snowflake, since keeping materialized views up to date consumes credits. Before creating one, check that this additional cost is less than the cost of repeatedly executing the same queries.
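
For example, a frequently run aggregation could be pre-computed as follows (table and column names are hypothetical):

```sql
-- Hypothetical materialized view that pre-computes a common aggregation.
CREATE MATERIALIZED VIEW daily_revenue AS
SELECT sale_date, SUM(amount) AS revenue
FROM sales
GROUP BY sale_date;
```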

  • Drop Unused Tables

Unused tables can consume a significant share of your Snowflake storage spend. You should remove tables that you no longer use or that do not add value to your business. Unused tables can be identified and removed using Snowflake’s Enterprise edition. You can also use the TABLE_STORAGE_METRICS view and rank tables by total billable bytes to find and remove the unused tables with the highest storage costs.
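
A sketch that computes billable bytes per table from TABLE_STORAGE_METRICS (the dropped table name is hypothetical):

```sql
-- Rank tables by billable storage (active + Time Travel + Fail-safe +
-- bytes retained for clones).
SELECT table_catalog, table_schema, table_name,
       active_bytes + time_travel_bytes + failsafe_bytes
         + retained_for_clone_bytes AS total_billable_bytes
FROM snowflake.account_usage.table_storage_metrics
WHERE deleted = FALSE
ORDER BY total_billable_bytes DESC
LIMIT 25;

-- Drop a table once you confirm it is unused.
DROP TABLE IF EXISTS staging.old_events;
```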

Reduce Query Frequency

You should avoid running the same queries frequently to prevent increased Snowflake usage costs. For example, the image below estimates the cost your organization can save by reducing query frequency.

Query Run Frequency Costs

Lower Data Retention

Snowflake’s Time Travel feature allows you to store past versions of your datasets. While this provides easy access to historical data, it also increases storage costs. Decide how much historical data you actually need and adjust the Time Travel retention period accordingly to remove unnecessary data from Snowflake storage.
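
For instance (the table name is hypothetical), you could shorten retention on a high-churn staging table:

```sql
-- Keep only one day of Time Travel history for a high-churn staging table.
ALTER TABLE staging.events SET DATA_RETENTION_TIME_IN_DAYS = 1;
```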

Use Transient Tables

Snowflake’s fail-safe feature provides access to historical data for seven days after the Time Travel retention period ends. Because fail-safe adds to storage costs, you should rely on it only as a last resort, when all other data recovery options are exhausted. For easily reproducible data, you can instead use transient tables, which have no fail-safe period yet persist beyond the current session until you drop them.
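
A sketch with hypothetical names:

```sql
-- Transient table for reproducible intermediate data; it has no Fail-safe
-- period, so it accrues less storage cost than a permanent table.
CREATE TRANSIENT TABLE staging.daily_snapshot
  DATA_RETENTION_TIME_IN_DAYS = 0
AS
SELECT * FROM raw.events
WHERE event_date = CURRENT_DATE();
```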

Use Files of Optimal Size

Optimal File Size Usage

You should keep the files you load into Snowflake in the range of 100-250 MB (compressed). You can split a large file into smaller ones to improve load parallelism. For instance, suppose you want to load a 1 GB file using a Small warehouse, which has 16 threads (two nodes with eight threads each). Loading the entire file occupies a single thread and leaves the rest idle, whereas splitting it into ten smaller files lets Snowflake load them on ten threads in parallel. This makes much better use of the compute resources you are paying for.
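
A sketch of loading pre-split files in parallel; the stage, table, and file pattern are hypothetical:

```sql
-- Load pre-split, compressed files (~100-250 MB each) in parallel.
COPY INTO raw.events
FROM @my_stage/events/
FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP')
PATTERN = '.*events_part_[0-9]+\\.csv\\.gz';
```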

Leverage Access Control

You should regulate access control in your Snowflake account to prevent unexpected cost increases from accidental modifications. For example, someone may increase the virtual warehouse size for a particular operation and forget to reduce it afterward. To avoid such scenarios, implement stricter access controls so that only a limited set of users can modify warehouses and data.
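
A sketch using hypothetical role names, granting analysts usage of a warehouse while reserving the right to resize it:

```sql
-- Analysts can use the warehouse but cannot resize it; only the admin
-- role keeps the MODIFY privilege.
GRANT USAGE ON WAREHOUSE bi_reporting_wh TO ROLE analyst_role;
REVOKE MODIFY ON WAREHOUSE bi_reporting_wh FROM ROLE analyst_role;
GRANT MODIFY ON WAREHOUSE bi_reporting_wh TO ROLE wh_admin_role;
```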

Snowflake Cost Optimization Practices Based on User Role

To use the platform judiciously, you can follow the practices below based on your level as a beginner, intermediate, or expert Snowflake administrator.

Beginner

  • Track Usage: It is recommended that you track Snowflake usage at multiple granularities, such as organization, workload, warehouse, user, and task. You can access this information through the ACCOUNT_USAGE and ORGANIZATION_USAGE views (see the sketch after this list).
  • Use UI-based Cost Exploration: To visualize Snowflake usage, you can go to the Usage interface to access dashboards displaying detailed usage trends and timelines of your usage data. You can also use the Snowflake Usage Insights Streamlit App to visualize usage data through scatter plots or flame graphs. 
  • Switch to Snowpark-Optimized Warehouses for Memory-Intensive Workloads: You can use the Snowpark-optimized warehouse type for machine learning and memory-intensive data analytics. Compared to a standard warehouse, a Snowpark-optimized warehouse provides 16x more memory and 10x more local SSD cache per VM, increasing computational speed.
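
As a sketch of the first practice above, daily credit consumption per warehouse can be pulled from the ACCOUNT_USAGE schema:

```sql
-- Daily credit consumption per warehouse over the last 30 days.
SELECT warehouse_name,
       DATE_TRUNC('day', start_time) AS usage_day,
       SUM(credits_used) AS credits
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time > DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name, usage_day
ORDER BY usage_day, credits DESC;
```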

Intermediate

  • Use Tags to Implement a Chargeback or Showback Model: You should track Snowflake usage by specific teams, business units, or projects within your organization through tagging. A tag is a schema-level object in Snowflake that is created once and applied to different objects as required. It can help you monitor resource usage and implement chargeback or showback practices to hold each business unit accountable for its Snowflake usage (see the tag sketch after this subsection).
  • Use Multi-Cluster Warehouses: When you send too many queries to a Snowflake warehouse simultaneously, the compute resources get exhausted, and the extra queries are queued until resources become available. Queued queries can increase your usage cost as they wait for processing in the warehouse. To reduce queuing, you should use multi-cluster warehouses.

Snowflake allows you to specify the maximum and minimum number of clusters for any warehouse. The additional clusters start to function automatically when the query load increases beyond the normal limit and stop functioning when the query load decreases. Each cluster functions independently and handles different queries to optimize the warehouse’s performance.

While using a multi-cluster warehouse, you can opt for the standard scaling policy if you prefer faster querying over saving credits. For Snowflake cost optimization, you can choose the economy policy, which conserves credits by fully loading existing clusters before starting new ones, though this may affect query performance.
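
A sketch of a multi-cluster warehouse with the economy policy (the name and cluster limits are hypothetical):

```sql
-- Scales out to up to 4 clusters under load and back in when load drops;
-- ECONOMY favors fully loading clusters over starting new ones.
CREATE WAREHOUSE IF NOT EXISTS analytics_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY = 'ECONOMY'
  AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE;
```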
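
And the tag sketch promised above, with hypothetical names, showing a tag created once and then applied for chargeback reporting:

```sql
-- Create a cost-attribution tag once, then apply it to objects.
CREATE TAG IF NOT EXISTS cost_center;
ALTER WAREHOUSE analytics_wh SET TAG cost_center = 'marketing';

-- List tagged objects for chargeback reporting (this view has some latency).
SELECT object_name, domain, tag_value
FROM snowflake.account_usage.tag_references
WHERE tag_name = 'COST_CENTER';
```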

Expert

  • Run Advanced Queries for Deeper Spend Visibility: To gain deeper insights into your Snowflake usage expenses, you should run advanced queries against your account metadata. You can use the “Most Expensive Queries” query to identify the most expensive queries run in the last 30 days, and the “Partner Tools Consuming Credits” query to identify Snowflake partner tools that consume the most credits. Additionally, the “Replication Cost History” query reports the credits consumed by replication over the last 30 days, and the “Replication History & 7-Day Average” query shows the average credits consumed by replication over the last week.
  • Do Metrics-Guided Warehouse Rightsizing: Metrics on warehouse load, warehouse utilization, data spilling, and disk or object storage can help you optimize compute costs. These metrics reveal which compute resources are essential, so you can retain those and scale down the rest (see the sketch after this list).
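
A rightsizing sketch based on warehouse load history (the warehouse name is hypothetical): sustained low running load suggests downsizing, while heavy queuing suggests scaling out or up.

```sql
-- Average running and queued query load per day for the last two weeks.
SELECT TO_DATE(start_time) AS usage_day,
       AVG(avg_running) AS avg_running_load,
       AVG(avg_queued_load) AS avg_queued_load
FROM snowflake.account_usage.warehouse_load_history
WHERE warehouse_name = 'ANALYTICS_WH'
  AND start_time > DATEADD('day', -14, CURRENT_TIMESTAMP())
GROUP BY usage_day
ORDER BY usage_day;
```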

Role of Hevo in Snowflake Cost Optimization

Data integration can play an important role in Snowflake cost optimization in the following ways:

  • Regulating Data Volume: Data integration involves consolidating data from multiple sources to a unified destination. When integrating data with Snowflake, you can follow efficient data integration practices such as filtration, transformation, or data compression. These practices help you regulate the amount of data integrated into Snowflake, which is a deciding factor in storage costs. 
  • Query Efficiency: Data integration affects the efficiency of query execution in Snowflake. A well-structured dataset can result in faster query execution and consumption of fewer Snowflake credits. 
  • Load Frequency: Hevo enables you to transfer data periodically to the data warehouse destinations according to the destination’s loading frequency. You can leverage this feature while loading data to Snowflake using Hevo. For this, you can classify your data as low or high latency according to its importance and then transfer it to different destinations within the same Snowflake instance. 

The low latency data can be transferred to a high load frequency destination, and the less important data can be sent to a lower load frequency destination. This helps reduce the usage of Snowflake storage resources and facilitates cost optimization. 

Additionally, secure data integration can reduce the chances of data breaches, thereby reducing some preventable expenditures beforehand. You can leverage data integration tools like Hevo Data, which offers robust data encryption and transformation features for secure and seamless Snowflake integration.  

Hevo Data is a no-code ELT platform that provides real-time data integration and a cost-effective way to automate your data pipeline workflow. With 150+ source connectors, you can integrate data from multiple platforms, conduct advanced analysis on your data, and produce useful insights.

Some other important features of Hevo Data that make it suitable for Snowflake cost optimization are:

  • Data Transformation: Hevo Data lets you transform your data for analysis with simple drag-and-drop and Python-based transformations. You can leverage this feature to minimize the use of Snowflake’s compute resources for data transformation.
  • Automated Schema Mapping: Hevo Data automatically arranges the destination schema to match the incoming data, improving data quality by standardizing the data structure. You can take advantage of this feature to regulate the utilization of storage and compute resources for cost optimization. 
  • Incremental Data Load: Hevo’s incremental data loading capabilities facilitate loading only new or updated data to the target Snowflake data warehouse. You can use it to eliminate the storage and querying of unnecessary data, reducing storage and compute resource usage expenses. 

Thus, data integration is essential to Snowflake cost optimization strategy, and achieving it through the Hevo Data platform can simplify your expenditure management efforts. 

Learn More About:

Understanding Snowflake Scaling Policy

Parting Thoughts

This blog provides you with in-depth information on Snowflake cost management and a comprehensive list of best practices for Snowflake cost optimization. Since cost optimization is an ongoing process and data integration is a vital part of Snowflake’s cost-effective usage, we suggest you give Hevo Data a try!

FAQs

  1. How to calculate Snowflake costs?

You can calculate the total monthly Snowflake costs by using the following formula:

storage cost + (minutes used × cost per node-minute × nodes per cluster), summed over each cluster.

Fill in the node and cluster counts from your warehouse configuration, and estimate the number of minutes each cluster runs per month. Plugging these values into the formula gives an approximate monthly Snowflake expense.
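
For a rough illustration with purely hypothetical rates: suppose a Small warehouse (one cluster of two nodes) runs for 1,000 minutes in a month at $0.05 per node-minute. Compute comes to 1,000 × $0.05 × 2 = $100; adding, say, $40 of storage brings the estimated monthly total to about $140.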

  2. Does Snowflake charge for storing data in cache?

Snowflake does not charge you for data stored in its cache. Cached results can be reused by later queries at any time, saving time, processing power, and Snowflake usage costs.

Sarthak Bhardwaj
Customer Experience Engineer, Hevo

Sarthak is a skilled professional with over 2 years of hands-on experience in JDBC, MongoDB, REST API, and AWS. His expertise has been instrumental in driving Hevo's success, where he excels in adept problem-solving and superior issue management. Sarthak's technical proficiency and strategic approach have consistently contributed to optimizing operations and ensuring seamless performance, making him a vital asset to the team.