How to Issue a BigQuery Dry Run? 2 Easy Methods

on Data Warehouse, Google BigQuery • March 24th, 2022


Organizations around the world deal with huge volumes of data daily and require powerful, flexible, reliable & secure solutions. Cloud-based Data Warehousing & Analytics platforms such as Google BigQuery offer an economical and effective way to manage growing data storage & processing needs. BigQuery offers best-in-class query performance that can effortlessly handle fluctuating workloads.

To use your resources optimally, it is good practice to do a Dry Run. A BigQuery Dry Run allows you to estimate your query costs and also validates the query itself. You can easily set up a BigQuery Dry Run using the Google Cloud Console, bq commands, or any of the programming languages supported by BigQuery.

In this article, you will learn about the BigQuery Dry Run feature and how to use it to estimate your costs via 2 simple methods.


What is Google BigQuery?


Google BigQuery is a Data Warehouse offered as a completely managed service by Google Cloud Platform. Completely managed services like BigQuery relieve organizations of the headache of setting up and maintaining a data warehouse on their premises. BigQuery provides a comprehensive SQL layer and can handle petabytes of data with ease. Like most completely managed services, BigQuery separates compute and storage, so you pay only for the storage and the compute you use.

BigQuery can also use its compute resources to process data stored in locations outside BigQuery, such as Google Cloud Storage. BigQuery supports multi-cloud analytics as well, letting you query data that resides in other cloud providers. On top of this, it offers a host of additional features like support for machine learning in the query layer, geospatial analysis, and natural language support in the querying layer.

Key Features of Google BigQuery

Google BigQuery has continuously evolved over the years and is offering some of the most intuitive features:

  • User Friendly: With just a few clicks, you can start storing and analyzing your data in BigQuery. An easy-to-understand interface with simple instructions at every step allows you to set up your cloud data warehouse quickly, as you don’t need to deploy clusters or configure storage size, compression, or encryption settings.
  • On-Demand Storage Scaling: With ever-growing data needs, you can rest assured that your storage will scale automatically when required. Built on Colossus (Google’s global storage system), BigQuery stores data in a columnar format and can work directly on compressed data without first decompressing it.
  • Real-Time Analytics: Stay updated with real-time data transfer and accelerated analytics, as BigQuery dynamically allocates resources to deliver the best performance and produce results on demand, so you can generate business reports whenever they are requested.
  • BigQuery ML: Armed with machine learning capabilities, you can effectively design and build data models using existing SQL Commands. This eliminates the need for technical know-how of machine learning and empowers your data analysts to directly evaluate ML models.
  • Optimization Tools: To boost your query performance, Google provides BigQuery partitioning and clustering features for faster results. You can also change the default dataset and table expiration settings to optimize storage costs and usage.
  • Secure: BigQuery allows administrators to set access permissions to the data by groups and individuals. You can also enable row-level security for access to certain rows of a dataset. Data is encrypted before being written on the disk as well as during the transit phase. It also allows you to manage the encryption keys for your data.
  • Google Environment: Maintained and managed by Google, BigQuery enjoys the easy and fluid integrations with various applications present in the Google Ecosystem. With little to no friction at all, you can connect BigQuery to Google Sheets and Google Data Studio for further analysis.

What is the BigQuery Dry Run Feature?

To understand the value provided by the BigQuery Dry Run feature, you need a good idea of BigQuery pricing. Since BigQuery is a completely managed service, you do not control the number of processing instances. Pricing is based on usage, with the option to reserve processing capacity at discounted rates.

BigQuery pricing is split into storage and analysis costs. Storage cost is fixed according to the total data you store. Analysis cost depends on the queries you run: pricing can be on-demand or flat-rate, depending on whether you choose to reserve capacity. On-demand pricing is based on the amount of data processed by your queries. Flat-rate pricing is based on the amount of query capacity you reserve, for any period from 60 seconds to 365 days, based on your preference.

On-demand pricing is where BigQuery Dry Runs can help. This pricing is based not on the complete data a table holds, but on the actual data the query will process. That amount varies with numerous factors, such as the columns referenced in your query and the clustering and partitioning configuration of the BigQuery table. A BigQuery Dry Run returns the amount of data the query would address if it were executed. It also validates the query, throwing the same error the query console would if the query were actually run. The response from the dry run can thus be used to estimate the cost of a query.

BigQuery on-demand analysis is charged at $5 per TB of data addressed by the query, with the first 1 TB of every month free.
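As a rough illustration of that arithmetic (assuming the $5 per TB on-demand rate and the 1 TB monthly free tier quoted above; actual rates may change, so check current pricing), the cost of a query can be estimated from the dry-run byte count like this:

```python
def estimate_query_cost_usd(bytes_processed: int,
                            free_tb_remaining: float = 1.0,
                            usd_per_tb: float = 5.0) -> float:
    """Estimate on-demand cost from a dry-run byte count.

    Assumes the $5/TB rate and 1 TB/month free tier described above;
    both are assumptions and may differ from current pricing.
    """
    tb = bytes_processed / 2**40                 # bytes -> TB (binary)
    billable_tb = max(tb - free_tb_remaining, 0.0)
    return billable_tb * usd_per_tb

# A query scanning 3.5 TB with the full free tier left:
print(estimate_query_cost_usd(int(3.5 * 2**40)))  # 12.5
```

Note that the first terabyte is subtracted before applying the rate, so a query that scans less than the remaining free tier costs nothing.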

Simplify Google BigQuery ETL using Hevo’s No-code Data Pipelines

Hevo Data, a No-code Data Pipeline, helps to transfer data from 100+ sources to a Data Warehouse like Google BigQuery or a Destination of your choice and visualize it in your desired BI tool. Hevo also supports Google BigQuery as a Source. Hevo is fully managed and completely automates the process of not only loading data from your desired source but also enriching the data and transforming it into an analysis-ready form, without you having to write a single line of code. Its fault-tolerant architecture ensures that the data is handled in a secure, consistent manner with zero data loss.

It provides a consistent & reliable solution to manage data in real-time and always have analysis-ready data in your desired destination. It allows you to focus on the key business needs and perform insightful analysis by using a BI tool of your choice.

Get Started with Hevo for Free

Check out what makes Hevo amazing:

  • Secure: Hevo has a fault-tolerant architecture that ensures that the data is handled in a secure, consistent manner with zero data loss.
  • Schema Management: Hevo takes away the tedious task of schema management & automatically detects schema of incoming data and maps it to the destination schema.
  • Minimal Learning: Hevo, with its simple and interactive UI, is extremely simple for new customers to work on and perform operations.
  • Hevo Is Built To Scale: As the number of sources and the volume of your data grows, Hevo scales horizontally, handling millions of records per minute with very little latency.
  • Incremental Data Load: Hevo allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
  • Live Monitoring: Hevo allows you to monitor the data flow and check where your data is at a particular point in time.
Sign up here for a 14-Day Free Trial!

Prerequisites

  • A Google Cloud Account.
  • Permissions to run a query job. Your account should have one of these three roles: ‘bigquery.admin’, ‘bigquery.jobUser’, or ‘bigquery.user’.
  • Permissions on the specific table against which you want to issue a dry run. Your account should have one of these three roles for that table: ‘bigquery.dataEditor’, ‘bigquery.dataOwner’, or ‘bigquery.dataViewer’.

How to do a BigQuery Dry Run?

A BigQuery Dry Run can be executed via the console, the bq command-line tool, or any of the programming SDKs that Google Cloud provides. You can follow the 2 methods given below to employ the BigQuery Dry Run feature:

Method 1: Issuing BigQuery Dry Run through the Console 

The Google Cloud console doesn’t have a separate mechanism to execute a dry run; the result of the dry run is displayed automatically while the query is validated. Follow the easy steps given below to issue your BigQuery Dry Run using the Console:

  • Step 1: Head to the BigQuery page in the cloud console by typing BigQuery in the search box. 
BigQuery Dry Run - Search BigQuery
  • Step 2: From the projects listed in the left-hand side tab, select the relevant one and click on the table against which you want to issue a dry run. The right-hand tab will now display the schema of the table.
BigQuery Dry Run - table Schema
  • Step 3: Click on the ‘QUERY’ button to bring up the querying editor. 
BigQuery Dry Run - Query button
  • Step 4: In the query editor window, type the required query. Let’s go with a simple select statement here. You will notice that a green tick and the amount of data processed will be displayed in the top right corner of the query editor as below.
BigQuery Dry Run - Dry Run Result

This is the result of the dry run being displayed. The value from this output can be used to estimate the cost of the query.

Method 2: Issuing BigQuery Dry Run using bq Commands

While using the bq command-line tool, a dry run can be performed by appending the --dry_run flag to the bq query command. For example, to find the size of the data a SELECT query would process, the bq command to execute looks as shown below:

bq query \
--use_legacy_sql=false \
--dry_run \
'SELECT
  name,
  age
FROM
  `project_id`.dataset.test_table1
LIMIT
  1000'

Once executed, this will return the number of bytes that would be processed by the query. If the query has any problems, it returns the same error that the query processor would have returned had the query actually been executed.
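The same dry run can also be issued from code. Below is a minimal sketch using the google-cloud-bigquery Python client (one of the SDKs mentioned earlier); it requires the client library to be installed and authenticated credentials to be available, and the query string with its table name is just a placeholder:

```python
def dry_run_bytes(sql: str) -> int:
    """Return the number of bytes a query would process, without running it."""
    # Assumes `pip install google-cloud-bigquery` and Application
    # Default Credentials; imported here so the sketch is inspectable
    # even where the library is absent.
    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    job = client.query(sql, job_config=job_config)  # no data is processed
    return job.total_bytes_processed

# Example with a placeholder table name:
# dry_run_bytes(
#     "SELECT name, age FROM `project_id`.dataset.test_table1 LIMIT 1000")
```

Setting dry_run=True in the job configuration makes the service validate the query and report the byte count instead of executing it, mirroring what the console and bq methods above return.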

Limitations of BigQuery Dry Run Feature

While BigQuery Dry Run offers a free, simple, and straightforward way to estimate the costs, it comes with certain limitations. 

  • Dry Run does not work reliably with external tables. This is because there is no way for the command to know the exact data present in an external table until the query is executed. A dry run will still return a result, but that result cannot be relied upon.
  • Dry Run returns results in bytes, while the Google Cloud pricing calculator uses MBs to estimate cost. So there is an additional step of converting the value to MBs before the pricing calculator can use it.
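That conversion step is trivial to script. The helper below assumes binary megabytes (MiB, 2^20 bytes); if the pricing calculator expects decimal megabytes, divide by 10^6 instead:

```python
def bytes_to_mb(n_bytes: int) -> float:
    """Convert a dry-run byte count to megabytes (binary MiB assumed here)."""
    return n_bytes / 2**20

# 65,011,712 bytes is exactly 62 MiB:
print(bytes_to_mb(65011712))  # 62.0
```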

Conclusion

In this article, you have learned how to easily do a BigQuery Dry Run using 2 different methods. A BigQuery Dry Run provides an easy way to validate your query and estimate its cost by considering the amount of data that will be processed. The response from the BigQuery Dry Run can also be used to verify that the query you created matches your requirements: an abnormally low or high data size that does not match the business requirement you are trying to handle is a red flag indicating something wrong in your query.

As you collect and manage your data across several applications and databases in your business, it is important to consolidate it for complete performance analysis of your business. However, it is a time-consuming and resource-intensive task to continuously monitor the Data Connectors. To achieve this efficiently, you need to assign a portion of your engineering bandwidth to Integrate data from all sources, Clean & Transform it, and finally, Load it to a Cloud Data Warehouse like Google BigQuery, BI Tool, or a destination of your choice for further Business Analytics. All of these challenges can be comfortably solved by a Cloud-based ETL tool such as Hevo Data.   

Visit our Website to Explore Hevo

Hevo Data is a No-code Data Pipeline that can seamlessly transfer data from a vast sea of 100+ sources to a Data Warehouse like Google BigQuery, BI Tool, or a Destination of your choice. Hevo also supports Google BigQuery as a Source. It is a reliable, completely automated, and secure service that doesn’t require you to write any code!  

If you are using Google BigQuery as your Data Warehousing & Analytics platform and searching for a no-fuss alternative to Manual Data Integration, then Hevo can effortlessly automate this for you. Hevo, with its strong integration with 100+ sources and BI tools(Including 40+ Free Sources), allows you to not only export & load data but also transform & enrich your data & make it analysis-ready in a jiffy.

Want to take Hevo for a ride? Sign Up for a 14-day free trial and simplify your Data Integration process. Do check out the pricing details to understand which plan fulfills all your business needs.

Tell us about your experience of setting up a BigQuery Dry Run! Share your thoughts with us in the comments section below.

No-code Data Pipeline for Google BigQuery