BigQuery TRUNCATE TABLE Command Made Easy

By Nicholas Samuel | Published: December 21, 2021


Today, data is a key component in every organization. The reason is that organizations have realized that data is a very valuable commodity. Data can help enterprises to unearth unknown patterns and trends, which can in turn help to support the decision-making process in the enterprise. This can lead to improved productivity and profitability. 

Google BigQuery is a Cloud Data Warehouse solution that provides businesses with a Data Storage Platform. It also provides a Query Engine that they can use to query their stored data. BigQuery is scalable, allowing organizations to scale their storage up and down as their needs change. When using BigQuery, you will sometimes want to delete data from your tables, for example to free up storage. There are different ways to do this, some of which are free while others incur a cost. In this article, we will discuss how to delete data from a BigQuery table using the BigQuery TRUNCATE TABLE command. 

This is what you need for this article:

  • A Google BigQuery account. 

What is Google BigQuery?


Google BigQuery is a Highly Scalable & Serverless Data Warehouse with a built-in Query Engine. It was developed by Google, hence, it uses the processing power of Google’s infrastructure. The Query Engine can run SQL queries on terabytes of data within seconds, and petabytes within minutes. 

BigQuery gives you this performance without the need to maintain the infrastructure or rebuild or create indexes. BigQuery’s speed and scalability make it suitable for use in processing huge datasets. It also comes with built-in Machine Learning Capabilities that can help you to understand your data better. 

With BigQuery, you can achieve the following:

  • Democratize insights with a scalable and secure platform that comes with Machine Learning features. 
  • Improve business decision-making from data using a multi-cloud and flexible analytics solution. 
  • Adapt to data of any size, from bytes to petabytes, with no operational overhead. 
  • Run large-scale analytics.  

BigQuery also allows you to create dashboards and reports that you can use to analyze your data and gain meaningful insights from it. It is a powerful tool for Real-time Data Analytics as well. Find more information on Google BigQuery here: BigQuery Tutorial: A Comprehensive Guide.

Simplify BigQuery ETL and Analysis with Hevo’s No-code Data Pipeline

A fully managed No-code Data Pipeline platform like Hevo Data helps you integrate and load data from 100+ different sources (including 40+ free sources) to a Data Warehouse such as Google BigQuery or a Destination of your choice in real-time in an effortless manner. Hevo, with its minimal learning curve, can be set up in just a few minutes, allowing users to load data without having to compromise performance. Its strong integration with a wide range of sources allows users to bring in data of different kinds in a smooth fashion without having to write a single line of code. 

Get Started with Hevo for Free

Check out some of the cool features of Hevo:

  • Completely Automated: The Hevo platform can be set up in just a few minutes and requires minimal maintenance.
  • Transformations: Hevo provides preload transformations through Python code. It also allows you to run transformation code for each event in the Data Pipelines you set up. You need to edit the event object’s properties received in the transform method as a parameter to carry out the transformation. Hevo also offers drag and drop transformations like Date and Control Functions, JSON, and Event Manipulation to name a few. These can be configured and tested before putting them to use.
  • Connectors: Hevo supports 100+ integrations to SaaS platforms, files, Databases, analytics, and BI tools. It supports various destinations including Google BigQuery, Amazon Redshift, Snowflake Data Warehouses; Amazon S3 Data Lakes; and MySQL, SQL Server, TokuDB, DynamoDB, PostgreSQL Databases to name a few.  
  • Real-Time Data Transfer: Hevo provides real-time data migration, so you can have analysis-ready data always.
  • 100% Complete & Accurate Data Transfer: Hevo’s robust infrastructure ensures reliable data transfer with zero data loss.
  • Scalable Infrastructure: Hevo has in-built integrations for 100+ sources (including 40+ free sources) that can help you scale your data infrastructure as required.
  • 24/7 Live Support: The Hevo team is available round the clock to extend exceptional support to you through chat, email, and support calls.
  • Schema Management: Hevo takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the destination schema.
  • Live Monitoring: Hevo allows you to monitor the data flow so you can check where your data is at a particular point in time.
Sign up here for a 14-Day Free Trial!


What is the BigQuery TRUNCATE TABLE Command?

Initially, BigQuery supported little more than the SELECT statement. However, many improvements have since been made to this Data Warehousing solution, and it now supports additional DML and DDL statements. 

BigQuery now supports the BigQuery TRUNCATE TABLE command, which will be familiar to those coming from an on-premise database background. The statement is used to delete data from a BigQuery table. When used, the BigQuery TRUNCATE TABLE command removes all data from a table but leaves the table's metadata intact, including the table schema, labels, and description. 

When you use BigQuery's DELETE DML statement to delete data from a table, you incur a scan cost. The good news is that the BigQuery TRUNCATE TABLE statement falls under the category of BigQuery free operations, so you don't incur any cost when using it. Thus, to reduce scan costs, prefer the BigQuery TRUNCATE TABLE command over DELETE when you want to delete all table contents. 
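For example, both of the statements below empty a table (the table name is a placeholder), but only the DELETE incurs a scan cost:

-- DELETE is a DML statement, so BigQuery charges for the bytes scanned.
DELETE FROM `project.dataset.customer` WHERE true;

-- TRUNCATE TABLE is a free operation: it removes all rows but keeps
-- the table schema, labels, and description.
TRUNCATE TABLE `project.dataset.customer`;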

BigQuery TRUNCATE TABLE Command Syntax

The BigQuery TRUNCATE TABLE command takes the following syntax:

TRUNCATE TABLE [[project_name.]dataset_name.]table_name

The parameters in the above syntax are described below:

  • project_name: This is the name of the project that contains the table. The project that runs this DDL (Data Definition Language) query is used as the default. 
  • dataset_name: This is the name of the dataset that contains the table. 
  • table_name: This is the name of the table that is to be truncated. 

Note: The BigQuery TRUNCATE TABLE command cannot be used to truncate models, views, materialized views, or external tables. Also, note that BigQuery quotas and limits apply to TRUNCATE TABLE statements. 

If you are using a partitioned table and the table requires a partition filter, the BigQuery TRUNCATE TABLE command will fail. To truncate the partitioned table, first update the table to remove the partition filter requirement, then execute the BigQuery TRUNCATE TABLE command. The command will then run successfully. 
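For example, assuming a hypothetical partitioned table named `project.dataset.sales`, you could remove the partition filter requirement and then truncate the table as follows:

-- Remove the partition filter requirement (table name is a placeholder).
ALTER TABLE `project.dataset.sales`
  SET OPTIONS (require_partition_filter = false);

-- The truncate will now succeed.
TRUNCATE TABLE `project.dataset.sales`;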

BigQuery TRUNCATE TABLE Example Queries

In this section, we will be giving practical examples of how to use the BigQuery TRUNCATE TABLE command. First, let’s create the table we will use to demonstrate this. 

We will create a table named customer that stores data about the details of customers. We will use the SQL `CREATE TABLE` command for this as shown below:

CREATE TABLE `project.dataset.customer` (
    id INT64,
    name STRING,
    country STRING,
    ts TIMESTAMP
);

The table will store details like customer id, name, country, and the time they made a purchase. 

Now that the table is ready, let us insert some data into it. We will use the SQL INSERT statement for this as shown below:

INSERT `project.dataset.customer` (id, name, country, ts)
VALUES(2347, 'John Doe', 'Scotland', TIMESTAMP("2021-12-01"));

The above statement will insert one row into the table. Let us insert 3 other rows into the table:

INSERT `project.dataset.customer` (id, name, country, ts)
VALUES(23348, 'Mercy Michaels', 'Netherlands', TIMESTAMP("2021-12-01")),
      (23349, 'Joel Pogba', 'USA', TIMESTAMP("2021-12-03")),
      (23350, 'Kate Drogba', 'Nigeria', TIMESTAMP("2021-12-05"));

We now have 4 rows in the table. 

To delete the table data, we can use the BigQuery TRUNCATE TABLE statement. This is demonstrated below:

TRUNCATE TABLE `project.dataset.customer`;

The command will delete all rows from the table. 
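You can verify that the table is now empty with a simple count query, which should return 0:

SELECT COUNT(*) AS row_count
FROM `project.dataset.customer`;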

If you would like to learn more about the BigQuery Alter Table Command, check out our other article here: BigQuery Alter Table Command: Key Types, Syntax & Example Queries Simplified 101.


Conclusion

The BigQuery TRUNCATE TABLE command is an essential operation for reducing query scan costs in BigQuery. It removes all rows from a table but leaves the table metadata intact, including the table schema, description, and labels. We recommend using TRUNCATE TABLE over DELETE whenever you need to delete the entire contents of a table.

As a Google BigQuery Data Warehouse user, your organization might be dealing with anywhere from megabytes to petabytes of data, and analysis at that scale can get tough. Hevo Data, a No-code Data Pipeline, provides you with a consistent and reliable solution to manage data transfer between a variety of sources and a wide variety of desired destinations such as Google BigQuery, with a few clicks.

Hevo Data with its strong integration with 100+ sources (including 40+ free sources) allows you to not only export data from your desired data sources & load it to Google BigQuery, but also transform & enrich your data to make it analysis-ready so that you can focus on your key business needs and perform insightful analysis using BI tools.

Visit our Website to Explore Hevo

Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You may also have a look at the pricing, which will assist you in selecting the best plan for your requirements.

Share with us your experience of understanding the BigQuery TRUNCATE TABLE statement in the comment section below! We would love to hear your thoughts.

Nicholas Samuel
Freelance Technical Content Writer, Hevo Data

Skilled in freelance writing within the data industry, Nicholas is passionate about unraveling the complexities of data integration and data analysis through informative content for those delving deeper into these subjects. He has written more than 150 blogs on databases, processes, and tutorials that help data practitioners solve their day-to-day problems.
