Google BigQuery is a cloud-based enterprise Data Warehouse that allows users to run SQL queries quickly and analyze large datasets interactively. BigQuery's query engine is based on Google's Dremel technology.

Zendesk makes it possible for customer interactions from messaging, phone, chat, email, social media, and any other channel you can think of to all come together in one place.

This article explains how to connect BigQuery to Zendesk in a few simple steps. It also describes both Zendesk and BigQuery briefly.

What is Google BigQuery?

Google BigQuery is a highly scalable, serverless, multi-cloud Data Warehouse with a built-in query engine that enables analysis over petabytes of data. It was developed by Google and launched on 19th May 2010.

BigQuery is also described as a SQL-based Data Warehouse as a Service (DWaaS) with zero infrastructure management. Google provides a complete package to its users, with Big Data loading features through Google Cloud Storage and connectivity with various Google apps like Apps Script. Google BigQuery has many built-in features such as Machine Learning and AI capabilities, Geospatial Analysis, and Business Intelligence.

BigQuery stores data in a columnar storage format that is optimized for analytical queries. It presents data in tables, rows, and columns, with full support for database transaction semantics (ACID).

Key Features of Google BigQuery

  • BigQuery is a serverless warehouse with zero infrastructure management.
  • BigQuery offers virtually unlimited storage and can analyze petabyte-scale datasets in seconds.
  • Query languages like SQL allow users to query data through BigQuery.
  • BigQuery can analyze Big Data across multiple cloud platforms with BigQuery Omni.
  • BigQuery offers built-in Machine Learning and Artificial Intelligence capabilities.
  • Real-time analytics is possible with BigQuery. It ingests real-time data quickly and efficiently and makes it available for business analysis.
  • Data Visualization is available in a variety of graphic formats.
  • BigQuery provides information about location and mapping using BigQuery Geographic Information Systems (GIS).

What is Zendesk?

Zendesk is a cloud-based help desk management system that provides customizable tools to create online communities, knowledge bases, and customer service portals. The solution provides a front-end portal that can be customized, live chat capabilities, and integration with programs like Salesforce and Google Analytics. Many different vertical markets, from small to large, use Zendesk, including technology, government, media, and retail.

The online customer portal for Zendesk assists support staff in keeping track of opened tickets and their status. Customers can search existing tickets for answers to questions similar to their own, and if they’re still unsatisfied, they can create new tickets in the portal. Zendesk lets companies customize their support pages with company logos, themes, and brand images.

Key Features of Zendesk

Some of the Key Features of Zendesk CRM Software are as follows:

  • Internal Help Desk: Internal Help Desk Software from Zendesk helps you manage employee interactions and respond to queries and support requests quickly. It also includes tools that make your staff more productive by increasing morale and decreasing costly downtime.
  • Knowledge Base: Zendesk’s Knowledge Base has a wealth of valuable articles that address frequently asked questions. Users can also limit the contents of the Knowledge Base to a particular client group. It is also possible to translate business blog content into other languages to provide consumers with a more personalized experience.
  • Analytics and Reporting: Zendesk comes with an analytics tracking system and a Reporting module that analyzes data on Agent Performance, Tickets, and Customer Satisfaction in real time. Users can filter the data via the responsive Dashboard to draw fresh conclusions.
  • Triggers: Zendesk comes with pre-configured triggers that users may use or change, as well as the ability to construct unique custom triggers from scratch. The triggers can start a workflow based on particular actions or modifications made to a ticket.
  • Agent Collision Detection: This option allows businesses to check whether any other agents are looking at the same ticket at the same time. This indicator becomes active when agents start working on an issue or ticket. 

Benefits of connecting BigQuery to Zendesk

Connecting BigQuery to Zendesk makes it easier to operate a high-touch customer support model with current knowledge of user actions. The integration also syncs high-quality, modeled data from your warehouse to enrich user attributes, so customer and organization data are always in sync and up to date.

BigQuery to Zendesk ELT Integration

The main method to integrate BigQuery with Zendesk is:

Using CSV Files to Manually Connect BigQuery to Zendesk

To connect BigQuery to Zendesk manually, you first export your BigQuery data to CSV and then import the CSV into Zendesk.

BigQuery to CSV 

You can connect BigQuery to CSV using different methods. They are:

  • Using Cloud Console
  • Using bq Extract Command
  • Using Extract Job via Client Libraries (APIs)

Using Cloud Console

This is the easiest way to export a BigQuery table to CSV, as it does not require any technical knowledge. Follow the simple steps below:

  • Step 1: Open the BigQuery page in the Google Cloud Console.
  • Step 2: Navigate to the Explorer panel and select the desired table from your project.
  • Step 3: From the details panel, click on the Export option and select Export to Cloud Storage.
  • Step 4: An Export table to Google Cloud Storage dialog box will pop on your screen. Select the desired destination bucket, folder, or file for your Table Data to be stored as CSV. 
  • Step 5: From the drop-down menu under Export Format, select CSV. For Compression, select None or GZIP (the only compression type supported for CSV exports).
  • Step 6: Click on the Export button to complete the Export BigQuery Table to CSV process.

Using bq Extract Command

Using the bq command-line tool, you can export a BigQuery table to CSV with the command template given below:

bq --location=location extract \
--destination_format format \
--compression compression_type \
--field_delimiter delimiter \
--print_header=boolean \
project_id:dataset.table \
gs://bucket/filename.ext

The above code snippet has the following parameters:

  • Location: The location of your BigQuery data, e.g., 'US' for a multi-region dataset in the USA.
  • Destination Format: The format of the file you need in Google Cloud Storage (GCS).
  • Compression Type: The compression type supported by the chosen file format.
  • Delimiter: The character that marks the boundary between columns; for CSV exports this can be a comma or a tab.
  • Boolean: Whether to print a header row in the exported data; the default is true.
  • Project_ID, Dataset, Table: Details of the table you are exporting your data from.
  • Bucket, Filename.ext: The storage location of your file in GCS and the filename with the desired format.

Given below is an example that exports ‘maindataset.mytable’ to a CSV file named onefile.csv with GZIP compression, in a GCS bucket named bucket-one.

bq extract \
--compression GZIP \
maindataset.mytable \
gs://bucket-one/onefile.csv

Using Extract Job via Client Libraries (APIs)

It is also possible to Export BigQuery Table to CSV format using various programming environments such as C#, Go, Java, Node.js, PHP, Python, and Ruby. To begin, you need to install the Client Libraries and then start writing queries. For instance, you can use the following sample code in Python to Export BigQuery Table to CSV:

from google.cloud import bigquery

client = bigquery.Client()
bucket_name = "bucket-one"
project = "project-one"
dataset_id = "one"
table_id = "onefile"

destination_uri = "gs://{}/{}".format(bucket_name, "onefile.csv")
dataset_ref = bigquery.DatasetReference(project, dataset_id)
table_ref = dataset_ref.table(table_id)

extract_job = client.extract_table(
    table_ref,
    destination_uri,
    location="US",  # Location must match that of the source table.
)  # API request
extract_job.result()  # Waits for the job to complete.

print(
    "Exported {}:{}.{} to {}".format(project, dataset_id, table_id, destination_uri)
)

CSV to Zendesk

To migrate CSV data to Zendesk, follow these steps:

  • Your data must first be organized into the following CSV files:
    • a CSV file with tickets;
    • a CSV file with ticket comments;
    • a CSV file with Knowledge Base articles.
  • Keep the following specifications in mind as you prepare your files:
    • Each file should not be larger than 1GB.
    • Every file must have a heading row.
    • The delimiter should be a comma.
    • Give each attachment a publicly accessible URL so that it can be imported. Separate multiple attachments with commas before migrating them.
    • If a value spans multiple lines, wrap it in double-quote marks (“) in your CSV file; this applies to fields such as tags, attachments, and descriptions.
    • Heading row content must be written in English; the data you import may be in any language.
    • The first letter of each header must be capitalized.
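As a quick sanity check before uploading, the rules above can be sketched as a small validator. This is a hypothetical helper, not part of Zendesk or the migration tool:

```python
import csv
import io

# Hypothetical pre-upload checker for the CSV rules above:
# comma delimiter, a heading row, capitalized headers,
# and a 1GB size limit per file.
MAX_SIZE_BYTES = 1 * 1024 ** 3

def validate_csv_text(text):
    """Return a list of problems found in a CSV payload; empty means OK."""
    problems = []
    if len(text.encode("utf-8")) > MAX_SIZE_BYTES:
        problems.append("file larger than 1GB")
    reader = csv.reader(io.StringIO(text))  # comma delimiter by default
    try:
        header = next(reader)
    except StopIteration:
        return ["missing heading row"]
    for name in header:
        if not name or not name[0].isupper():
            problems.append("header '{}' must start with a capital letter".format(name))
    return problems
```

For example, `validate_csv_text("TicketId,Subject\n1,Hello\n")` returns an empty list, while a lowercase header such as `ticketId` is flagged.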

Prepare the Tickets.csv File

There should be a heading row with the following columns in the file containing the tickets:

  • TicketId: required;
  • Subject: required;
  • Tags: optional;
  • DepartmentName: optional;
  • Description: required;
  • Type: required;
  • Status: required;
  • Priority: required;
  • AgentEmail: required;
  • AgentName: required;
  • ContactEmail: required;
  • ContactName: required;
  • ContactPhone: optional;
  • CompanyName: optional;
  • CreatedTime: required. Use YYYY-MM-DD HH:MM:SS time format;
  • UpdatedTime: required. Use YYYY-MM-DD HH:MM:SS time format;
  • ClosedTime: optional. Use YYYY-MM-DD HH:MM:SS time format;
  • Attachments: optional.
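The layout above can be produced with Python's csv module. The snippet below is a minimal sketch that writes only the required columns, using made-up sample values and the YYYY-MM-DD HH:MM:SS time format:

```python
import csv
from datetime import datetime

# Required Tickets.csv columns from the list above (optional ones omitted).
TICKET_HEADERS = [
    "TicketId", "Subject", "Description", "Type", "Status", "Priority",
    "AgentEmail", "AgentName", "ContactEmail", "ContactName",
    "CreatedTime", "UpdatedTime",
]

def write_tickets(path, tickets):
    """Write a heading row plus one row per ticket dict."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=TICKET_HEADERS)
        writer.writeheader()
        writer.writerows(tickets)

# Made-up sample ticket; timestamps use the required format.
now = datetime(2023, 1, 5, 14, 30, 0).strftime("%Y-%m-%d %H:%M:%S")
write_tickets("Tickets.csv", [{
    "TicketId": "1",
    "Subject": "Cannot log in",
    "Description": "User reports a login failure.",
    "Type": "incident",
    "Status": "open",
    "Priority": "high",
    "AgentEmail": "agent@example.com",
    "AgentName": "Agent One",
    "ContactEmail": "user@example.com",
    "ContactName": "User One",
    "CreatedTime": now,
    "UpdatedTime": now,
}])
```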

Importing Custom Fields

By including a tag in the column header, you can migrate custom fields. The tags you should use are as follows:

  • _select – dropdown, e.g., product_select;
  • _date – time, e.g., due_date;
  • _integer – number, e.g., TicketInternal_integer.

The _textArea tag can be used to migrate different kinds of custom fields.
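A small sketch of how these header tags could be resolved programmatically; the tag names follow the list above, while the helper itself is hypothetical:

```python
# Map header tag suffixes to the custom field types listed above.
CUSTOM_FIELD_TAGS = {
    "_select": "dropdown",
    "_date": "date",
    "_integer": "number",
    "_textArea": "text area",
}

def custom_field_type(header):
    """Return the custom field type implied by a header's tag, or None."""
    for suffix, field_type in CUSTOM_FIELD_TAGS.items():
        if header.endswith(suffix):
            return field_type
    return None
```

For example, `custom_field_type("product_select")` returns "dropdown", while a plain header such as "Subject" returns None.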

Importing Ticket Comments

You must include your ticket comments in a separate file that has the following headers to import them with your tickets:

  • CommentId – optional;
  • TicketId – required;
  • AuthorEmail – required;
  • Public – required (use 0 for private and 1 for public comments);
  • Body – required;
  • CreatedTime – required. Use YYYY-MM-DD HH:MM:SS time format;
  • Attachments – optional. 

If a ticket has multiple comments, group them so that all comments with the same TicketId appear on consecutive rows.

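One way to produce this grouping is to sort the comment rows by TicketId (and creation time) before writing the file. The snippet below is a sketch using made-up sample data:

```python
import csv
from operator import itemgetter

# Comments.csv headers from the list above (CommentId and Attachments omitted).
COMMENT_HEADERS = ["TicketId", "AuthorEmail", "Public", "Body", "CreatedTime"]

# Made-up sample comments, deliberately out of order.
comments = [
    {"TicketId": "2", "AuthorEmail": "agent@example.com", "Public": "1",
     "Body": "Looking into it.", "CreatedTime": "2023-01-05 15:00:00"},
    {"TicketId": "1", "AuthorEmail": "user@example.com", "Public": "1",
     "Body": "Still broken.", "CreatedTime": "2023-01-05 14:45:00"},
    {"TicketId": "1", "AuthorEmail": "agent@example.com", "Public": "0",
     "Body": "Escalating internally.", "CreatedTime": "2023-01-05 14:50:00"},
]

# Sorting groups all comments that share a TicketId onto consecutive rows.
comments.sort(key=itemgetter("TicketId", "CreatedTime"))

with open("Comments.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COMMENT_HEADERS)
    writer.writeheader()
    writer.writerows(comments)
```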
Import Files to Zendesk

When your files are prepared, visit the Help Desk Migration website to start the import process.

Follow the steps below:

  • Step 1: Choose the CSV data source and add your files. If you do not need to import Knowledge Base articles, use the provided sample file to pass the source step, and simply skip the Knowledge Base objects later when choosing which data to migrate.

  • Step 2: Enter the URL for the Zendesk account for your company after selecting Zendesk as the target platform.
  • Step 3: Map ticket fields and choose which records you want to migrate.
  • Step 4: Start the Free Demo Migration to see if the data migrated successfully and to receive a quote. Review the outcomes of your migration using this guide as a reference.
  • Step 5: Complete your data transfer if you are happy with the results.
  • Step 6: After starting the migration of all records, you can close the tab and carry on with your day. You will receive an email once the import is finished.
  • Step 7: However, kindly refrain from merging, editing, or deleting any of the imported data until the migration is finished. Doing so could alter the data and cause irreversible changes.


This article explains the simple steps to connect BigQuery to Zendesk. It also gives an overview of BigQuery and Zendesk.

Hevo is the only real-time ELT No-code Data Pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integration with 150+ Data Sources (40+ free sources), we help you not only export data from sources & load data to the destinations but also transform & enrich your data, & make it analysis-ready.

Want to take Hevo for a spin? 

Get started for free with Hevo! Sign up for a 14-day free trial and experience the feature-rich Hevo suite first-hand. You can also have a look at the unbeatable pricing, which will help you choose the right plan for your business needs.

Harshitha Balasankula
Marketing Content Analyst, Hevo Data

Harshitha is a dedicated data analysis fanatic with a strong passion for data, software architecture, and technical writing. Her commitment to advancing the field motivates her to produce comprehensive articles on a wide range of topics within the data industry.
