Do you have enormous amounts of customer and marketing data in HubSpot? Are you looking to bring this data into a data warehouse such as BigQuery so that you can leverage the power of SQL to analyze it better? If yes, then you have landed on the right blog. 

This post walks through the process of moving data from HubSpot to Google BigQuery in two ways: manually, using HubSpot’s REST API, and automatically, using Hevo.

Understanding HubSpot

HubSpot’s Software-as-a-Service has a wide range of offerings that allow a business to modernize the process of attracting visitors, converting leads, and closing deals. Here are some of HubSpot’s features:

  1. Marketing Hub – use it to attract more of the right people by creating and sharing useful content, running tailored ad campaigns, and engaging with leads through personalized messages. 
  2. Sales Hub – gives your sales team everything they need to sell in more personalized, relevant, and efficient ways – better not just for your sales reps, but for your customers.
  3. Service Hub – helps you engage with your customers, guide them to solutions, and turn them into promoters who help grow your company even further. Word of mouth matters more than ever: happy customers stay longer and refer their friends, while unhappy ones leave and tell the world. If you want to grow, you need to invest in creating a great customer experience. 

HubSpot brings all these features together into a single shared view of all your customers. It empowers salespeople with the tools they need to generate leads, acquire, engage, and retain customers online.

Understanding Google BigQuery

Google BigQuery is a powerful cloud data warehouse platform that enables fast SQL queries against append-only tables using the processing power of Google’s infrastructure. By using Google BigQuery to store and analyze data points, businesses can work more productively to enhance their customer engagement. When you integrate your HubSpot CRM data into BigQuery, you can immediately tap into a rich set of big data analytics tools that help you unearth crucial lead and business insights.

Need to Move Data from HubSpot to BigQuery

There are broadly two reasons to move data from HubSpot to BigQuery.

  1. Data Backup: You store valuable customer data in HubSpot. There is a possibility that you lose access to it due to human error, canceled account subscriptions, and so on. It is therefore essential to back up your CRM data in a cloud data warehouse.
  2. Data Analytics: When you back up your HubSpot data to Google BigQuery tables, you get a highly scalable data store backed by cutting-edge data analytics tools. Using BigQuery, you can run custom SQL queries on your contact and campaign data and generate custom reports and dashboards. You can even combine this data with data from the other sources your business uses to make it even more valuable.

Note that you can export almost everything from HubSpot:

  • Sales data
  • Tasks
  • Calls-to-Action (CTA) analytics
  • Contact lists
  • CRM data
  • Marketing emails
  • Customer feedback
  • Form submission data
  • Notes about each contact, calls, meetings
  • Activity data (clicks, opens, views, URL redirects, etc.)

Reliably integrate data with Hevo’s Fully Automated No Code Data Pipeline

If yours is anything like the 1000+ data-driven companies that use Hevo, more than 70% of the business apps you use are SaaS applications. Integrating the data from these sources in a timely way is crucial to fuel analytics and the decisions taken from it. But given how quickly API endpoints and schemas can change, creating and managing these pipelines can be a soul-sucking exercise.

Hevo’s no-code data pipeline platform lets you connect 150+ sources in a matter of minutes and deliver data in near real-time to your warehouse. What’s more, the built-in transformation capabilities and the intuitive UI mean even non-engineers can set up pipelines and achieve analytics-ready data in minutes. 

All of this combined with transparent pricing and 24×7 support makes us the most loved data pipeline software in terms of user reviews.

Take our 14-day free trial to experience a better way to manage data pipelines.

Get started for Free with Hevo!


Prerequisites

Before following the manual method, make sure you have:

  1. A HubSpot account with admin access.
  2. A Google Cloud Platform account with billing enabled.
  3. The Google Cloud SDK installed on your machine.
  4. The Google Cloud SDK configured to point to your Google Cloud project.
  5. The BigQuery API enabled on your Google Cloud project.
  6. bigquery.admin permissions, which give you the ability to load data into tables in your BigQuery dataset.

Methods to Move Data from HubSpot to BigQuery

Download the Cheatsheet on How to Set Up High-performance ETL to BigQuery
Learn the best practices and considerations for setting up high-performance ETL to BigQuery

Method 1: Manual Method to Move Data from HubSpot to BigQuery

1. Obtain your HubSpot API key

  • Navigate to the main navigation bar in your HubSpot account and click the Settings icon.
  • In the left sidebar menu, go to Integrations > API key.
  • Click Generate API key.
  • If you had previously generated an API key, click Show to display your API key.

  • Record this key.
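With the key recorded, one option is to keep it out of scripts and shell history by exporting it as an environment variable. A minimal sketch, assuming a POSIX shell (the variable name HAPIKEY is our own choice, not a HubSpot convention):

```shell
# Export the HubSpot API key once per session so later commands can
# reference it without hard-coding the secret (HAPIKEY is an example name).
export HAPIKEY="your-hubspot-api-key"

# Later calls can then interpolate it, for example:
# curl "https://api.hubapi.com/contacts/v1/lists?hapikey=${HAPIKEY}"
echo "key is set: ${HAPIKEY:+yes}"
```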

2. Export HubSpot Data to JSON

API calls to HubSpot should be made to the https://api.hubapi.com base domain. JSON will be returned in all responses.

Get all your HubSpot contact lists and save the response to a local file (replace YOUR_API_KEY with the key you recorded earlier):

curl --request GET \
  --url 'https://api.hubapi.com/contacts/v1/lists?hapikey=YOUR_API_KEY' \
  -o contacts_data.json

A sample response:


  "lists": [
      "dynamic": true,
      "metaData": {
        "processing": "DONE",
        "size": 161,
        "error": "",
        "lastProcessingStateChangeAt": 1284705947989,
        "lastSizeChangeAt": 1579479579246
      "name": "test emails",
      "filters": [
            "filterFamily": "PropertyValue",
            "withinTimeMode": "PAST",
            "checkPastVersions": false,
            "type": "string",
            "property": "email",
            "value": "test",
            "operator": "STR_STARTS_WITH"
      "portalId": 55748,
      "createdAt": 1396370538484,
      "listId": 1,
      "updatedAt": 1492585836314,
      "listType": "DYNAMIC",
      "internalListId": 330881,
      "deleteable": true
      "dynamic": false,
      "metaData": {
        "processing": "DONE",
        "size": 2,
        "error": "",
        "lastProcessingStateChangeAt": 1175279425836,
        "lastSizeChangeAt": 1196347964521
      "name": "staticTest",
      "filters": [
      "portalId": 77123,
      "createdAt": 1253682414252,
      "listId": 3,
      "updatedAt": 1358942467316,
      "listType": "STATIC",
      "internalListId": 5
  "offset": 2,
  "has-more": true

Here are some of the other API calls you might use:

  • Analytics API
  • Automation API
  • Companies API
  • Events API
  • Email Events API
  • Engagements API
  • Marketing API
  • Products API
  • Social Media API
  • and more.

3. Create a Dataset and a BigQuery table

From your command line, run the following command to create a dataset named hubspot_dataset:

bq mk hubspot_dataset

4. Create an Empty Table in your hubspot_dataset.

bq mk \
--table \
--expiration 86400 \
--description "Contacts table" \
--label organization:development \
hubspot_dataset.contacts_table
The BigQuery API allows you to upload binary data using the following formats: AVRO, CSV, PARQUET, ORC, or NEWLINE_DELIMITED_JSON. Next, load data into your BigQuery table from your local machine.

The following command loads a newline-delimited JSON file contacts_data.json from your local machine into a table named contacts_table in the hubspot_dataset. You need to define the contacts schema in a local schema file named contacts_schema.json. Notice that since you’re loading files from your local machine, you need to specify the data format explicitly.

bq load \
--source_format=NEWLINE_DELIMITED_JSON \
hubspot_dataset.contacts_table \
./contacts_data.json \
./contacts_schema.json
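The contacts_schema.json file referenced here is an ordinary JSON array of field definitions. A minimal sketch, with field names chosen for illustration (your actual fields depend on which HubSpot properties you export):

```json
[
  {"name": "listId",    "type": "INTEGER", "mode": "NULLABLE"},
  {"name": "name",      "type": "STRING",  "mode": "NULLABLE"},
  {"name": "listType",  "type": "STRING",  "mode": "NULLABLE"},
  {"name": "createdAt", "type": "INTEGER", "mode": "NULLABLE"}
]
```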

Normally, you will want to back up your HubSpot data at least daily, or perhaps multiple times per day. You can append additional data to a BigQuery table from source files on your local machine by running the following command.

5. Appending to a Table Using a Local File

    bq load \
    --noreplace \
    --source_format=NEWLINE_DELIMITED_JSON \
    hubspot_dataset.contacts_table \
    ./contacts_data.json \
    ./contacts_schema.json

This command loads data from a file named contacts_data.json and appends data to a table named contacts_table in hubspot_dataset. The schema is defined using a JSON schema file — contacts_schema.json.

6. Scheduling Recurring Load Jobs

  • Create your bash shell script (we will call it hubspot_load.sh here).
 $ sudo mkdir /bin/scripts/ && sudo touch /bin/scripts/hubspot_load.sh

Add the following content to the file and save it.

#!/bin/bash
bq load \
--autodetect \
--replace \
--source_format=NEWLINE_DELIMITED_JSON \
hubspot_dataset.contacts_table \
./contacts_data.json
  • From your CLI, edit crontab.
$ crontab -e
  • You will be prompted to edit a file to introduce tasks to be run by cron. Schedule cron to Execute a Job at 6 pm Daily.
0 18 * * * /bin/scripts/hubspot_load.sh
  • Navigate to a directory where your file is located and make the file executable.
$ chmod +x /bin/scripts/hubspot_load.sh
  • The above crontab entry makes cron execute the script at 6 PM daily.
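Before handing the script to cron, it is worth a quick syntax check. A small sketch that writes the load script to a temporary path and validates it with bash -n (the /tmp path is illustrative, and the script contents mirror the example above):

```shell
# Write the load script to a temporary file for a dry syntax check.
cat > /tmp/hubspot_load.sh <<'EOF'
#!/bin/bash
bq load \
  --autodetect \
  --replace \
  --source_format=NEWLINE_DELIMITED_JSON \
  hubspot_dataset.contacts_table \
  ./contacts_data.json
EOF
chmod +x /tmp/hubspot_load.sh

# bash -n parses the script without executing it, so no bq job runs.
bash -n /tmp/hubspot_load.sh && echo "syntax OK"
```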

Limitations of the Manual Method to Move Data from HubSpot to BigQuery

  1. HubSpot’s APIs are subject to rate limiting. These limits are applied per account for custom integrations using API keys.
  2. Wildcards are not supported when you load files using this method. Each file must be loaded individually and this often becomes very inefficient and resource-intensive.
  3. Cron jobs can’t alert you if your load jobs fail or never start.
  4. Pulling data from HubSpot’s public endpoints using API keys is subject to daily limits.
  5. You need to specify separate schemas for data extracted from each API endpoint on BigQuery.
  6. This method does not allow you to have access to data in real-time. If your use case demands you to bring data in real-time, then this method is not suitable.
  7. If you need to clean, transform, and enrich your data before loading it to the data warehouse, then you would need to write additional code to achieve that. 

Method 2: Using Hevo to Move Data from HubSpot to BigQuery


Hevo is fully managed and completely automates the process of not only loading data from your desired source but also enriching the data and transforming it into an analysis-ready form without having to write a single line of code. Its fault-tolerant architecture ensures that the data is handled in a secure, consistent manner with zero data loss.

It provides a consistent & reliable solution to manage data in real-time and always have analysis-ready data in your desired destination. It allows you to focus on key business needs and perform insightful analysis using various BI tools such as Power BI, Tableau, etc. 

Hevo moves your data from HubSpot to BigQuery in two simple steps:

  • Configure Source: Connect Hevo Data with HubSpot by providing a unique name for your Pipeline, along with details about your authorized HubSpot account. You can also choose the historical sync duration for your HubSpot data.
Configuring HubSpot Source

  • Integrate Data: Complete HubSpot to BigQuery migration by providing your destination name, account name, region of your account, database username and password, database and schema name, and the Data Warehouse name.

Configuring the BigQuery Destination

Check Out What Makes Hevo Amazing:

  • Secure: Hevo has a fault-tolerant architecture that ensures that the data is handled in a secure, consistent manner with zero data loss.
  • Schema Management: Hevo takes away the tedious task of schema management & automatically detects schema of incoming data and maps it to the destination schema.
  • Minimal Learning: Hevo, with its simple and interactive UI, is extremely simple for new customers to work on and perform operations.
  • Hevo Is Built To Scale: As the number of sources and the volume of your data grows, Hevo scales horizontally, handling millions of records per minute with very little latency.
  • Incremental Data Load: Hevo allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
  • Live Monitoring: Hevo allows you to monitor the data flow and check where your data is at a particular point in time.


You have learned about two methods you can use to move data from HubSpot to BigQuery. The manual process requires significant configuration and effort, so check out Hevo, which does all the hard work for you in a simple, intuitive process so that you can focus on what matters: the analysis of your data. You can also check out Hubspot to Google Data Studio Integration.

Visit our Website to Explore Hevo

Hevo is a No-code Data Pipeline that offers 100+ pre-built integrations to choose from. Hevo can help you integrate data from numerous sources like HubSpot and load it into a destination to analyze real-time data with a BI tool such as Tableau. It helps transfer data from HubSpot to a destination of your choice for free. It will make your life easier and data migration hassle-free. It is user-friendly, reliable, and secure. Check out the pricing details here. Try Hevo by signing up for a 14-day free trial and see the difference! Watch this short overview video to get a sense of how Hevo works:

Sign Up for a 14-day free trial with Hevo to start moving data from HubSpot to BigQuery right now!

No-code Data Pipeline for BigQuery