Easily move your data from HubSpot to BigQuery to enhance your analytics capabilities. With Hevo’s intuitive pipeline setup, data flows in real time. Check out our 1-minute demo below to see the seamless integration in action!
Need a better way to handle all that customer and marketing data in HubSpot? Transfer it to BigQuery. Simple! Want to know how?
This article explains how to transfer your HubSpot data into Google BigQuery, either manually through HubSpot’s API or with an automated ETL tool like Hevo Data that keeps the process running smoothly.
What is HubSpot?
HubSpot is an excellent cloud-based platform for blending different business functions like sales, marketing, and support. It features five hubs: Marketing, Sales, Service, CMS, and Operations. The Marketing hub is used for campaign automation and lead generation, while the Sales hub helps automate sales pipelines and gives an overview of all contacts at a glance. The Service hub lets you build a knowledge base, collect customer feedback, and construct interactive support pages.
What is BigQuery?
Google BigQuery is a fully managed, serverless enterprise cloud data warehouse. It uses Dremel technology, which transforms SQL queries into tree structures, and delivers outstanding query performance owing to its column-based storage system. BigQuery offers multiple features: the built-in BigQuery Data Transfer Service moves data automatically, BigQuery ML runs machine learning models, BigQuery GIS enables geospatial analysis, and BigQuery BI Engine accelerates query processing, making it a powerful tool for any data analysis task.
Sync your HubSpot Data to BigQuery in 2 Steps
Use Cases to transfer your data from HubSpot to BigQuery
- Moving HubSpot data to BigQuery creates a single source of truth that aggregates information to deliver accurate analysis. Therefore, you can promptly understand customers’ behavior and improve your decision-making concerning business operations.
- BigQuery can manage huge amounts of data with ease. As your business expands and produces more data, BigQuery scales with you.
- BigQuery, built on Google Cloud, has robust security features like auditing, access controls, and data encryption, so your data stays secure and compliant.
- BigQuery’s flexible pricing model can lead to major cost savings compared to having an on-premise data warehouse you pay to maintain.
What Type of Data can you Transfer from HubSpot to BigQuery?
Before you migrate your data to BigQuery, you should ensure that it is in a format that BigQuery supports, such as newline-delimited JSON or CSV (the full list appears in Step 5 of the manual method below).
You should also make sure that the data types you are using are supported by BigQuery, such as the following:
- STRING
- INTEGER
- FLOAT
- BOOLEAN
- RECORD
- TIMESTAMP
Here’s a list of the data that you can move from HubSpot to BigQuery:
- Activity data (clicks, views, opens, URL redirects, etc.)
- Calls-to-action (CTA) analytics
- Contact lists
- CRM data
- Customer feedback
- Form submission data
- Marketing emails
- Sales data
Prerequisites
When moving your HubSpot data to BigQuery manually, make sure you have the following set up:
- A Google Cloud Platform account with billing enabled.
- Admin access to a HubSpot account.
- The Google Cloud CLI installed.
- Your Google Cloud project connected to the Google Cloud SDK.
- The Google Cloud BigQuery API enabled.
- BigQuery Admin permissions for loading data into BigQuery.
These steps ensure you’re all set for a smooth data migration process!
Easily transfer your HubSpot data to BigQuery with Hevo’s intuitive platform. Our no-code solution makes data integration and migration seamless and efficient.
Why choose Hevo?
- Auto-Schema Mapping: Automatically handle schema mapping for a smooth data transfer.
- Flexible Transformations: Utilize drag-and-drop tools or custom Python scripts to prepare your data.
- Real-Time Sync: Ensure your BigQuery warehouse is always up-to-date with real-time data ingestion.
Join over 2000 satisfied customers, including companies like Freight Tiger and Space Next Door, who trust Hevo for their data integration needs. Check out why Hevo is rated 4.7 stars on Capterra.
Get Started with Hevo for Free
Methods to move data from HubSpot to BigQuery
Method 1: The Best Way of Loading Data from HubSpot to BigQuery: Using Hevo Data
You can opt for a SaaS alternative such as Hevo Data to avoid the hassle of the manual setup. You can configure Hevo Data to set up your HubSpot to BigQuery integration in three easy steps.
Step 1: Set up HubSpot as a Source Connector
- To connect your HubSpot account as a source in Hevo, search for HubSpot.
- Configure your HubSpot Source.
- Give your pipeline a name, configure your HubSpot API Version, and specify the Historical Sync Duration you want, such as the past three months or six months. You can also choose to load all of your historical data.
For example, I will select three months and then click on Continue.
Next, your objects will be fetched, and you can select them per your requirements. By default, all of your objects are selected, but you can narrow the selection. For example, I will select only my contacts. You can also search for objects using the Search icon at the top right of the panel. Then click Continue.
Step 2: Set up BigQuery as a Destination Connector
- Select BigQuery as your destination.
- Configure your destination by giving a Destination Name, selecting your type of account, i.e., User Account or Service Account, and mentioning your Project ID. Then click on Save & Continue.
NOTE: As the last step, you can add a Destination Table Prefix, which will be reflected in your destination. For example, if you enter ‘hs’, all the tables loaded into BigQuery from HubSpot will be named ‘hs_original-table-name’. Manually flattening nested JSON is a tedious process, so Hevo Data provides two options: load JSON fields as JSON strings, or collapse nested arrays into strings. Select either one and click Continue.
Once you’re done, your HubSpot data will be loaded into Google BigQuery.
Step 3: Sync your HubSpot Data to BigQuery
In the pipeline lifecycle, you can observe your source being connected, data being ingested, prepared for loading into BigQuery, and finally, the actual loading of your HubSpot data.
As you can see above, HubSpot is now connected to BigQuery. Once all events have loaded, your final page will resemble this. The interface makes it easy to adjust your load or ingestion schedule, and after creating your pipeline you can add any object for historical load, or for ingestion only. On the same platform, you can also perform further alterations to your data, such as changing schemas or running ad-hoc analyses immediately after data loads. Our support team is on standby for any queries you may have.
Method 2: Move Data from HubSpot to BigQuery Using a HubSpot Private App
Step 1: Creating a Private App
1. a) Go to the Settings of your HubSpot account and select Integrations → Private Apps. Click on Create a Private App.
1. b) On the Basic Info page, provide basic app details:
- Enter your app name or click on Generate a new random name.
- You can also upload a logo by hovering over the logo placeholder, or by default, the initials of your private app name will be your logo.
- Enter the description of your app, or leave it empty as you wish. However, it is best practice to provide an apt description.
1. c) Click on the Scopes tab beside the Basic Info button. You can grant Read permissions, Write permissions, or both.
Suppose I want to transfer only the contact information stored in my HubSpot account to BigQuery. I will select only the Read configurations, as shown in the attached screenshot.
Note: If a scope covers sensitive data, a warning message will also be shown, as below.
1. d) Once you have configured your permissions, click the Create App button at the top right.
1. e) After selecting the Continue Creating button, a prompt screen with your Access token will appear.
Once you click on Show Token, you can Copy your token.
Note: Keep your access token handy; we will require that for the next step. Your Client Secret is not needed.
Step 2: Making API Calls with your Access Token
- Open up your command line and type in:
curl --request GET --url https://api.hubapi.com/contacts/v1/lists/all/contacts/all --header "Authorization: Bearer (Your_Token)" --header "Content-Type: application/json"
Just replace (Your_Token) with your actual access token.
- Here’s what the response will look like:
{
  "contacts": [
    {
      "vid": 33068263516,
      "canonical-vid": 33068263516,
      "merged-vids": [],
      "portal-id": 46584864,
      "is-contact": true,
      "properties": {
        "firstname": { "value": "Sam from Ahrefs" },
        "lastmodifieddate": { "value": "1719312534041" }
      }
    }
  ]
}
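Each property in the response above is nested under a `value` key, while BigQuery’s newline-delimited JSON format expects one flat object per line. Here is a minimal Node.js sketch of that flattening step; the field names mirror the sample response, so adapt them to the properties you actually sync:

```javascript
// Flatten a HubSpot v1 contact record into a flat object
// suitable for newline-delimited JSON (one object per line).
function flattenContact(contact) {
  const row = { vid: contact.vid, portal_id: contact['portal-id'] };
  for (const [name, prop] of Object.entries(contact.properties || {})) {
    row[name] = prop.value; // unwrap the { value: ... } envelope
  }
  return row;
}

// Convert a full API response into an NDJSON string for `bq load`.
function toNdjson(response) {
  return response.contacts
    .map((contact) => JSON.stringify(flattenContact(contact)))
    .join('\n');
}
```

In practice you would write the result to disk with `fs.writeFileSync('contacts_data.json', toNdjson(response))`, producing the file loaded in Step 5 below.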
NOTE: If you prefer not to use the curl command, use JavaScript. To get all the contacts created in your HubSpot account with Node.js and Axios, your request will look like this:
axios.get('https://api.hubapi.com/crm/v3/objects/contacts', {
headers: {
'Authorization': `Bearer ${YOUR_TOKEN}`,
'Content-Type': 'application/json'
}
})
.then((response) => {
// Handle the API response
})
.catch((error) => {
console.error(error);
});
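These endpoints return results in pages, so a single call won’t fetch every contact. Below is a cursor-following sketch for the v3 endpoint used above; the page-fetching function is injected as a parameter (our illustrative pattern, not an official HubSpot snippet), which also keeps the loop testable without network access:

```javascript
// Follow v3 cursor pagination: keep requesting pages until the
// response no longer includes a `paging.next.after` cursor.
// `fetchPage` is any async function that takes a cursor (or
// undefined for the first page) and resolves to one response page.
async function fetchAllContacts(fetchPage) {
  const results = [];
  let after;
  do {
    const page = await fetchPage(after);
    results.push(...page.results);
    after = page.paging && page.paging.next ? page.paging.next.after : undefined;
  } while (after);
  return results;
}
```

In practice, `fetchPage` would wrap the axios call shown above, passing `params: { after, limit: 100 }` and returning `response.data`.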
Remember, private app access tokens are built on OAuth. You can also authenticate calls using any HubSpot client library. For instance, with the Node.js client library, you pass your app’s access token like this:
const hubspotClient = new hubspot.Client({ accessToken: YOUR_TOKEN });
Step 3: Create a BigQuery Dataset
- From your Google Cloud command line, run this command:
bq mk hubspot_dataset
hubspot_dataset is just a name I have chosen; you can change it accordingly. The changes will automatically be reflected in your Google Cloud console, and a message “Dataset ‘united-axle-389521:hubspot_dataset’ successfully created.” will be displayed in your CLI.
NOTE: Instead of using the Google command line, you can also create a dataset from the console. Just hover over View Actions on your project ID. Once you click it, you will see a Create Dataset option.
Step 4: Create an Empty Table
- Run the following command in your Google CLI:
bq mk \
--table \
--expiration 86400 \
--description "Contacts table" \
--label organization:development \
hubspot_dataset.contacts_table
- After your table is successfully created, a message “Table ‘united-axle-389521:hubspot_dataset.contacts_table’ successfully created” will be displayed. The changes will also be reflected in the cloud console.
NOTE: Alternatively, you can create a table from your BigQuery Console. Once your dataset has been created, click on View Actions and select Create Table.
After selecting Create Table, a table overview pane will appear on the right of your screen. You can create an empty table or load one from your local machine or from a source such as Drive, Google Cloud Storage, Google Bigtable, Amazon S3, or Azure Blob Storage.
Step 5: Adding Data to your Empty Table
Before you load any data into BigQuery, you’ll need to ensure it’s in a format that BigQuery supports. For example, if the API you’re pulling data from returns XML, you’ll need to transform it into a format BigQuery understands. Currently, these are the data formats supported by BigQuery:
- Avro
- JSON (newline delimited)
- CSV
- ORC
- Parquet
- Datastore exports
- Firestore exports
You also need to ensure that your data types are compatible with BigQuery. The supported data types include:
- ARRAY
- BOOLEAN
- BYTES
- DATE
- DATETIME
- GEOGRAPHY
- INTERVAL
- JSON
- NUMERIC
- RANGE
- STRING
- STRUCT
- TIME
- TIMESTAMP
See the documentation’s “Data types” and “Introduction to loading data” pages for more details.
The bq load command is your go-to for uploading data to your BigQuery dataset, defining the schema, and providing data type information. Run it once per table to load all your tables into BigQuery.
Here’s how you can load a newline-delimited JSON file contacts_data.json from your local machine into hubspot_dataset.contacts_table:
bq load \
--source_format=NEWLINE_DELIMITED_JSON \
hubspot_dataset.contacts_table \
./contacts_data.json \
./contacts_schema.json
Since you’re loading files from your local machine, you must specify the data format explicitly. You can define the schema for your contacts in the local schema file contacts_schema.json.
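For reference, a BigQuery schema file is a plain JSON array of field definitions. A hypothetical contacts_schema.json matching the contact fields shown earlier might look like this (field names and types are illustrative; adjust them to the properties you export):

```json
[
  { "name": "vid", "type": "INTEGER", "mode": "REQUIRED" },
  { "name": "portal_id", "type": "INTEGER", "mode": "NULLABLE" },
  { "name": "firstname", "type": "STRING", "mode": "NULLABLE" },
  { "name": "lastmodifieddate", "type": "STRING", "mode": "NULLABLE" }
]
```

HubSpot returns timestamps as epoch-millisecond strings, so they are kept as STRING here; you can cast them to TIMESTAMP later in SQL if needed.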
Step 6: Scheduling Recurring Load Jobs
6. a) First, create a directory for your scripts and an empty backup script:
$ sudo mkdir -p /bin/scripts && sudo touch /bin/scripts/backup.sh
6. b) Next, add the following content to the backup.sh file and save it. Use the absolute path to your contacts_data.json file, since cron jobs don’t run from your working directory:
#!/bin/bash
bq load --autodetect --replace --source_format=NEWLINE_DELIMITED_JSON hubspot_dataset.contacts_table /path/to/contacts_data.json
6. c) Let’s edit the crontab to schedule this script. From your CLI, run:
$ crontab -e
6. d) You’ll be prompted to edit a file where you can schedule tasks. Add this line to schedule the job to run at 6 PM daily:
0 18 * * * /bin/scripts/backup.sh
6. e) Finally, make the backup.sh script executable:
$ sudo chmod +x /bin/scripts/backup.sh
And there you go! These steps ensure that cron runs your backup.sh script daily at 6 PM, keeping your data in BigQuery up-to-date.
Limitations of the Manual Method to Move Data from HubSpot to BigQuery
- HubSpot APIs have a rate limit of 250,000 daily calls that resets every midnight.
- You can’t use wildcards, so you must load each file individually.
- Cron jobs won’t alert you if something goes wrong.
- You need to set up separate schemas for each API endpoint in BigQuery.
- Not ideal for real-time data needs.
- Extra code is needed for data cleaning and transformation.
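For example, to work within the rate limit, manual scripts typically need retry logic around each API call. A generic exponential-backoff wrapper in Node.js (our sketch, not part of any HubSpot SDK; `fn` stands for any API call that may throw on an HTTP 429 response):

```javascript
// Retry `fn` with exponential backoff: wait delayMs, then 2x,
// then 4x, ... giving up after `retries` failed attempts.
async function withBackoff(fn, retries = 3, delayMs = 1000) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // out of attempts: rethrow
      const wait = delayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, wait));
    }
  }
}
```

A call like `withBackoff(() => axios.get(url, config))` would then retry transient failures automatically instead of silently dropping a page of data.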
What are some of the reasons for using Hevo Data?
- Exceptional Security: Its fault-tolerant architecture guarantees that no information or data will be lost, so you need not worry.
- Scalability: Hevo Data is built to scale out at a fraction of the cost and with almost zero delay, making it suitable for contemporary extensive data requirements.
- Built-in Connectors: Hevo Data has more than 150 connectors across databases and SaaS platforms, including HubSpot as a source and Google BigQuery as a destination. It even has built-in webhook and RESTful API connectors designed specifically for custom sources.
- Incremental Data Load: It utilizes bandwidth efficiently by only transferring modified data in real time.
- Auto Schema Mapping: Hevo Data manages schema automatically by detecting incoming data format and copying it to the destination schema. You can select between full and incremental mappings according to your data replication needs.
- Easy to use: Hevo Data offers a no-code ETL or ELT load pipeline platform.
Conclusion
HubSpot is a key part of many businesses’ tech stack, enhancing customer relationships and communication strategies, and your growth potential skyrockets when you combine HubSpot data with other sources. Moving your data gives you a single source of truth, which can significantly improve your decision-making.
We’ve discussed two methods to move data. The manual process requires a lot of configuration and effort; Hevo Data, by contrast, does all the heavy lifting for you with a simple, intuitive process. Hevo Data helps you integrate data from multiple sources like HubSpot and load it into BigQuery for real-time analysis. It’s user-friendly, reliable, and secure, and it makes data transfer hassle-free.
Sign up for a 14-day free trial with Hevo and connect HubSpot to BigQuery in minutes. Also, check out Hevo’s unbeatable pricing or get a custom quote for your requirements.
FAQs
Q1. How often can I sync my HubSpot data with BigQuery?
You can sync your HubSpot data with BigQuery as often as needed. With tools such as Hevo Data, you can set up real-time sync to keep your data up-to-date.
Q2. What are the costs associated with this integration?
The costs of integrating HubSpot with BigQuery depend on the tool you use and the amount of data you transfer. Hevo Data offers a flexible pricing model; see Hevo’s pricing page for details. BigQuery costs are based on the amount of data stored and processed.
Q3. How secure is the data transfer process?
The data transfer process is highly secure. Hevo Data ensures data security with its fault-tolerant architecture, access controls, data encryption, and compliance with industry standards, ensuring your data is always protected throughout the transfer.
Q4. What support options are available if I encounter issues?
Hevo Data offers extensive support options, including detailed documentation, a dedicated support team through our Chat support available 24×5, and community forums. If you run into any issues, you can easily reach out for assistance to ensure a smooth data integration process.
Chirag is a seasoned support engineer with over 7 years of experience, including over 4 years at Hevo Data, where he's been pivotal in crafting core CX components. As a team leader, he has driven innovation through recruitment, training, process optimization, and collaboration with multiple technologies. His expertise in lean solutions and tech exploration has enabled him to tackle complex challenges and build successful services.