As a data engineer, you hold all the cards to make data easily accessible to your business teams. Your team just requested an Adroll to BigQuery connection on priority. We know you don’t want to keep your data scientists and business analysts waiting for critical business insights. As the most direct approach, you can leverage the Adroll APIs. Or you can hunt for a no-code tool that fully automates & manages data integration for you while you focus on your core objectives.

Well, look no further. With this article, get a step-by-step guide to connecting Adroll to BigQuery effectively and quickly, delivering data to your marketing team. 

Replicate Data from Adroll to BigQuery Using APIs

To start replicating data from Adroll to BigQuery, you need to use one of the Adroll APIs, depending on your needs. In the following example, we use the Audience API: 

  • Step 1: Data in Adroll can be exported as JSON with the help of API keys. Adroll provides the Audience API to retrieve the data. Adroll documents a curl command of the following form. You will need to provide your own authentication token and the Audience API endpoint URL (shown as a placeholder below): 
curl -H 'Authorization: Token YOUR_TOKEN' \
  '<AUDIENCE_API_ENDPOINT_URL>'

After hitting the endpoint with curl, you will receive a JSON response like the following:

  "results": [
      "advertisable_eid": "MY_ADVERTISABLE_EID",
      "items_count": 10000,
      "name": "Top 10K",
      "scoring_at": "2019-07-01T16:34:27.448382+00:00",
      "scoring_auc": 0.99100798368454,
      "scoring_by_user_eid": "USER_EID",
      "scoring_filename": "top-1k-domains.csv",
      "scoring_grades": {
        "A": {
          "threshold": 30,
          "items_count": 468,
          "min_item": {
            "domain": "",
            "score": 0.999897
        "C": {
          "threshold": 85,
          "items_count": 365,
          "min_item": {
            "domain": "",
            "score": 0.939022
        "B": {
          "threshold": 80,
          "items_count": 2057,
          "min_item": {
            "domain": "",
            "score": 0.973098
        "D": {
          "threshold": 95,
          "items_count": 2696,
          "min_item": {
            "domain": "",
            "score": 0.029493
        "F": {
          "threshold": 100,
          "items_count": 2562,
          "min_item": null
        "unscored_count": 1852,
        "scored_count": 8148
      "scoring_holdout1_model_id": null,
      "scoring_holdout1_retries": 0,
      "scoring_holdout1_status": null,
      "scoring_holdout2_model_id": null,
      "scoring_holdout2_retries": 0,
      "scoring_holdout2_status": null,
      "scoring_holdout3_model_id": null,
      "scoring_holdout3_retries": 0,
      "scoring_holdout3_status": null,
      "scoring_holdout4_model_id": null,
      "scoring_holdout4_retries": 0,
      "scoring_holdout4_status": null,
      "scoring_holdout5_model_id": null,
      "scoring_holdout5_retries": 0,
      "scoring_holdout5_status": null,
      "scoring_items_count": 862,
      "scoring_items_pending": true,
      "scoring_model_id": "f3f3e5cd-d606-4033-93cd-10c1af2cc232",
      "scoring_production_model_id": null,
      "scoring_production_retries": 0,
      "scoring_production_status": null,
      "scoring_status": "complete",
      "sfdc_company_list_name": null,
      "sfdc_company_list_object_id": null,
      "sfdc_create_accounts": false,
      "sfdc_initial_pull_pending": null,
      "sfdc_scoring_company_list_name": null,
      "sfdc_scoring_company_list_object_id": null,
      "sfdc_sync_state": "none",
      "sfdc_synced_at": null,
      "sfdc_train_pending": null,
      "suggestions_count": 2955,
      "tiers": [
          "eid": "all",
          "items_count": 10000,
          "tal_eid": "TARGET_ACCOUNT_LIST_EID"
          "items_count": 9998,
          "eid": "untiered",
          "name": null,
          "tal_eid": "TARGET_ACCOUNT_LIST_EID"
          "items_count": 2,
          "eid": "TARGET_ACCOUNT_GROUP_EID",
          "name": "UCLA",
          "tal_eid": "TARGET_ACCOUNT_LIST_EID"
      "updated_at": "2019-09-16T13:28:43.678412+00:00",
      "updated_by_user_eid": null

You need to store the response as a JSON file.
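Step 1 can be sketched in Python. This is a minimal illustration, not official AdRoll sample code: the endpoint URL and token below are placeholders you must replace with the Audience API endpoint you are calling and your own API token.

```python
# Hypothetical sketch of Step 1: fetch a JSON payload from the AdRoll
# Audience API with token authentication and persist it to disk.
# ENDPOINT_URL and YOUR_TOKEN are placeholders, not real values.
import json
import urllib.request


def fetch_audience_json(token: str, endpoint_url: str) -> dict:
    """Call the API endpoint with token auth and return the parsed JSON body."""
    request = urllib.request.Request(
        endpoint_url,
        headers={"Authorization": f"Token {token}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


def save_as_json(payload: dict, path: str) -> None:
    """Store the API response on disk as a JSON file."""
    with open(path, "w", encoding="utf-8") as handle:
        json.dump(payload, handle, indent=2)
```

In practice you would call `fetch_audience_json("YOUR_TOKEN", "<AUDIENCE_API_ENDPOINT_URL>")` and pass the result to `save_as_json(...)` to produce the file you load in Step 2.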

  • Step 2: In this step, you import the JSON file into BigQuery. From BigQuery’s web console, click the “Create Table” option and then choose “Create Table from”. Specify JSON as your source format, give your table a name, and choose a dataset. To specify the schema, you can either provide a sample JSON schema or select “auto-detect” in the schema specifications. Other settings, such as the write preference and the number of permissible errors, are also adjustable. 

With the table defined, you can now retrieve your JSON file, confirm the table’s schema, create the table, and load the JSON data into it. 
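One practical wrinkle in Step 2: BigQuery’s JSON loader expects newline-delimited JSON (one object per line), while the Audience API returns a single document with a "results" array. The helper below is a sketch (not Hevo or AdRoll code) that converts the saved response into a newline-delimited file you can then load through the console or the bq CLI.

```python
# Sketch: flatten the Audience API response's "results" array into
# newline-delimited JSON, the format BigQuery load jobs expect for JSON.
import json


def to_newline_delimited(api_response: dict, out_path: str) -> int:
    """Write each element of `results` as one JSON object per line.

    Returns the number of rows written.
    """
    rows = api_response.get("results", [])
    with open(out_path, "w", encoding="utf-8") as handle:
        for row in rows:
            handle.write(json.dumps(row) + "\n")
    return len(rows)
```

Each line of the output file becomes one row in the BigQuery table, so nested objects such as "scoring_grades" land as RECORD columns when you let BigQuery auto-detect the schema.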

This process is an effective way to replicate data from Adroll to BigQuery. It is optimal for the following scenarios:

  • API calls can be wrapped in custom scripts and deployed with detailed instructions for completing each stage of the workflow.
  • Data workflows can be automated with APIs, like the Audience API in this scenario. These scripts can be reused by anyone for repetitive processes.

Using the Adroll APIs might be cumbersome and not a wise choice in the following scenarios:

  • This method requires you to make API calls and code custom workflows, so it demands strong technical knowledge. 
  • Updating the existing API calls and managing workflows requires immense engineering bandwidth and can therefore be a pain point for many users. Maintaining APIs is costly in terms of development, support, and updates.

When the frequency of replicating data from Adroll increases, this process becomes highly monotonous. It adds to your misery when you have to transform the raw data every single time. With the increase in data sources, you would have to spend a significant portion of your engineering bandwidth creating new data connectors. Just imagine — building custom connectors for each source, transforming & processing the data, tracking the data flow individually, and fixing issues. Doesn’t it sound exhausting?

How about you focus on more productive tasks than repeatedly writing custom ETL scripts? This sounds good, right?

In these cases, you can… 

Automate the Data Replication process using a No-Code Tool

You can use automated pipelines to avoid such challenges. Here are the benefits of leveraging a no-code tool:

  • Automated pipelines allow you to focus on core engineering objectives while your business teams can directly work on reporting without any delays or data dependency on you.
  • Automated pipelines provide a beginner-friendly UI that saves the engineering teams’ bandwidth from tedious data preparation tasks.
Solve your data replication problems with Hevo’s reliable, no-code, automated pipelines with 150+ connectors.
Get your free trial right away!

For instance, here’s how Hevo Data, a cloud-based ETL tool, makes Adroll to BigQuery data replication ridiculously easy:

Step 1: Configure Adroll as a Source

Authenticate and configure your Adroll Source.


Step 2: Configure BigQuery as a Destination

In the next step, we will configure BigQuery as the destination.


Step 3: All Done to Set Up Your ETL Pipeline

Once your Adroll to BigQuery ETL pipeline is configured, Hevo will collect new and updated data from Adroll at the pipeline frequency (1 hour by default) and replicate it into BigQuery. Depending on your needs, you can adjust the pipeline frequency from 15 minutes to 24 hours.

Data Replication Frequency

Default Pipeline Frequency: 1 Hr
Minimum Pipeline Frequency: 15 Mins
Maximum Pipeline Frequency: 24 Hrs
Custom Frequency Range (Hrs): 1-24

In a matter of minutes, you can complete this No-Code & automated approach of connecting Adroll to BigQuery using Hevo Data and start analyzing your data.

Hevo Data offers 150+ plug-and-play connectors (including 40+ free sources). It efficiently replicates your data from Adroll to BigQuery, databases, data warehouses, or a destination of your choice in a completely hassle-free & automated manner. Hevo Data’s fault-tolerant architecture ensures that the data is handled securely and consistently with zero data loss. It also enriches the data and transforms it into an analysis-ready form without requiring you to write a single line of code.

Hevo Data’s reliable data pipeline platform enables you to set up zero-code and zero-maintenance data pipelines that just work. Here’s what allows Hevo Data to stand out in the marketplace:

  • Fully Managed: You don’t need to dedicate time to building your pipelines. With Hevo Data’s dashboard, you can monitor all the processes in your pipeline, thus giving you complete control over it.
  • Data Transformation: Hevo Data provides a simple interface to cleanse, modify, and transform your data through drag-and-drop features and Python scripts. It can accommodate multiple use cases with its pre-load and post-load transformation capabilities.
  • Faster Insight Generation: Hevo Data offers near real-time data replication, so you have access to real-time insight generation and faster decision-making. 
  • Schema Management: With Hevo Data’s auto schema mapping feature, all your mappings will be automatically detected and mapped to the destination schema.
  • Scalable Infrastructure: With the increase in the number of sources and volume of data, Hevo Data can automatically scale horizontally, handling millions of records per minute with minimal latency.
  • Transparent pricing: You can select your pricing plan based on your requirements. Different plans are clearly put together on its website, along with all the features it supports. You can adjust your credit limits and spend notifications for any increased data flow.
  • Live Support: The support team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.

Take our 14-day free trial to experience a better way to manage data pipelines.

Get started for Free with Hevo!

What Can You Achieve by Migrating Your Data from Adroll to BigQuery?

Here’s a little something for the data analysts on your team. We’ve mentioned a few core insights you could get by replicating data from Adroll to BigQuery. Does your use case make the list?

  • Know your customer: Get a unified view of your customer journey by combining data from all your channels and user touchpoints. Easily visualize each stage of your marketing & sales funnel and quickly derive actionable insights.   
  • Supercharge your ROAS: Find your high ROAS creatives on which you should be spending more money, thereby boosting your conversions. Identify the different creatives and copy that work best for your customer segment. 
  • Analyze Customer LTV: Get a competitive edge with near-real-time data from all your marketing channels and understand how different targeting, creatives, or products impact your customer LTV. 

Learn more about: AdRoll Shopify Integration

Summing It Up

The Adroll APIs are the right path for you when your team needs data from Adroll only once in a while. However, a custom ETL solution becomes necessary for the increasing data demands of your product or marketing channels. You can free your engineering bandwidth from these repetitive & resource-intensive tasks by selecting Hevo Data’s 150+ plug-and-play integrations.

Visit our Website to Explore Hevo

Saving countless hours of manual data cleaning & standardizing, Hevo Data’s pre-load data transformations get it done in minutes via a simple drag-and-drop interface or your custom Python scripts. No need to go to your data warehouse for post-load transformations. You can simply run complex SQL transformations from the comfort of Hevo Data’s interface and get your data in its final analysis-ready form. 

Want to take Hevo for a ride? Sign Up for a 14-day free trial and simplify your data integration process. Check out the pricing details to understand which plan fulfills all your business needs.

Share your experience of replicating data from Adroll to BigQuery! Let us know in the comments section below!

Harsh Varshney
Research Analyst, Hevo Data

Harsh is a data enthusiast with over 2.5 years of experience in research analysis and software development. He is passionate about translating complex technical concepts into clear and engaging content. His expertise in data integration and infrastructure shines through his 100+ published articles, helping data practitioners solve challenges related to data engineering.
