Asana to BigQuery: Move Data Instantly

Data Integration, Tutorials • May 20th, 2020

Are you looking for a simple way to move your data from Asana to BigQuery? Are you unsure whether to build a custom data pipeline or use an automated solution? Read on to find the best way for you to transfer data from Asana to BigQuery.

Moving data from Asana to BigQuery lets you analyze project activity, act in time to meet deadlines, measure productivity, and forecast future work. With Google BigQuery's petabyte-scale storage, fast query analytics, and machine learning capabilities, you can quickly identify what is and isn't working and boost the efficiency and productivity of your teams.

What is Asana?

Asana is a cloud-based work management platform that helps users organize, track, and manage their tasks. It lets organizations plan and structure work in a way that suits them: everything from project planning to task assignment, risk forecasting, assessing roadblocks, and changing plans can be handled on a single platform. Pricing is flexible, based on the number of users per month, and a free version is available for teams of up to 15 members. Almost 70,000 organizations worldwide use Asana to manage their work.

What is BigQuery?

Google BigQuery is Google's cloud-based, columnar enterprise data warehouse. You can run SQL queries to find meaningful insights through its web UI or through the bq command-line tool.

Since Asana is closely coupled with an organization's day-to-day activities, it is only natural to want that data in a database or data warehouse for analysis and insight building. In this blog post, you will learn how to load data from Asana to BigQuery, one of the most widely used fully managed data warehouse services.


You need to have the following prerequisites in place before going ahead:

  1. A GCP project and a service account with BigQuery Editor and Cloud Run Admin access.
  2. An Asana personal access token to pull data via the Asana API.
  3. Basic understanding of Python and cron jobs.

Asana to BigQuery: Methods to move data

Method 1: Writing custom scripts to move data from Asana to BigQuery

In this method, you write custom code and schedule it to run at intervals to move your data from Asana to BigQuery. To achieve data consistency and ensure no discrepancies arise, you will have to monitor the pipeline constantly and invest in maintaining the infrastructure.

Method 2: Automatically moving data from Asana to BigQuery using Hevo

Hevo, a No-code Data Pipeline, can help you automate the process of data transfer from Asana to BigQuery without writing any code. You can utilize its plug-and-play platform to set the data moving in a few minutes with 100% accuracy and zero data loss.

Method 1: Writing custom scripts to move data from Asana to BigQuery

Here are the steps involved in moving data from Asana to BigQuery:

Step 1: Extract data from Asana

In this method, you write code against Asana's REST API to extract some or all of your data. If you need updated data on a periodic basis, make sure the code fetches incremental data via a cron job. Store the extracted data in a Cloud Storage bucket, and ensure that each incremental file generated is uniquely identifiable (for example, with a timestamp in its name).

Example of an Asana API request (here, fetching the authenticated user):

  curl https://app.asana.com/api/1.0/users/me \
    -H "Authorization: Bearer 0/a7f89e98g007e0s07da763a"

Example of the Asana API JSON response (abridged):

  {
    "data": {
      "gid": "…",
      "name": "Lahu Bhawar",
      "resource_type": "user"
    }
  }

Step 2: Create a BigQuery table

Create tables and columns in BigQuery and map the fields in Asana's JSON files to this schema. While doing this, ensure data type compatibility between Asana and BigQuery (for example, Asana GIDs are strings, and modification times map to TIMESTAMP).
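As a sketch of what such a mapping might look like with the BigQuery Python client (the table and field names follow the extraction example above and are illustrative only):

from google.cloud import bigquery

client = bigquery.Client()
schema = [
    bigquery.SchemaField("gid", "STRING"),        # Asana GIDs are strings
    bigquery.SchemaField("name", "STRING"),
    bigquery.SchemaField("completed", "BOOLEAN"),
    bigquery.SchemaField("modified_at", "TIMESTAMP"),
]
table = bigquery.Table("my-project.asana.tasks", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(field="modified_at")  # date-partitioned
client.create_table(table)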

Step 3: Create a View in BigQuery and load data

BigQuery is not designed for row-by-row updates or SQL upserts. Instead, append incoming records and create a view on top of the table that surfaces only the latest version of each record. Since BigQuery can analyze petabytes of data, querying through such a view is not a problem, but ensure that the view and underlying tables are optimized for BigQuery's computing requirements; see Google's BigQuery performance optimization documentation for details.
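One common pattern is a view that keeps only the newest row per task GID. A minimal sketch, assuming the hypothetical table created above:

from google.cloud import bigquery

client = bigquery.Client()
view = bigquery.Table("my-project.asana.tasks_latest")
# Deduplicate: keep only the most recently modified row for each task GID.
view.view_query = """
SELECT * EXCEPT(rn)
FROM (
  SELECT *, ROW_NUMBER() OVER (PARTITION BY gid ORDER BY modified_at DESC) AS rn
  FROM `my-project.asana.tasks`
)
WHERE rn = 1
"""
client.create_table(view)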

Step 4: Load data using BigQuery API

A Cloud Storage bucket is where the files to be loaded into BigQuery are staged. To move the data from the bucket into BigQuery, you can use the bq command-line tool or the BigQuery API (a Python-client version of this load is sketched after these steps). Following are the steps to do this:

  1. Upload the file to the Google Cloud Storage bucket using the command below.
gsutil cp source_path/file_name gs://target_path/

  2. Using the bq load utility, load the response file into the BigQuery table.

bq load \
    --source_format=NEWLINE_DELIMITED_JSON \
    --autodetect \
    --time_partitioning_field mytimestamp \
    mydataset.mytable \
    gs://target_path/file_name

  3. Ensure the BigQuery table is partitioned on a date column and clustered on the fields you filter by most often, such as the task identifier.
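If you prefer the BigQuery API over the bq CLI, the same load can be expressed with the Python client. A sketch, reusing the staged file path from above and the hypothetical table from Step 2:

from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    time_partitioning=bigquery.TimePartitioning(field="modified_at"),
    autodetect=True,  # or supply an explicit schema for stricter typing
)
load_job = client.load_table_from_uri(
    "gs://target_path/file_name",
    "my-project.asana.tasks",
    job_config=job_config,
)
load_job.result()  # blocks until the load job completes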

Step 5: Monitor schema mapping

In addition, you need constant monitoring to detect any change in the Asana schema. You can use Stackdriver (now Cloud Monitoring) metrics and alerting to detect schema changes, and you will need to modify, update, and redeploy the code whenever the incoming data structure changes.
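One lightweight way to catch such drift is to compare each incoming record against the table's known columns and log an error that a log-based alert can pick up. A sketch, assuming the hypothetical table from the earlier steps:

import logging
from google.cloud import bigquery

def check_schema(record):
    # Compare the incoming record's keys with the columns BigQuery knows about.
    table = bigquery.Client().get_table("my-project.asana.tasks")
    known = {field.name for field in table.schema}
    unexpected = set(record) - known
    if unexpected:
        # A log-based alert on this message can notify you of schema drift.
        logging.error("Asana schema drift: new fields %s", sorted(unexpected))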

Cloud Run is a containerized way to deploy your code and run it on a cron schedule. Since Cloud Run is a serverless platform, you can concentrate on your code instead of the underlying infrastructure.
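For Cloud Run, the extraction code just needs an HTTP entry point that a scheduler (for example, Cloud Scheduler) can hit on a cron schedule. A minimal sketch using Flask, one common choice, with a fixed one-day lookback as an assumption:

import os
from datetime import datetime, timedelta, timezone

from flask import Flask

app = Flask(__name__)

@app.route("/run", methods=["POST"])
def run_pipeline():
    # Look back one day; a real job would persist the last successful run time.
    since = (datetime.now(timezone.utc) - timedelta(days=1)).isoformat()
    extract_incremental(since)  # the function sketched in Step 1
    return "ok", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))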

Limitations of this approach

  1. Before working on this approach, you need to be familiar with Cloud Build or Docker to deploy your code.
  2. You need to monitor the Asana APIs for changes and modify your data tables accordingly, which also means modifying the code and redeploying it.
  3. You need to add a data validation step to ensure that the custom script is moving data correctly.
  4. You need an alerting mechanism, such as a Stackdriver alert, so that whenever an API call to load data fails, you get an alert and can debug.
  5. You will need dedicated engineering bandwidth to monitor this process and make changes as required.

Method 2: Automatically moving data from Asana to BigQuery using Hevo

Hevo, a No-code Data Pipeline, is an easy solution to move data from Asana to BigQuery without the hassle of writing custom code. It helps you overcome all the limitations above by loading data from Asana to BigQuery in two simple steps:

  1. Authenticate and connect Asana as a source
  2. Configure the BigQuery table where you want to load data and start moving data

Hevo’s fault-tolerant, dependable data integration platform will ensure that your data is securely moved from Asana to BigQuery in real-time.

The Hevo Advantage: 

  1. Code-free data loading: You need not invest any engineering resources to write and maintain any ETL scripts or cron jobs. There is no need for constant monitoring and maintenance of the data transfer process.
  2. Set up in minutes: Since Hevo is fully managed, the setup time is minimal. Data is moved from Asana to BigQuery in minutes.
  3. Zero data loss: Hevo reliably delivers your data from Asana to BigQuery ensuring that the data is accurate and there is no loss of data.
  4. Automatic schema mapping: Hevo automatically detects the schema and any changes to it, and maps the data accordingly. It will detect any change in the incoming Asana schema and make the necessary changes in BigQuery without any human intervention.
  5. Hundreds of integrations: In addition to Asana, Hevo can load data from hundreds of additional data sources, ensuring that you easily scale as your business requirements grow.

Sign up for a free 14-day trial with Hevo to move your data from Asana and other data sources to BigQuery seamlessly.

What is your preferred method to move data from Asana to BigQuery? Please share your experience in the comments section.
