Are you looking for simple ways to move your data from Asana to BigQuery? Are you torn between building a custom data pipeline and using an automated solution? Read on to find the best way for you to transfer data from Asana to BigQuery.

Moving data from Asana to BigQuery lets you analyze productivity, take timely action to meet deadlines, and forecast plans. With Google BigQuery’s petabyte-scale storage, exceptional query analytics, and machine learning platform, you can quickly identify what is and isn’t working and boost the efficiency and productivity of your teams.

This blog talks about the two methods you can use to set up a connection from Asana to BigQuery: using custom ETL scripts and with the help of a third-party tool, Hevo.

Introduction to Asana


Asana is a work management platform, available as a cloud service, that helps users organize, track, and manage their tasks. Asana lets teams plan and structure work in a way that suits their organization. Everything in a typical organization, from project planning to task assignment, risk forecasting, assessing roadblocks, and revising plans, can be handled on a single platform. All this comes at a flexible pricing plan based on the number of users per month, and there is also a free version for teams of up to 15 members. Almost 70,000 organizations worldwide use Asana to manage their work.

Solve your data replication problems with Hevo’s reliable, no-code, automated pipelines with 150+ connectors.
Get your free trial right away!

Introduction to BigQuery


Google BigQuery is Google’s cloud-based, columnar enterprise data warehouse. You can run SQL queries to find meaningful insights through its web UI or through the bq command-line tool.

Since Asana is closely coupled with the day-to-day activities of an organization, it is only natural that organizations will want to import that data into their databases or data warehouses for analysis and building insights. In this blog post, you will learn to load data from Asana to BigQuery – one of the most widely used fully managed data warehouse services.

Prerequisites

You need to have the following prerequisites in place before going ahead:

  • A GCP project and service account with BigQuery Editor and Cloud Run admin access.
  • Access to the Asana API (for example, a Personal Access Token) to get the data out of Asana.
  • Basic understanding of Python and cron jobs.

Asana to BigQuery: Methods to Move Data

Method 1: Writing Custom Scripts to Move Data from Asana to BigQuery

In this method, you will write custom code and schedule it to run at intervals to move your data from Asana to BigQuery. To achieve data consistency and ensure no discrepancies arise, you will have to constantly monitor and invest in maintaining the infrastructure.

Method 2: Using Hevo Data to Move Data from Asana to BigQuery

Hevo, a No-code Data Pipeline, can help you automate the process of data transfer from Asana to BigQuery without writing any code. You can utilize its plug-and-play platform to set the data moving in a few minutes with 100% accuracy and zero data loss.

Get Started with Hevo for Free

Understanding the Methods to Connect Asana and BigQuery

Here are the methods you can use to connect Asana and BigQuery in a seamless fashion:

Method 1: Writing Custom Scripts to Move Data from Asana to BigQuery

Here are the steps involved in moving data from Asana to BigQuery:

Step 1: Extract data from Asana

In this method, you need to write code against some or all of Asana’s APIs to extract data. If you want to get updated data periodically, make sure the code fetches incremental data and runs on a cron job. Store this data in a Cloud Storage bucket, and ensure that the incremental files being generated are uniquely named.

Example of Asana API request:

curl https://app.asana.com/api/1.0/users/me \
  -H "Authorization: Bearer 0/a7f89e98g007e0s07da763a"

Example of Asana API JSON Response:

{
   "data":{
      "gid":"1167857344439602",
      "email":"XXXXXXXX@gmail.com",
      "name":"Lahu Bhawar",
      "photo":null,
      "resource_type":"user",
      "workspaces":[
         {
            "gid":"1167857278848929",
            "name":"Engineering",
            "resource_type":"workspace"
         }
      ]
   }
}
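
To automate this step end to end, you can schedule a small script with cron. Below is a minimal Python sketch of the idea, not a definitive implementation: the ASANA_TOKEN and ASANA_PROJECT_GID environment variables and the asana-staging-bucket name are illustrative assumptions, and it assumes the Asana tasks endpoint’s modified_since filter for incremental pulls.

import json
import os
import time

import requests
from google.cloud import storage  # pip install google-cloud-storage

ASANA_TOKEN = os.environ["ASANA_TOKEN"]        # personal access token (assumed)
PROJECT_GID = os.environ["ASANA_PROJECT_GID"]  # Asana project to export (assumed)
BUCKET_NAME = "asana-staging-bucket"           # hypothetical GCS bucket

def fetch_tasks(modified_since):
    """Page through tasks modified after the given ISO-8601 timestamp."""
    url = "https://app.asana.com/api/1.0/tasks"
    headers = {"Authorization": f"Bearer {ASANA_TOKEN}"}
    params = {"project": PROJECT_GID, "modified_since": modified_since,
              "opt_fields": "gid,name,completed,modified_at", "limit": 100}
    while True:
        resp = requests.get(url, headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        yield from body["data"]
        next_page = body.get("next_page")
        if not next_page:
            break
        params["offset"] = next_page["offset"]  # cursor-based pagination

def upload_as_ndjson(tasks):
    """Write tasks as newline-delimited JSON to a uniquely named GCS object."""
    blob_name = f"asana/tasks_{int(time.time())}.json"  # unique file per run
    payload = "\n".join(json.dumps(t) for t in tasks)
    storage.Client().bucket(BUCKET_NAME).blob(blob_name).upload_from_string(payload)

if __name__ == "__main__":
    upload_as_ndjson(list(fetch_tasks("2024-01-01T00:00:00Z")))

The timestamped object name keeps each incremental file uniquely identified, as required above; in production you would persist the last successful run time instead of hard-coding it.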

Step 2: Create a BigQuery Table

Create tables and columns in BigQuery and map the fields in Asana’s JSON files to this schema. While doing this, ensure that the data types are compatible between Asana and BigQuery.
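As an illustration, a BigQuery schema file covering the user payload shown above might look like the sketch below. This is an assumed example, not a fixed schema: the mytimestamp column is an added load timestamp that matches the partitioning field used in the bq load command later, and the exact fields depend on which Asana objects you extract.

[
  {"name": "gid", "type": "STRING", "mode": "NULLABLE"},
  {"name": "email", "type": "STRING", "mode": "NULLABLE"},
  {"name": "name", "type": "STRING", "mode": "NULLABLE"},
  {"name": "photo", "type": "STRING", "mode": "NULLABLE"},
  {"name": "resource_type", "type": "STRING", "mode": "NULLABLE"},
  {"name": "workspaces", "type": "RECORD", "mode": "REPEATED", "fields": [
    {"name": "gid", "type": "STRING", "mode": "NULLABLE"},
    {"name": "name", "type": "STRING", "mode": "NULLABLE"},
    {"name": "resource_type", "type": "STRING", "mode": "NULLABLE"}
  ]},
  {"name": "mytimestamp", "type": "TIMESTAMP", "mode": "NULLABLE"}
]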

Step 3: Create a View in BigQuery and Load Data

BigQuery is not designed for row-by-row updates or SQL “upsert” operations. Instead, append new records and create a view on top of the table that surfaces only the latest version of each record. Since BigQuery can analyze petabytes of data, the extra rows are not an issue, but do ensure that the view and tables are optimized per BigQuery’s computing requirements; see Google’s BigQuery documentation on query performance optimization for details.
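Here is a hedged sketch of such a view using the google-cloud-bigquery Python client. The dataset, table, and column names (target_dataset.target_table, gid, mytimestamp) are carried over from the examples in this post and would need to match your own schema.

from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()

# Expose only the most recent row per Asana gid, ranked by the load timestamp.
client.query("""
CREATE OR REPLACE VIEW target_dataset.latest_records AS
SELECT * EXCEPT(rn)
FROM (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY gid ORDER BY mytimestamp DESC) AS rn
  FROM target_dataset.target_table
)
WHERE rn = 1
""").result()  # wait for the DDL statement to finish

Downstream queries then read from the view, so duplicate historical rows in the base table never surface in analysis.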

Step 4: Load Data using BigQuery API

A Cloud Storage bucket holds the files to be loaded into BigQuery. To insert the data from the Cloud Storage bucket into BigQuery, you can use the BigQuery API. Following are the steps to do this:

  • Upload the file to your Google Cloud Storage bucket using the command below.
gsutil cp source_path/file_name gs://target_path/
  • Using the bq load utility, load the response file to the BigQuery table.
bq load \
    --source_format=NEWLINE_DELIMITED_JSON \
    --time_partitioning_field mytimestamp \
    target_dataset.target_table \
    gs://gcs_bucket_name/asanasourcedata.json \
    ./targetschema.json
  • Ensure that the BigQuery table is partitioned on the date column (the --time_partitioning_field above) and clustered on the columns you filter on most often, such as the task identifier.
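
Alternatively, you can perform the same load through the BigQuery API itself. Below is a minimal Python sketch using the google-cloud-bigquery client; the bucket, dataset, table, and partition-field names are the same placeholders used in the commands above.

from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    time_partitioning=bigquery.TimePartitioning(field="mytimestamp"),
    autodetect=True,  # or pass an explicit schema= list instead
)

# Load the staged NDJSON file from Cloud Storage into the target table.
load_job = client.load_table_from_uri(
    "gs://gcs_bucket_name/asanasourcedata.json",
    "target_dataset.target_table",
    job_config=job_config,
)
load_job.result()  # block until the load job completes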

Step 5: Monitor Schema Mapping

In addition to this, you need constant monitoring in place to detect any change in the Asana schema. You can use Stackdriver metrics and alerting to detect schema changes. If the structure of the incoming data changes, you will need to modify, update, and redeploy the code.
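One lightweight way to catch such drift, sketched below under the assumption that you maintain an EXPECTED_FIELDS set matching the schema you mapped in Step 2, is to compare each API record’s keys against that set and log a warning, which a Stackdriver log-based alert can then pick up.

import logging

# Fields mapped into the BigQuery schema in Step 2 (assumed example set).
EXPECTED_FIELDS = {"gid", "email", "name", "photo", "resource_type", "workspaces"}

def check_schema_drift(record: dict) -> None:
    """Warn when Asana returns fields we have not mapped, or drops known ones."""
    unexpected = set(record) - EXPECTED_FIELDS
    missing = EXPECTED_FIELDS - set(record)
    if unexpected or missing:
        logging.warning("Asana schema drift: unexpected=%s missing=%s",
                        sorted(unexpected), sorted(missing))

# Example: run the check on every record before uploading it.
check_schema_drift({"gid": "1167857344439602", "name": "Lahu Bhawar"})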

Cloud Run offers a containerized way to deploy your code and trigger it on a schedule using a cron job. Since Cloud Run is a serverless platform, you can concentrate on your code instead of the underlying data infrastructure.

Limitations of using Custom Scripts to Connect Asana to BigQuery

  • Before taking this approach, you need to be familiar with Cloud Build or Docker to deploy your code.
  • You need to monitor the Asana APIs for changes and modify your data tables accordingly; any change also means modifying the code and redeploying it.
  • You need to add a data validation step to ensure that the custom script is moving data correctly.
  • You need an alerting mechanism, such as a Stackdriver alert, so that whenever the API call to load data fails, you are notified and can debug.
  • You will need dedicated engineering bandwidth to monitor this process and make changes as required.

Method 2: Using Hevo Data to Move Data from Asana to BigQuery


Hevo, a No-code Data Pipeline, is an easy way to move data from Asana to BigQuery without the hassle of writing custom code. It helps you overcome all the above limitations by loading data from Asana to BigQuery in two simple steps:

  • Step 1: Authenticate and connect Asana as a source
  • Step 2: Configure the BigQuery table where you want to load data and start moving data

Hevo’s fault-tolerant, dependable data integration platform will ensure that your data is securely moved from Asana to BigQuery in real-time.

Sign up here for a 14-Day Free Trial!

The Hevo Advantage: 

  • Code-free data loading: You need not invest any engineering resources to write and maintain any ETL scripts or cron jobs. There is no need for constant monitoring and maintenance of the data transfer process.
  • Set up in minutes: Since Hevo is fully managed, the setup time is minimal. Data is moved from Asana to BigQuery in minutes.
  • Transformations: Hevo provides preload transformations through Python code. It also allows you to run transformation code for each event in the pipelines you set up. To carry out a transformation, you edit the properties of the event object received as a parameter by the transform method. Hevo also offers drag-and-drop transformations such as Date and Control Functions, JSON, and Event Manipulation, to name a few. These can be configured and tested before being put to use.
  • Zero data loss: Hevo reliably delivers your data from Asana to BigQuery ensuring that the data is accurate and there is no loss of data.
  • Automatic schema mapping: Hevo automatically detects schema, any changes, and maps the data accordingly. It will detect any change in the incoming Asana schema and make necessary changes in BigQuery, without any human intervention.
  • Hundreds of integrations: In addition to Asana, Hevo can load data from 100+ other data sources, ensuring that you can easily scale as your business requirements grow. It supports various destinations including the Google BigQuery, Amazon Redshift, and Snowflake data warehouses; Amazon S3 data lakes; and MySQL, SQL Server, TokuDB, DynamoDB, and PostgreSQL databases, to name a few.

Conclusion

This blog talks about the two methods you can use to set up a connection from Asana to BigQuery: using custom ETL scripts and with the help of a third-party tool, Hevo. It also gives a brief overview of Asana and BigQuery highlighting their key features and benefits before diving into the setup process.

Visit our Website to Explore Hevo

These methods, however, can be challenging, especially for a beginner & this is where Hevo saves the day. Hevo Data, a No-code Data Pipeline, helps you transfer data from a source of your choice in a fully automated and secure manner without having to write code repeatedly. Hevo, with its strong integration with 100+ sources & BI tools, allows you to not only export & load data but also transform & enrich your data & make it analysis-ready in a jiffy.

Want to take Hevo for a spin? Sign Up here for a 14-day free trial and experience the feature-rich Hevo suite first hand. You can also have a look at our unbeatable pricing that will help you choose the right plan for your business needs!

What is your preferred method to move data from Asana to BigQuery? Please share your experience in the comments section.

Suraj Poddar
Principal Frontend Engineer, Hevo Data

Suraj has over a decade of experience in the tech industry, with a significant focus on architecting and developing scalable front-end solutions. As a Principal Frontend Engineer at Hevo, he has played a key role in building core frontend modules, driving innovation, and contributing to the open-source community. Suraj's expertise includes creating reusable UI libraries, collaborating across teams, and enhancing user experience and interface design.
