One of the most valuable capabilities in any enterprise is processing data, as streams or batches, with a platform like Dataflow and storing it in a Cloud-based Data Warehouse like Google BigQuery. This is where the combination of Dataflow and BigQuery comes into play.

Are you struggling to find a simple step-by-step guide to help you stream data from Dataflow to BigQuery? If so, you've landed in the right place. Follow this easy guide to transfer data from Dataflow to BigQuery in no time, hassle-free and without compromising efficiency.

Upon a complete walkthrough of this article, you will be able to stream your data to BigQuery using Dataflow for analysis, and it will help you build a customized ETL pipeline for your organization. Along the way, you will gain a deeper understanding of the tools and techniques involved.


Prerequisites

  • Working knowledge of Google BigQuery.
  • Working knowledge of Dataflow.
  • A Google Cloud account with access to BigQuery.

What is Google BigQuery?


Google BigQuery is a fully-managed data warehouse offered by Google. Beyond its querying layer, which delivers fast, consistent performance, it provides users with features such as BigQuery ML, BigQuery QnA, and Connected Sheets. It also allows users to bring in data from a wide variety of external sources such as Cloud SQL, Google Drive, Sheets, Zendesk, and Salesforce.

Automated Data Pipelines like Hevo Data help to perform this Data Replication in a seamless manner.

Some Key Features of Google BigQuery are given below:

  • Scalability: BigQuery offers true scalability and consistent performance using its massively parallel computing and secure storage engine.
  • Data Ingestion Formats: BigQuery allows users to load data in various formats such as Avro, CSV, and JSON.
  • Built-in AI & ML: It supports predictive analysis using its AutoML Tables feature, a codeless interface that helps develop models with best-in-class accuracy. BigQuery ML is another feature that supports algorithms such as K-means and Logistic Regression.
  • Parallel Processing: It uses a cloud-based parallel query processing engine that reads data from thousands of disks at the same time. This is one of the main factors that enable the transfer of data from Dataflow to BigQuery efficiently.
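Once data lands in BigQuery, verifying it is one query away. The sketch below uses the official Python client to count rows in a table; it assumes the google-cloud-bigquery package is installed and application default credentials are configured, and the project, dataset, and table names are placeholders:

```python
def count_rows(project_id, dataset, table):
    """Run a simple aggregate query against a BigQuery table.

    Requires the google-cloud-bigquery package and application
    default credentials; the import lives inside the function so
    this sketch can be loaded without the dependency installed.
    """
    from google.cloud import bigquery  # pip install google-cloud-bigquery

    client = bigquery.Client(project=project_id)
    query = f"SELECT COUNT(*) AS n FROM `{project_id}.{dataset}.{table}`"
    rows = client.query(query).result()  # blocks until the job finishes
    return next(iter(rows)).n

# Hypothetical usage:
# count_rows("my-project", "my_dataset", "customers")
```

The query job itself runs inside BigQuery's parallel engine; the client only submits SQL and fetches the result.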

For further information on Google BigQuery, you can check the official website here.

What is Dataflow?


Dataflow is a fully-managed data processing service by Google that follows a pay-as-you-go pricing model. Being serverless, it helps organizations leverage the robust functionality of ETL tools without putting much effort into building and maintaining them: Dataflow provisions and manages all the resources required to carry out data processing operations.

Dataflow runs pipelines built with the Apache Beam SDK and supports both batch and stream data processing. It also allows users to set up commonly used source-target patterns with ease, using Google's open-source templates.
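To make the batch/stream idea concrete, here is a minimal sketch of a Beam pipeline that reads CSV lines and writes them to BigQuery. It is an illustration only: it assumes the apache-beam[gcp] package is installed, and the bucket, table, and schema names are placeholders.

```python
def build_pipeline(input_path, output_table):
    """Sketch of a Beam batch pipeline: read CSV lines from Cloud
    Storage and write them to a BigQuery table. The import lives
    inside the function so the sketch can be loaded without Beam
    installed; executing it requires apache-beam[gcp]."""
    import apache_beam as beam

    def to_row(line):
        # Map one CSV line onto the target table's columns.
        name, gender, age, city = line.split(",")
        return {"name": name, "gender": gender, "age": age, "city": city}

    pipeline = beam.Pipeline()
    (
        pipeline
        | "Read" >> beam.io.ReadFromText(input_path)
        | "Parse" >> beam.Map(to_row)
        | "Write" >> beam.io.WriteToBigQuery(
            output_table,  # e.g. "my-project:my_dataset.customers"
            schema="name:STRING,gender:STRING,age:STRING,city:STRING",
        )
    )
    return pipeline  # call pipeline.run() to execute

# Hypothetical usage:
# build_pipeline("gs://my-bucket/input.csv",
#                "my-project:my_dataset.customers").run()
```

Swapping `ReadFromText` for a streaming source such as Pub/Sub is what turns the same shape of pipeline into a streaming job.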

For further information on Dataflow, you can check the official website here.

Need for Streaming Data from Dataflow to BigQuery


Google BigQuery, being a fully-managed Data Warehouse service by Google, requires users to bring in data from a variety of sources. A typical enterprise architecture involves updating data from a transactional database in a continuous fashion to ensure that analysts always have up-to-date data in the BigQuery Data Warehouse.

Writing custom code to manage data loading operations across diverse data types is challenging, time-consuming, and demands a lot of technical expertise, so it is usually better to use a fully-managed service like Dataflow for such operations. By setting up a transfer of data from Dataflow to BigQuery, you can manage all your data in an organized manner.

Stream your Data to Google BigQuery using Hevo’s No-Code Data Pipeline

Hevo Data, an Automated No-Code Data Pipeline, helps you transfer data from a multitude of sources to Google BigQuery in a completely hassle-free manner. Hevo's Pipeline transforms your data into an analysis-ready format without requiring you to write a single line of code, and automated scripts can then be easily incorporated into the solution.

Get Started with Hevo for Free

“By using a Data Pipeline from Hevo, you can reduce your Data Streaming time & effort significantly! In addition, Hevo’s native integration with multiple Data Sources and BI Tools will help you to set up your Streaming platform, Visualize it and gain actionable insights easily!”

Experience an entirely automated hassle-free setup of Data Streaming. Try our 14-day full access free trial today!

Steps to Stream Data from Dataflow to BigQuery

Google provides users with a diverse set of open-source templates to set up a streaming workflow from Dataflow to BigQuery. With Google Dataflow in place, you can create a job using one of the predefined templates to transfer data to BigQuery.

This can be implemented using the following steps:

Step 1: Using a JSON File to Define your BigQuery Table Structure

To start streaming data from Dataflow to BigQuery, you first need a JSON file that defines the structure of your BigQuery table. Create it as follows:

    {
        "BigQuery Schema": [{
            "name": "name",
            "type": "STRING"
        }, {
            "name": "gender",
            "type": "STRING"
        }, {
            "name": "age",
            "type": "STRING"
        }, {
            "name": "city",
            "type": "STRING"
        }]
    }
Once you’ve created the JSON file, you need some input data. Create a file that will act as the data input, containing the following line:

"Rock","m","40","new york"

To convert the input file into the schema of your BigQuery table, create a JavaScript function that transforms each line of the input file into your desired BigQuery table schema, as follows:

function transform(line) {
    var values = line.split(',');
    var custObj = new Object();
    custObj.name = values[0];
    custObj.gender = values[1];
    custObj.age = values[2];
    custObj.city = values[3];
    var outJson = JSON.stringify(custObj);
    return outJson;
}
Save this JavaScript function as “transformfunction.js” in your Google Cloud Storage. This is how you can configure the schema for your BigQuery tables and transform your data using JavaScript.
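Because mistakes in the UDF only surface once the Dataflow job runs, it helps to sanity-check the mapping locally first. Here is a small Python equivalent of the JavaScript transform above (field names and order mirror the example; the quote-stripping matches the sample input line):

```python
import json

def transform(line):
    """Mirror of the JavaScript UDF: turn one quoted CSV line into
    the JSON object matching the BigQuery table schema."""
    name, gender, age, city = [v.strip('"') for v in line.split(",")]
    return json.dumps(
        {"name": name, "gender": gender, "age": age, "city": city}
    )

print(transform('"Rock","m","40","new york"'))
# {"name": "Rock", "gender": "m", "age": "40", "city": "new york"}
```

If the local output matches the column order of your schema file, the Dataflow-side UDF should produce the same rows.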

Step 2: Creating Jobs in Dataflow to Stream Data from Dataflow to BigQuery

The next step in transferring data from Dataflow to BigQuery, once you’ve configured your BigQuery table and input data, is to go to the Google Dataflow Console and sign in with your Google account credentials:

Dataflow to BigQuery: Sign In to Google Cloud Platform

The Dataflow Console will now open up on your screen. Click on the “Create job from template” option, found at the top of your screen. The job template window will now open up as follows:

Dataflow to BigQuery: Configuring Job Template

Configure the job template by providing the following information carefully:

  • Dataflow Template: Here, you need to select the “Text files on Cloud Storage to BigQuery” option from the drop-down list.
  • Required Parameters: Here, you need to provide the locations of the JavaScript and JSON schema files that you created in the previous steps.

To finish configuring your Dataflow job template, you now need to provide the name of the target BigQuery table, the location of your input text file, and the temporary directory locations.

Once you’ve provided all the necessary information, select the Google-managed encryption key and click on the Run Job option. You can now track the progress of the job in the Dataflow Console, and once the job finishes, you can log in to your BigQuery account and check the target table.

This is how you can use built-in job templates to stream data from Dataflow to BigQuery.
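The same job can also be launched from the command line instead of the Console. The sketch below only assembles the gcloud invocation and prints it; the project, bucket, region, and table names are placeholders, and the parameter names follow the documented text-files-to-BigQuery template — verify them against your gcloud version before running:

```shell
# Placeholders -- replace with your own project, bucket, and table.
PROJECT="my-project"
BUCKET="gs://my-bucket"
TABLE="${PROJECT}:my_dataset.customers"

# The classic "Text Files on Cloud Storage to BigQuery" template.
CMD="gcloud dataflow jobs run text-to-bq \
  --gcs-location gs://dataflow-templates/latest/GCS_Text_to_BigQuery \
  --region us-central1 \
  --parameters \
javascriptTextTransformGcsPath=${BUCKET}/transformfunction.js,\
javascriptTextTransformFunctionName=transform,\
JSONPath=${BUCKET}/schema.json,\
inputFilePattern=${BUCKET}/input.txt,\
outputTable=${TABLE},\
bigQueryLoadingTemporaryDirectory=${BUCKET}/tmp"

# Print the assembled command; run it manually once verified.
echo "$CMD"
```

Scripting the launch this way makes the job reproducible, which matters once you rerun it on new input files.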

Limitations of the Above Method

Although the above Method of transferring data from Dataflow to BigQuery works, it has multiple limitations:

  • The above approach involved only a few clicks because of the built-in templates. However, these templates exist only for the most common scenarios, and mainly cover Google-based source and target databases.
  • Dataflow’s integration with third-party cloud services is limited. If you are using a service like Marketo or Salesforce, you will have a tough time loading the data into BigQuery, since it involves accessing the service APIs to capture data.

What makes Data Streaming using Hevo’s Data Pipeline Unique

As shown above, the Manual Method of transferring data from Dataflow to BigQuery depends heavily on built-in templates and offers poor support for Cloud-based applications. This is where Automated No-Code Pipelines like Hevo come in.

Hevo Data, an Automated No-Code Data Pipeline, is fully managed and completely automates the process of not only loading your data into BigQuery but also enriching it and transforming it into an analysis-ready form, without your having to write a single line of code.

Hevo offers you the option to load data to your BigQuery Destination in near real-time by streaming the write operations. This offers the advantage of making data available for querying without having to wait for a job to finish loading data into BigQuery.

Check out what makes Data Streaming using Hevo amazing:

  • Integrations: Hevo’s fault-tolerant Data Pipeline offers you a secure option to unify data from 150+ data sources (including 50+ free sources) and store it in Google BigQuery or any other Data Warehouse of your choice.
  • Auto Schema Mapping: Hevo takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the BigQuery schema.
  • Quick Setup: Hevo, with its holistic features, can be set up efficiently. Moreover, its simple and interactive UI makes it extremely easy for new customers to get started and perform operations.
  • High Scalability: As the number of sources and the volume of your data grows, Hevo scales horizontally, handling millions of records per minute with very little latency.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.

With a constant movement of data in Real-Time, Hevo allows you to combine data from a source of your choice and load it to Google BigQuery seamlessly.

Load Data to BigQuery with Hevo for Free


Conclusion

This article taught you how to stream data from Dataflow to BigQuery, explaining the concepts behind every step to help you understand and implement them efficiently. These steps, however, are manual and can be taxing: implementing them consumes time and resources, writing custom scripts is error-prone, and you need full working knowledge of the backend tools to implement an in-house data transfer mechanism. You will also have to regularly map your new files to the BigQuery Data Warehouse.

Hevo Data provides an Automated No-code Data Pipeline that empowers you to overcome the above-mentioned limitations. Hevo caters to 150+ data sources (including 50+ free sources) and can seamlessly ETL your data to BigQuery within minutes. Hevo’s Data Pipeline enriches your data and manages the transfer process in a fully automated and secure manner without having to write any code. It will make your life easier and make data migration hassle-free.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. Also, check out our transparent pricing page for more information.

Tell us about your experience of streaming data from Dataflow to BigQuery! Let us know in the comments section below.

Software Developer, Hevo Data

Talha is a seasoned Software Developer, currently driving advancements in data integration at Hevo Data, where he has been instrumental in shaping a cutting-edge data integration platform for the past four years. With a significant tenure at Flipkart prior to his current role, he brought innovative solutions to the space of data connectivity and software development.

No-code Data Pipeline For BigQuery