Working with BigQuery Python Client Simplified: 6 Easy Steps

Data Integration, Data Warehouse, ETL Tutorials, Google BigQuery • September 17th, 2021

Analyzing the data stored in On-Premise Data Warehouses can be a time-consuming task, especially if your data is being generated at an exponential rate and requires continuous scaling to ensure optimal performance. This is one of the biggest reasons that companies today are adopting Cloud-based Data Storage solutions. One such solution is Google BigQuery which is one of the most used and widely accepted Data Warehousing solutions.

Organizations that use Google BigQuery to store large amounts of data often need to access the data through programming languages. The typical requirement for this arises from the need to transform data into various forms suitable for consumption and to move data to other Databases for certain use cases. Python is a very popular language choice for such requirements because of its data manipulation capabilities and ease of integration with data processing frameworks like Spark. There is a BigQuery Python Client Library available for the users to query datasets in Google BigQuery in a seamless manner.

Upon a complete walkthrough of this article, you will gain a decent understanding of Google BigQuery along with the unique features that it offers. You will also learn about the steps required to leverage Google BigQuery Python Client Library to query BigQuery Public Datasets using Python. Read along to learn more about Google BigQuery Python Client Library!

Prerequisites

  • Basic Hands-on experience with Google Cloud Console.
  • Basic Knowledge of Python.

Introduction to Google BigQuery


Google BigQuery is a fully managed Cloud-based Data Warehouse that enables you to manage and analyze your data with built-in capabilities such as Machine learning, Geospatial Analysis, and Business Intelligence. Google BigQuery’s Serverless architecture allows you to use SQL queries to answer your company’s most critical questions without worrying about Infrastructural administration. With Google BigQuery’s Scalable and Distributed Analytics Engine, you can query Terabytes and Petabytes of data in a span of just a few minutes.

Google BigQuery Interfaces include the Google Cloud Console and the Google BigQuery Command-Line tool. Developers and Data Scientists can use Client Libraries in Programming Languages such as Python, Java, JavaScript, and Go, as well as the REST APIs, to transform and manage data. You will see one such example (the BigQuery Python Client Library) later in this article.

Key Features of Google BigQuery


Some of the unique features offered by Google BigQuery are as follows:

  • Scalability: To provide consumers with true Scalability and consistent Performance, Google BigQuery leverages Massively Parallel Processing (MPP) and a Highly Scalable Secure Storage Engine. The entire Infrastructure, with over a thousand machines, is managed by a complex software stack.
  • Storage: Google BigQuery uses a Columnar architecture to store datasets of mammoth scales. Column-based Storage has several advantages, including better Memory Utilization and the ability to scan data faster than typical Row-based Storage.
  • Serverless: Google BigQuery’s Serverless model automatically distributes processing across a large number of machines running in parallel, so any company using Google BigQuery can focus on extracting insights from data rather than configuring and maintaining the Infrastructure/Server. 
  • Integrations: Google BigQuery as part of the Google Cloud Platform (GCP) supports seamless integration with all Google products and services. Google also offers a variety of Integrations with numerous third-party services, as well as the functionality to integrate with application APIs that are not directly supported by Google.

For further information on Google BigQuery, click here to check out their official website.

Introduction to Cloud Client Libraries

Cloud Client Libraries are the recommended way to access Google Cloud APIs programmatically. They greatly reduce the repetitive code you would otherwise have to write to call Google Cloud Services, provide high-level abstractions that are easy to understand, work well with the standard libraries, and integrate smoothly with your codebase. In a later section of this article, you will learn about the steps required to use the BigQuery Python Client Library to query datasets in Google BigQuery.
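As an illustration of that abstraction, here is a hedged sketch of how a client library call replaces hand-written REST plumbing. The `list_dataset_ids` helper below is purely illustrative (it is not part of the library); it takes the client as a parameter so it also works with a stub:

```python
# Sketch of the abstraction a client library provides. With the raw REST
# API, you would build an authenticated HTTP request to the datasets.list
# endpoint and page through JSON responses yourself; with the client
# library, the same task is a single call:
#
#     from google.cloud import bigquery
#     client = bigquery.Client()
#     dataset_ids = [ds.dataset_id for ds in client.list_datasets()]
#
# The helper below is a hypothetical wrapper, not part of the library.

def list_dataset_ids(client):
    """Return the dataset IDs visible to any client exposing list_datasets()."""
    return [ds.dataset_id for ds in client.list_datasets()]
```

Because the helper only depends on the `list_datasets()` method, it can be exercised with a fake client in tests and with a real `bigquery.Client()` in production.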

Simplify Google BigQuery ETL and Analysis with Hevo’s No-code Data Pipeline

A fully managed No-code Data Pipeline platform like Hevo Data helps you integrate and load data from 100+ different sources (including 40+ free sources) to a Data Warehouse such as Google BigQuery or a Destination of your choice in real-time in an effortless manner. Hevo, with its minimal learning curve, can be set up in just a few minutes, allowing users to load data without having to compromise performance. Its strong integration with numerous sources allows users to bring in data of different kinds in a smooth fashion without having to write a single line of code.

Get Started with Hevo for free

Check out some of the cool features of Hevo:

  • Completely Automated: The Hevo platform can be set up in just a few minutes and requires minimal maintenance.
  • Transformations: Hevo provides preload transformations through Python code. It also allows you to run transformation code for each event in the Data Pipelines you set up. You need to edit the event object’s properties received in the transform method as a parameter to carry out the transformation. Hevo also offers drag and drop transformations like Date and Control Functions, JSON, and Event Manipulation to name a few. These can be configured and tested before putting them to use.
  • Connectors: Hevo supports 100+ integrations to SaaS platforms, files, Data Warehouses, Databases, and BI tools. It supports various destinations including Google BigQuery, Amazon Redshift, Firebolt, Snowflake Data Warehouses; Amazon S3 Data Lakes; and MySQL, SQL Server, TokuDB, DynamoDB, PostgreSQL Databases to name a few.  
  • Real-Time Data Transfer: Hevo provides real-time data migration, so you can have analysis-ready data always.
  • 100% Complete & Accurate Data Transfer: Hevo’s robust infrastructure ensures reliable data transfer with zero data loss.
  • Scalable Infrastructure: Hevo has in-built integrations for 100+ sources (including 40+ free sources) that can help you scale your data infrastructure as required.
  • 24/7 Live Support: The Hevo team is available round the clock to extend exceptional support to you through chat, email, and support calls.
  • Schema Management: Hevo takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the destination schema.
  • Live Monitoring: Hevo allows you to monitor the data flow so you can check where your data is at a particular point in time.
Sign up here for a 14-day Free Trial!

Steps to Follow Before using BigQuery Python Client Library

There is a certain sequence of steps that you need to follow before using the Google BigQuery Python Client Library. They are as follows:

Step 1: Create a Cloud Platform Project

If you don’t have a pre-existing Cloud Platform Project in your Google Cloud Console then you have to create one before using Google BigQuery Python Client Library. Follow the steps given below to create a new Cloud Platform Project.

  • Sign In to the Google Cloud Console and click on the option Select a Project.
  • Click on the option New Project to create a new project.
  • Enter the name for your project and then click on Create.

Step 2: Enable Billing for your Cloud Platform Project

When you create a new Cloud Platform Project, you will be asked to choose which of your Cloud Billing accounts you would like to link to the project that you have created. If you only have one Cloud Billing account, then that account is automatically linked to your project. If you don’t have a Cloud Billing account, you’ll need to create one and turn on billing for your project before you can leverage features of the BigQuery Python Client Library. Follow the steps given below to create a new Cloud Billing account:

  • Sign In to the Manage Billing Accounts page in the Google Cloud Console.
  • Click on Create Account and enter the name for your Cloud Billing Account.
  • Depending on the configuration, you might be asked to fill in one of the following two fields: Organization or Country Name.
  • Click on Continue.

Step 3: Enable the Google Cloud BigQuery API

Google BigQuery API is enabled by default for all Cloud Platform Projects. You can use the following command to check whether Google Cloud BigQuery API is enabled for your project or not:

gcloud services list

If the API is not enabled, you can paste the following command in the Command Shell to enable it:

gcloud services enable bigquery.googleapis.com

Step 4: Set up Authentication

Google Cloud APIs support multiple authentication flows for different runtime environments. For the best developer experience, you can use the Google Cloud Client Libraries with Google Cloud APIs. They use the authentication libraries provided by Google to support a variety of authentication flows and runtime environments. You can follow the general steps given below to build an application using Google Cloud APIs before using the BigQuery Python Client Library:

  • Select the Google Cloud Client Library for the language you are most comfortable using.
  • Determine the correct authentication flow for the application you want to build.
  • Now you need to create the application credentials needed for your application.
  • Pass the application credentials to the Client Libraries when the application starts, ideally through Application Default Credentials (ADC).
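The lookup order that Application Default Credentials (ADC) follows can be sketched as below. This is an illustrative helper for Linux/macOS, not part of the official `google-auth` library; the client libraries perform an equivalent search internally:

```python
# Illustrative sketch (hypothetical helper, not part of the official
# google-auth library) of the order in which Application Default
# Credentials (ADC) are typically located on Linux/macOS:
#   1. the GOOGLE_APPLICATION_CREDENTIALS environment variable,
#   2. the gcloud CLI's well-known credentials file.
import os
from pathlib import Path
from typing import Optional

def find_adc_path() -> Optional[str]:
    """Return the credentials file path that a client library would use."""
    env_path = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if env_path:
        return env_path
    well_known = (
        Path.home() / ".config" / "gcloud" / "application_default_credentials.json"
    )
    return str(well_known) if well_known.exists() else None
```

In practice, you either export `GOOGLE_APPLICATION_CREDENTIALS` to point at a service account key file, or run `gcloud auth application-default login` to create the well-known file.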

Steps to Query Datasets using BigQuery Python Client Library

You can follow the steps given below to query datasets using Google BigQuery Python Client Library:

  • Step 1: The first step in connecting Google BigQuery to any Programming Language is to configure the required dependencies. Start by installing the BigQuery Python Client Library in a virtual environment created with virtualenv. virtualenv is a tool used to create isolated Python environments. With a virtual environment, it's possible to install the BigQuery Python Client Library without needing system install permissions and without clashing with the installed system dependencies. Use the following commands to install the BigQuery Python Client Library:

Note: Google BigQuery Python Library supports Python Versions 3.5 and later.

For Windows Operating System

pip install virtualenv
virtualenv <your-env>
<your-env>\Scripts\activate
<your-env>\Scripts\pip.exe install google-cloud-bigquery

For Mac/Linux

pip install virtualenv
virtualenv <your-env>
source <your-env>/bin/activate
<your-env>/bin/pip install google-cloud-bigquery
  • Step 2: Once you have successfully installed the BigQuery Python Client Library, you can use it to create a simple script that accesses data from one of the Public Datasets available in Google BigQuery. To do so, import the BigQuery module using the following command:
from google.cloud import bigquery
  • Step 3: Initialize the client by using the following command:
sample_client = bigquery.Client()
  • Step 4: Form the query as follows. The query accesses a Public Dataset in BigQuery that contains data about names in the USA and returns the rows associated with the state 'TX':
QUERY = """
    SELECT name
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    WHERE state = 'TX' LIMIT 100
"""
  • Step 5: Once you have written the query, it is time to execute it. A query execution in Google BigQuery is called a Job. Since query executions sometimes take longer than expected, they are modeled as jobs that can be polled for completion. Initiate the job as follows:
results = sample_client.query(QUERY)
  • Step 6: The final step requires you to print the results using a loop:
for row in results:
    print(row.name)

Once you follow all the steps in the correct sequence, you will be able to query datasets using BigQuery Python Client Library.
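The six steps above can be consolidated into a single script. This is a hedged sketch: it assumes the `google-cloud-bigquery` package is installed and Application Default Credentials are configured, and the `build_query` helper is purely illustrative (it is not part of the library):

```python
def build_query(state: str, limit: int = 100) -> str:
    """Build the usa_names query for a given two-letter state code."""
    return (
        "SELECT name "
        "FROM `bigquery-public-data.usa_names.usa_1910_2013` "
        f"WHERE state = '{state}' LIMIT {limit}"
    )

def main() -> None:
    # Imported here so build_query stays usable without the library installed.
    from google.cloud import bigquery

    client = bigquery.Client()                    # Step 3: picks up ADC
    query_job = client.query(build_query("TX"))   # Step 5: starts a query job
    for row in query_job.result():                # waits for the job to finish
        print(row.name)                           # Step 6: print each name

if __name__ == "__main__":
    main()
```

Note that `query_job.result()` blocks until the job completes and then returns an iterator over the result rows, which makes the job-based execution model explicit.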

Conclusion

This article provided you with a step-by-step guide to leverage the BigQuery Python Client Library to query Google BigQuery datasets using Python. You also learned about the points that you should consider before using the BigQuery Python Client Library. With your Data Warehouse, Google BigQuery, live and running, you'll need to extract data from multiple platforms to carry out your analysis. However, integrating and analyzing your data from a diverse set of data sources using custom ETL scripts can be challenging, and this is where Hevo Data comes into the picture.

Visit our Website to Explore Hevo

Hevo Data, a No-code Data Pipeline provides you with a consistent and reliable solution to manage data transfer between a variety of sources and a wide variety of Desired Destinations such as Google BigQuery, with a few clicks. Hevo Data with its strong integration with 100+ sources (including 40+ free sources) allows you to not only export data from your desired data sources & load it to the destination of your choice, but also transform & enrich your data to make it analysis-ready so that you can focus on your key business needs and perform insightful analysis using BI tools.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You can also have a look at our unbeatable pricing that will help you choose the right plan for your business needs!

Share your experience of learning about the Google BigQuery Python Client Library. Let us know in the comments section below!
