Working with Google BigQuery Avro: 3 Easy Steps

Avro, Data Warehouse, ETL Tutorials, Google BigQuery • March 11th, 2022

When you load Google BigQuery Avro data, you have the option of creating a new table or partition or appending to or overwriting an existing table or partition. When you import your data into BigQuery, it is transformed to columnar format for Capacitor (BigQuery’s storage format).

When you load data from Cloud Storage into a BigQuery table, the dataset containing the table must be located in the same regional or multi-regional location as the Cloud Storage bucket.

In this blog, you’ll learn how to start working with Google BigQuery Avro, along with an overview of the platform’s core features.


What is Google BigQuery?

BigQuery Avro - Google BigQuery logo

Google BigQuery was designed to be a Versatile, Quick, and Powerful Data Warehouse that is strongly connected with the other Google Platform services. It has a Serverless Model, user-based pricing, and is reasonably priced. Google BigQuery’s Analytics and Data Warehouse platform leverages a built-in Query Engine on top of the Serverless Model to process terabytes of data in seconds.

You can execute analytics at scale with Google BigQuery at a 26% to 34% lower three-year TCO than comparable Cloud Data Warehouse solutions. Because there is no infrastructure to manage or set up, you can focus on generating relevant insights with Standard SQL and flexible pricing models such as Flat-rate and On-demand.

Google BigQuery’s column-based Storage solution boosts the Data Warehouse’s speed and its capacity to manage enormous volumes of data. Because Column-based Storage lets you process only the columns of interest, you get answers faster and make better use of resources. A variety of Business Intelligence solutions that provide Standard SQL access can be linked with Google BigQuery. As a result, storing data by column is particularly advantageous for analytical databases.

Key Features of Google BigQuery

Here are a few significant characteristics of Google BigQuery:

  • Scalable Architecture: Google BigQuery offers a scalable design and a petabyte-scale system that customers can scale up and down based on load.
  • Processing time is reduced: Google BigQuery can process petabytes of data in less time than many traditional systems due to its scalable architecture. Users can utilize Google BigQuery to analyze millions of rows of data without worrying about scalability.
  • Completely Managed: Because Google BigQuery is a product of Google Cloud Platform, it offers completely managed and serverless solutions.
  • Security: Google BigQuery provides the highest level of security, safeguarding data both at rest and in transit.
  • Ingestion of Real-Time Data: Google BigQuery has become well-known across all IoT and Transaction platforms due to its ability to execute real-time data analysis.
  • Fault Tolerance: You can replicate data across multiple zones or regions using Google BigQuery. If a region or zone goes down, your data remains available.
  • Auto-Backup: Google BigQuery automatically offers backup and recovery solutions to secure data.

What is Apache Avro?

BigQuery Avro - Apache Avro logo

Apache Avro is a row-oriented remote procedure call (RPC) and data serialization framework created as part of the Apache Hadoop project. It defines data types and protocols using JSON and serializes data in a compact binary format.

Its principal application is in Apache Hadoop, where it can serve as a serialization format for persistent data as well as a wire format for communication between Hadoop nodes and from client programs to Hadoop services. Apache Avro structures the data that is being encoded using a schema. It features two types of schema languages: one for human editing (Avro IDL) and another based on JSON that is more machine-readable.
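As an illustration of the JSON schema language, here is a minimal Avro schema for a hypothetical user record, parsed with Python’s standard json module. The record and field names are invented for this example.

```python
import json

# An illustrative Avro schema: a record with three fields.
# The record name and field names are hypothetical, not from any real dataset.
user_schema = json.loads("""
{
  "type": "record",
  "name": "User",
  "fields": [
    {"name": "id",    "type": "long"},
    {"name": "name",  "type": "string"},
    {"name": "email", "type": ["null", "string"], "default": null}
  ]
}
""")

# A union with "null" (as in the email field) is how Avro expresses
# an optional, nullable value.
field_names = [f["name"] for f in user_schema["fields"]]
print(field_names)  # ['id', 'name', 'email']
```

Because the schema is itself JSON, any language with a JSON library can read and validate it before any Avro-specific tooling gets involved.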

An Apache Avro Object Container File consists of a file header followed by one or more data blocks.
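The file header is easy to recognize: it begins with the four magic bytes "Obj" plus a version byte of 1, followed by a metadata map (which carries the JSON schema) and a 16-byte sync marker. A minimal sketch that only checks the magic bytes:

```python
# Minimal sketch: check whether a byte stream begins with the Avro
# object container file magic, b"Obj" followed by the version byte 1.
AVRO_MAGIC = b"Obj\x01"

def looks_like_avro_container(data: bytes) -> bool:
    """Return True if the first four bytes match the Avro file magic."""
    return data[:4] == AVRO_MAGIC

# A real header would continue with a metadata map (including the JSON
# schema under the key "avro.schema") and a 16-byte sync marker; data
# blocks follow the header.
print(looks_like_avro_container(b"Obj\x01rest-of-header"))  # True
print(looks_like_avro_container(b"PAR1"))                   # False
```

This kind of sniffing is how tools distinguish Avro files from other formats (the second example is the Parquet magic) without parsing the whole file.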

Key Features of Apache Avro

Here are some features to keep an eye on:

  • Avro Schemas Provide Robustness for the Future: Avro Schemas future-proof your data and make it more robust by supporting past (older Consumers), current, and future use cases. Essentially, the schema, with its support for evolution, is what makes the data robust for streaming architectures.
  • JSON Resources: Because Avro schemas are defined in JSON, Avro can be implemented in any language that already has a JSON library.
  • Rich Data Structures: Apache Avro offers complex data structures in addition to a compact, fast binary data format. Thanks to these rich data structures, a single record can contain an array, an enumerated type, and a sub-record.
  • Language Support: There are numerous language implementations available, including C, C++, C#, Java, PHP, Python, and Ruby.
  • Transparent Compression: You don’t need to compress Avro files manually because the framework applies size optimizations automatically.
  • Dynamic Typing: Serialization and deserialization occur without the need for code generation. As a result, dynamic typing complements code generation, which is available in Avro as an optional optimization for statically typed languages.

Simplify Google BigQuery ETL & Analysis with Hevo’s No-code Data Pipeline

A fully managed No-code Data Pipeline platform like Hevo Data helps you integrate and load data from 100+ Different Sources (including 40+ Free Sources) to a Data Warehouse such as Google BigQuery or Destination of your choice in real-time in an effortless manner. Hevo with its minimal learning curve can be set up in just a few minutes allowing the users to load data without having to compromise performance. Its strong integration with umpteenth sources allows users to bring in data of different kinds in a smooth fashion without having to code a single line. 

Get Started with Hevo for Free

Check out some of the cool features of Hevo:

  • Completely Automated: The Hevo platform can be set up in just a few minutes and requires minimal maintenance.
  • Transformations: Hevo provides preload transformations through Python code. It also allows you to run transformation code for each event in the Data Pipelines you set up. Hevo also offers drag and drop transformations like Date and Control Functions, JSON, and Event Manipulation to name a few. These can be configured and tested before putting them to use.
  • Connectors: Hevo supports 100+ integrations to SaaS platforms, files, Databases, analytics, and BI tools. It supports various destinations including Google BigQuery, Amazon Redshift, and Snowflake Data Warehouses; Amazon S3 Data Lakes; and MySQL, SQL Server, TokuDB, DynamoDB, and PostgreSQL Databases, to name a few.
  • Real-Time Data Transfer: Hevo provides real-time data migration, so you can have analysis-ready data always.
  • 100% Complete & Accurate Data Transfer: Hevo’s robust infrastructure ensures reliable data transfer with zero data loss.
  • Scalable Infrastructure: Hevo has in-built integrations for 100+ sources (including 40+ free sources) that can help you scale your data infrastructure as required.
  • 24/7 Live Support: The Hevo team is available round the clock to extend exceptional support to you through chat, email, and support calls.
  • Schema Management: Hevo takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the destination schema.
  • Live Monitoring: Hevo allows you to monitor the data flow so you can check where your data is at a particular point in time.
Sign up here for a 14-Day Free Trial!

Loading Google BigQuery Avro Data

Before you start with Google BigQuery Avro, check these prerequisites:

  • Grant Identity and Access Management (IAM) roles that give users the access they need to complete the tasks in this document.
  • IAM permissions are required to perform a load operation and load data into BigQuery tables and partitions. You will also need IAM permissions to access the bucket that contains your data if you are loading data from Cloud Storage.

Here are the 3 steps to get started with Google BigQuery Avro:

Google BigQuery Avro Step 1: Data loading permissions in Google BigQuery

The following IAM permissions are required to load data into a new BigQuery table or partition, or to append or overwrite data in an existing table or partition:

  • bigquery.tables.create
  • bigquery.tables.updateData
  • bigquery.tables.update
  • bigquery.jobs.create

You must have the following IAM permissions to load data from a Cloud Storage bucket:

  • storage.objects.get
  • storage.objects.list (required if you are using a URI wildcard)

The predefined IAM role roles/storage.objectViewer includes all of the permissions required to load data from a Cloud Storage bucket.
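The permission checks above can be sketched as a small helper. This is an illustrative stand-in, not a Google API; only the permission names come from the lists above.

```python
# Illustrative sketch (not a Google API): check whether a set of granted
# IAM permissions covers everything a BigQuery load job needs.
REQUIRED_FOR_LOAD = {
    "bigquery.tables.create",
    "bigquery.tables.updateData",
    "bigquery.tables.update",
    "bigquery.jobs.create",
}
REQUIRED_FOR_GCS_SOURCE = {
    "storage.objects.get",
    "storage.objects.list",  # only needed when using a URI wildcard
}

def missing_permissions(granted: set, from_gcs: bool = True) -> set:
    """Return the load permissions that `granted` does not cover."""
    required = REQUIRED_FOR_LOAD | (REQUIRED_FOR_GCS_SOURCE if from_gcs else set())
    return required - granted

# Simulate a principal that has roles/storage.objectViewer on the bucket
# but still lacks bigquery.jobs.create on the project.
granted = (REQUIRED_FOR_LOAD - {"bigquery.jobs.create"}) | REQUIRED_FOR_GCS_SOURCE
print(missing_permissions(granted))  # {'bigquery.jobs.create'}
```

In practice you would grant predefined roles (such as roles/bigquery.dataEditor plus roles/storage.objectViewer) rather than individual permissions, but the coverage logic is the same.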

Google BigQuery Avro Step 2: Schemas for Avro

The table schema is automatically retrieved from the source data when you load Avro files into a new Google BigQuery table. When Google BigQuery obtains the schema from the source data, it uses the file that is last in alphabetical order.
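The “last file in alphabetical order” rule comes down to plain string sorting of the source URIs. The bucket and file names below are hypothetical:

```python
# BigQuery reads the table schema from the source file that sorts last
# alphabetically. These URIs are hypothetical examples.
uris = [
    "gs://my-bucket/exports/part-0001.avro",
    "gs://my-bucket/exports/part-0010.avro",
    "gs://my-bucket/exports/part-0002.avro",
]

schema_source = sorted(uris)[-1]
print(schema_source)  # gs://my-bucket/exports/part-0010.avro
```

If your Avro files have diverging schemas, this rule determines which one wins, so it pays to keep schemas consistent across a load.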

Google BigQuery Avro Step 3: Loading Avro Data into a New Table

Avro data can be loaded into a new table in the following ways:

  • Using the Cloud Console.
  • Using the bq command-line tool’s bq load command.
  • Calling the jobs.insert API method and configuring a load job.
  • Using the client libraries.
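For the jobs.insert route, the load configuration inside the job resource looks roughly like this. The sketch only assembles the request body; it does not call the API, and the project, dataset, table, and bucket names are placeholders:

```python
import json

# Rough shape of the job resource body passed to jobs.insert when loading
# Avro from Cloud Storage. The project, dataset, table, and bucket names
# below are placeholders, not real resources.
load_job = {
    "configuration": {
        "load": {
            "sourceUris": ["gs://my-bucket/exports/*.avro"],
            "sourceFormat": "AVRO",
            "destinationTable": {
                "projectId": "my-project",
                "datasetId": "my_dataset",
                "tableId": "my_table",
            },
            # "Write if empty" in the Cloud Console corresponds to
            # WRITE_EMPTY here.
            "writeDisposition": "WRITE_EMPTY",
        }
    }
}

# Serialized, this is the JSON body of the REST call.
body = json.dumps(load_job)
```

The bq tool and the client libraries build an equivalent configuration for you; note that no schema is supplied, because Avro files carry their own.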

To import Google BigQuery Avro data from Cloud Storage into a new BigQuery table, follow these steps:

  • Open the BigQuery page in the Cloud Console.
  • In the Explorer panel, expand your project and select a dataset.
  • Expand the Actions option and click Open.
  • In the details panel, click Create table.
  • On the Create table page, in the Source section, choose Google Cloud Storage as the source for your table.
BigQuery Avro - Google BigQuery Create Table

In the Destination section of the Create table page:

  • Select the suitable dataset for the Dataset name.
BigQuery Avro - Dataset name
  • Check that the Table type is set to Native table.
  • Enter the name of the table you’re creating in BigQuery in the Table name field.

There is no need to take any action in the Schema section: in Avro files, the schema is self-describing.

To require users to include a WHERE clause that specifies the partitions to query, check the Require partition filter box in the Partitioning filter section. Requiring a partition filter can reduce cost and improve performance.

  • To cluster the table, enter one to four field names in the Clustering order box.
  • Navigate to the Advanced tab.
  • For Write preference, leave Write if empty selected. This option creates a new table and loads your data into it.
  • To use a Cloud Key Management Service key for encryption, select Customer-managed key. If you do not, Google BigQuery encrypts the data at rest with a Google-managed key.
  • Click the Create table button.

Google BigQuery Avro Conversions

Apache Avro translates data formats to the Google BigQuery data types listed below:

BigQuery Avro - Avro Conversions

For most types, BigQuery ignores the logicalType attribute and uses the underlying Avro type instead. To convert Avro logical types to their corresponding BigQuery data types, set the --use_avro_logical_types flag to true in the bq command-line tool, or set the useAvroLogicalTypes property in the job resource when you call the jobs.insert method to create a load job.
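As a sketch, enabling the conversion in a jobs.insert load configuration looks like this (the resource names are placeholders; with the bq tool, the equivalent is passing --use_avro_logical_types):

```python
# Sketch of a jobs.insert load configuration with Avro logical-type
# conversion enabled. Project, dataset, table, and bucket names are
# placeholders, not real resources.
load_config = {
    "sourceUris": ["gs://my-bucket/exports/*.avro"],
    "sourceFormat": "AVRO",
    # Convert logical types such as timestamp-millis to TIMESTAMP instead
    # of falling back to the underlying Avro type (e.g. long -> INTEGER).
    "useAvroLogicalTypes": True,
    "destinationTable": {
        "projectId": "my-project",
        "datasetId": "my_dataset",
        "tableId": "my_table",
    },
}
print(load_config["useAvroLogicalTypes"])  # True
```

Without this property, a timestamp stored as an Avro long would land in BigQuery as a plain INTEGER column.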

Benefits of Google BigQuery Avro Data

  • The Google BigQuery Avro binary data format is compact and fast.
  • A container file is used to store persistent data.
  • Google BigQuery Avro Integration with dynamic languages is simple. It is not necessary to generate code to read or write data files or to use or implement RPC protocols.
  • Code generation is an optional optimization that is only useful for statically typed languages.

Conclusion

AVRO is the quickest! Use AVRO if you need to import your files into BigQuery as quickly as possible. However, if you’re generating your files outside of GCP (or need to keep a duplicate of the files on GCS), consider using PARQUET because of its substantially smaller file sizes and relatively fast load speeds.

Hevo Data, a No-code Data Pipeline provides you with a consistent and reliable solution to manage data transfer between a variety of sources and a wide variety of Desired Destinations such as Google BigQuery with a few clicks.

Visit our Website to Explore Hevo

Hevo Data with its strong integration with 100+ Data Sources (including 40+ Free Sources) allows you to not only export data from your desired data sources & load it to the destination of your choice such as Google BigQuery, but also transform & enrich your data to make it analysis-ready so that you can focus on your key business needs and perform insightful analysis using BI tools. 

Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You may also have a look at the amazing pricing, which will assist you in selecting the best plan for your requirements.

Share your experience of Working with Google BigQuery Avro in the comment section below! We would love to hear your thoughts.
