Migrating BigQuery to MySQL involves moving data from a cloud-based analytics platform to a relational database management system (RDBMS). Often, the purpose of such a migration is to leverage MySQL’s cost-effectiveness, customization, and scalability. After all, MySQL is a robust and familiar environment for relational database operations.

A BigQuery to MySQL migration can help you streamline data storage, enhance data accessibility, and support applications and use cases that require relational data structures. Let’s look into the details of migrating BigQuery to MySQL.

BigQuery: An Overview

BigQuery is Google’s serverless, fully managed, highly scalable data warehouse. As there is no infrastructure to manage, you don’t need a database administrator and can focus on uncovering meaningful insights from your data.

One of the key features of BigQuery is its separation of storage and compute, allowing you to scale each independently and pay only for what you use. Additionally, BigQuery stores data in a compressed, columnar format on Colossus, Google’s distributed storage system. This enables super-fast queries at petabyte scale by drawing on the processing power of Google’s infrastructure.

To access BigQuery, you can use the GCP Console, the command line tool bq, make calls to the BigQuery REST API, or use client libraries for languages such as Python, Java, or .NET.
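For example, here’s a minimal sketch using the official google-cloud-bigquery Python client. It assumes the library is installed (pip install google-cloud-bigquery) and that application default credentials are already configured; the public dataset queried is just an illustration.

from google.cloud import bigquery

# The client picks up the project and credentials from the environment.
client = bigquery.Client()

# Run a query against a public dataset and print the result rows.
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query).result():
    print(row["name"], row["total"])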


MySQL: An Overview

MySQL is an open-source RDBMS you can utilize to store, manage, and retrieve structured data efficiently. It is widely used in applications ranging from small projects and large-scale websites to enterprise-level solutions.

For managing and manipulating relational databases, MySQL utilizes SQL, which provides a standardized syntax for querying, updating, and managing database data. You can employ MySQL to store and analyze large volumes of data for business intelligence, reporting, and data analytics.

MySQL offers a rich ecosystem of extensions, plugins, and storage engines, such as InnoDB, MyISAM, and NDB Cluster, to extend its functionality and meet specific requirements. Moreover, it can seamlessly integrate with popular programming languages, frameworks, and platforms.
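As a quick illustration of that integration, here’s a minimal sketch of querying MySQL from Python with the mysql-connector-python driver; the host, credentials, database, and table names are placeholder assumptions:

import mysql.connector

# Placeholder connection details; substitute your own server and credentials.
conn = mysql.connector.connect(
    host="localhost",
    user="app_user",
    password="secret",
    database="analytics",
)
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM orders")  # 'orders' is an assumed table
print("row count:", cur.fetchone()[0])
cur.close()
conn.close()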

How to Connect BigQuery to MySQL

To copy data from BigQuery to MySQL, you can employ Hevo Data or manual CSV export/import.

Method 1: Using Hevo Data to Create a BigQuery MySQL Pipeline

If you’re looking for a cost-effective solution to automate data pipelines that are flexible to your needs, Hevo Data is an ideal choice. It is the only real-time ELT data pipeline platform that doesn’t require you to write a single line of code.


Here are some key features of Hevo:

  • Data Transformation: You can choose between powerful Python-based and drag-and-drop transformations to cleanse and prepare your data for loading into the destination.
  • Automated Schema Mapping: Hevo automatically detects the incoming data format and replicates it to the destination schema. This helps eliminate the tedious task of schema management. Additionally, you can choose between Full and Incremental Mappings to suit your data transformation requirements.
  • Incremental Data Load: By using Hevo, you can transfer only the data that has been modified in real time, ensuring efficient bandwidth utilization at both ends.
  • Built-in Connectors: Hevo supports over 150 data sources, including databases, SaaS platforms, files, and more. For custom sources, there are native webhooks and REST API connectors.

Let’s look into the process of setting up a BigQuery to MySQL ETL using Hevo Data. Before you get started with this process, here’s a list of prerequisites:

  • An active billing account linked to your GCP project.
  • Access to a BigQuery project with at least one table in one or more datasets.
  • Dataset-level permissions for the BigQuery Job User, BigQuery Data Viewer, and BigQuery Data Editor roles.
  • Access to a GCS bucket in the BigQuery location containing your datasets.
  • Hevo Team Administrator, Team Collaborator, or Pipeline Administrator role.

Step 1: Configure Google BigQuery as a Source

Here are the steps to configure BigQuery as the source end of the pipeline:

  1. In the Navigation Bar, click PIPELINES, followed by + CREATE in the Pipelines List View.
  2. Select Google BigQuery on the Select Source Type page.
  3. Connect to the BigQuery data warehouse from the Configure your BigQuery Account page. Here are the different ways you can do this:
  • Connect with a User Account in one of the following ways:
    • Select a previously configured account and click CONTINUE.
    • Click + ADD BIGQUERY ACCOUNT, sign in to your BigQuery account, and click Allow to authorize Hevo to access your data.
  • Connect with a Service Account in one of the following ways:
    • Select a previously configured account and click CONTINUE.
    • Attach the Service Account Key JSON file created in GCP and click CONFIGURE BIGQUERY ACCOUNT.
  4. When redirected to the Configure your BigQuery Source page, specify the mandatory details:
  • Pipeline Name: A unique pipeline name that doesn’t exceed 255 characters.
  • Authorized User/Service Account: The email address connected to your Google BigQuery account. This is a pre-filled value.
  • Project ID: The project ID that you want to create the Pipeline for. You can select the required project ID from the drop-down.
  • Select Dataset ID: Select datasets containing the data tables.
  • Advanced Settings: If you enable Include New Tables in the Pipeline, Hevo will automatically ingest any data from tables created after the Pipeline creation.

However, if it is disabled, new tables will be listed in the Pipelines Detailed View in the Skipped state; you can manually include the ones you want and load historical data.

  5. Click the TEST & CONTINUE button.

For more details about the configuration, read Hevo’s BigQuery source connector documentation.

Step 2: Configure MySQL as a Destination

To configure MySQL as the destination end of the pipeline, start by ensuring the following prerequisites:

  • A running MySQL server. Enter the following command to check if the server is running:
service mysql status

Ensure the status shows as Active: active (running).

  • Use the following command to check if the MySQL version is 5.5 or higher.
mysql -V
  • Grant the necessary privileges to the database user.
  • In Hevo, any administrator role except Billing Administrator, or the Team Collaborator role, to create the destination.
1. Whitelist Hevo’s IP Addresses

Permit Hevo to connect to your MySQL database by whitelisting Hevo IP addresses for your region in the MySQL configuration file.

  • Navigate to the MySQL configuration directory and open the server configuration file in edit mode. Here’s an example command:
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
  • Update the bind-address variable in the [mysqld] section in one of the following ways:
    • Allow all IPv4 addresses to connect to your MySQL destination by specifying:
bind-address = 0.0.0.0
    • Alternatively, allow only your region’s specific Hevo IP addresses, for example:
bind-address = 10.2.7.152
  • Save the file. For the changes to be effective, restart the MySQL server.
2. Create a Database User with Privileges
  • Access the MySQL server as the root user with an SQL client tool. Here’s an example command:
mysql -u root -p
  • Create the database user, replacing the placeholder values with your own:
CREATE USER '<user_name>'@'%' IDENTIFIED BY '<strong_password>';
  • Grant privileges to the database user with the following command (the password is already set at CREATE USER, so no IDENTIFIED BY clause is needed; that clause was also removed from GRANT in MySQL 8.0):
GRANT ALTER, CREATE, CREATE TEMPORARY TABLES, DELETE, DROP, INSERT, SELECT, UPDATE ON <database_name>.* TO '<user_name>'@'%';
3. Configure MySQL Connection Settings

Here are the steps to configure MySQL as the destination in Hevo.

  1. Start by clicking DESTINATIONS in the Navigation Bar, followed by + CREATE in the Destinations List View.
  2. Select MySQL on the Add Destination page.
  3. Specify the following details in the Configure your MySQL Destination page.
  • Destination Name: A unique destination name not exceeding 255 characters.
  • Database Host: The MySQL host’s DNS name or IP address. Alternatively, connect to a local database; see Connecting to a Local Database for the steps.
  • Database Port: The port on which your MySQL server listens for connections. Default value: 3306.
  • Database User: The database user with a non-administrative role you created in the MySQL database.
  • Database Password: The database user password.
  • Database Name: The name of the destination database into which the data is loaded.
  • Additional Settings:
    • Connect through SSH: If enabled, your MySQL database host’s connection to Hevo will use an SSH tunnel instead of a direct connection.

However, if disabled, you must whitelist Hevo’s IP addresses to allow the Hevo-MySQL host connection.

  • Use SSL: If you’re using an SSL-encrypted connection, enable this option. Specify:
    • CA File: The file that contains the SSL server certificate authority.
    • Client Certificate: The client’s public key certificate file.
    • Client Key: The client’s private key file.
  • Sanitize Table/Column Names: If enabled, all spaces and non-alphanumeric characters in a table or column name are replaced with an underscore.
  4. Click TEST CONNECTION > SAVE & CONTINUE.

For further information about the configuration process, read Hevo’s MySQL destination connector configuration.

If you’re looking to analyze data in MySQL, read MySQL Analytics Tools – A Quick Guide.


Method 2: Using CSV Export/Import to Load Data from BigQuery to MySQL

This BigQuery MySQL tutorial involves exporting data from BigQuery, performing data transformations to suit the schema and constraints of MySQL, and then importing it into MySQL. Let’s look into the details of this method.

Step 1: Export Data from BigQuery

You can export data from BigQuery to Google Cloud Storage in various formats; CSV and JSON are the most commonly used for compatibility with MySQL. In this tutorial, we’ll export in CSV format.

To export BigQuery data in CSV format, here’s a sample statement:

EXPORT DATA
OPTIONS(
  uri='gs://your-bucket-name/your-file-name-*.csv',
  format='CSV',
  overwrite=true
) AS
SELECT * FROM `your-dataset.your-table`;

Ensure you replace the placeholders as follows:

  • gs://your-bucket-name/your-file-name-*.csv is the destination path in GCS where you intend to store the exported CSV files. The wildcard character (*) lets BigQuery split the export into multiple files.
  • your-dataset.your-table specifies the table to export from BigQuery.
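To confirm the export completed, you can list the shards the wildcard produced. Here’s a minimal sketch using the google-cloud-storage Python client, reusing the placeholder bucket and file prefix from above:

from google.cloud import storage

client = storage.Client()
# List every exported shard and its size in bytes.
for blob in client.list_blobs("your-bucket-name", prefix="your-file-name-"):
    print(blob.name, blob.size)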

Step 2: Prepare the Data for MySQL

Before loading the data into MySQL, some transformation might be necessary depending on the nature of your data. This could include modifying date formats, restructuring JSON data, or data type conversion. Such transformations ensure alignment with MySQL standards.

  • Restructuring JSON Data: For data exported in JSON format, you may have to flatten nested structures or arrays to fit MySQL’s relational model.
  • Data Type Conversion: Data types in the exported files must be compatible with MySQL. For example, you may have to convert timestamps to a MySQL-supported format, as shown in the sketch below.
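As an illustration of such a transformation, here’s a minimal Python sketch that rewrites ISO-8601 timestamps from an exported CSV into MySQL’s DATETIME format. The file names and the created_at column are assumptions, and it presumes the export included a header row (header=true in the EXPORT DATA options):

import csv
from datetime import datetime

with open("export.csv", newline="") as src, \
     open("mysql_ready.csv", "w", newline="") as dst:
    reader, writer = csv.reader(src), csv.writer(dst)
    header = next(reader)                # assumes a header row in the export
    writer.writerow(header)
    ts_col = header.index("created_at")  # assumed timestamp column name
    for row in reader:
        # e.g. 2024-01-15T09:30:00Z -> 2024-01-15 09:30:00
        ts = datetime.fromisoformat(row[ts_col].replace("Z", "+00:00"))
        row[ts_col] = ts.strftime("%Y-%m-%d %H:%M:%S")
        writer.writerow(row)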

Step 3: Import Data into MySQL

Use a MySQL client or MySQL’s LOAD DATA INFILE command to import the BigQuery-exported data into MySQL. Note that LOAD DATA INFILE reads files from the server host and is subject to the secure_file_priv setting; to load a file that resides on the client machine, use LOAD DATA LOCAL INFILE instead (which requires local_infile to be enabled).

Here’s the sample code to use LOAD DATA INFILE for importing CSV files:

LOAD DATA INFILE 'path/to/your-file-name.csv'
INTO TABLE your_mysql_table
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';

Replace the placeholders as follows:

  • path/to/your-file-name.csv is the path to your CSV file containing data to load into the MySQL table.
  • your_mysql_table is the name of the MySQL table where you will load data.
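If LOAD DATA INFILE is blocked on your server (for example, by the secure_file_priv setting), a fallback is to stream the CSV through a client connection with batched INSERTs. Here’s a hedged sketch using mysql-connector-python; the connection details, file, table, and column names are placeholders:

import csv
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app_user",
                               password="secret", database="analytics")
cur = conn.cursor()
with open("your-file-name.csv", newline="") as f:
    rows = list(csv.reader(f))  # drop the first row here if your file has a header
cur.executemany(
    "INSERT INTO your_mysql_table (col1, col2, col3) VALUES (%s, %s, %s)",
    rows,
)
conn.commit()
cur.close()
conn.close()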

This completes the process of loading BigQuery to MySQL using the CSV export/import method.

Limitations of CSV Export/Import for BigQuery to MySQL Migration

  • Effort-intensive: The method requires manual effort to export data from BigQuery, perform any necessary transformations, and import it into MySQL. This makes it time-consuming and error-prone, especially when dealing with large datasets.
  • Lack of Real-time Integration: Owing to the time delays involved in the CSV file export and import, this method lacks real-time integration capabilities. This, in turn, will impact decision-making or real-time analytics.
  • Data Integrity: Since data transformations may be required to align the exported BigQuery data with MySQL standards, there is an increased risk of data integrity issues. There are possibilities of data loss or inaccuracies during the process.

Use Cases of BigQuery to MySQL Integration

  • Cost Optimization: While BigQuery offers a pay-as-you-go solution, you may incur increased storage costs for storing massive data volumes over time. Migrating less critical data to MySQL can help optimize storage costs while maintaining accessibility to essential data.
  • Outstanding Transactional Support: MySQL is among the fastest transactional databases available today. Its ACID-compliant transaction support and strong data integrity guarantees make it an attractive choice for a destination database.
  • Scalability on Demand: MySQL maintains a small footprint, even when deeply embedded in applications or running databases that store terabytes of data. It also offers customized solutions to e-commerce enterprises with specific database requirements.
  • Compatibility: MySQL is compatible with several programming languages and is suitable for a wide range of applications. It offers connectors and drivers for languages like Python, Java, PHP, and more.
  • Customization and Extensibility: The open-source nature of MySQL enables developers to extend the functionality by creating custom storage engines, procedures, and functions. You can use this flexibility to tailor the database to your specific requirements.

Learn how to Connect BigQuery to SQL Server.

Conclusion

The many benefits of MySQL, including cost optimization, widespread compatibility, customization, and outstanding transactional support, provide strong reasons to consider a BigQuery to MySQL integration.

To perform this migration, you can opt for the manual CSV export/import technique that involves exporting CSV or JSON data from BigQuery, performing necessary transformations, and loading it into MySQL. However, this method is associated with limitations such as being effort-intensive, time-consuming, and lacking real-time integration capabilities. 

You can overcome such limitations by using Hevo Data for the integration instead. Hevo’s built-in connectors, automated schema mapping, real-time integration, and data transformations allow for a seamless BigQuery to MySQL integration.

Interested in the reverse journey? Here’s an interesting read on how to migrate MySQL to BigQuery.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite firsthand. Also, check out our unbeatable pricing to choose the best plan for your organization.

Share your experience of BigQuery to MySQL integration in the comments section below!

Frequently Asked Questions (FAQs)

  1. For a BigQuery to MySQL connection, can I pull data into a local CSV file from a Google Open dataset using BigQuery and Python API?

Yes. You can run a BigQuery extract job (via the jobs.insert API method) to export the data to a GCS bucket. Then, you can use the GCS objects.get method to download the file locally. Consider calling tables.get to fetch the original BigQuery table schema and use it to build the schema for the MySQL import. A sketch of this flow follows.
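Here’s a hedged sketch of that flow using the Python client libraries rather than raw REST calls; the project, dataset, table, and bucket names are placeholders:

from google.cloud import bigquery, storage

bq = bigquery.Client()
TABLE = "your-project.your_dataset.your_table"

# Inspect the original schema (wraps tables.get) to guide the MySQL DDL.
for field in bq.get_table(TABLE).schema:
    print(field.name, field.field_type)

# Run an extract job (wraps jobs.insert) to export the table to GCS as CSV.
bq.extract_table(TABLE, "gs://your-bucket/your-table-*.csv").result()

# Download the first shard locally (wraps objects.get).
storage.Client().bucket("your-bucket") \
    .blob("your-table-000000000000.csv") \
    .download_to_filename("your-table.csv")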

2. How do I change my BigQuery to MySQL pipeline to only import the newly added data?

To import only new data with a BigQuery to MySQL pipeline, query MAX(<date_column>) on the MySQL table to determine the most recent data point already loaded. Then, stage only the newer rows in a BigQuery staging table, minimizing the data volume that must be exported and subsequently imported. Finally, export the staging table as CSV files and import them into Cloud SQL using the Cloud SQL Admin API. The first two steps are sketched below.
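Here’s a hedged sketch of those first two steps: reading the high-water mark from MySQL and staging only the newer rows in BigQuery. All table, column, and connection details are placeholder assumptions:

import mysql.connector
from google.cloud import bigquery

# 1. Find the most recent data point already loaded into MySQL.
conn = mysql.connector.connect(host="localhost", user="app_user",
                               password="secret", database="analytics")
cur = conn.cursor()
cur.execute("SELECT MAX(created_at) FROM orders")
high_water_mark = cur.fetchone()[0]  # None if the table is empty

# 2. Stage only the newer rows in a BigQuery staging table.
bq = bigquery.Client()
job_config = bigquery.QueryJobConfig(
    destination="your-project.your_dataset.orders_staging",
    write_disposition="WRITE_TRUNCATE",
    query_parameters=[
        bigquery.ScalarQueryParameter("hwm", "TIMESTAMP", high_water_mark)
    ],
)
bq.query(
    "SELECT * FROM `your-project.your_dataset.orders` WHERE created_at > @hwm",
    job_config=job_config,
).result()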

3. Can I change my Hevo user account to a service account after a data pipeline is created?

No, you cannot change your user account to a service account or vice versa after a data pipeline is created.

Freelance Technical Content Writer, Hevo Data

Suchitra's profound enthusiasm for data science and passion for writing drive her to produce high-quality content on software architecture and data integration.
