Today’s business landscape is becoming increasingly dependent on data. You need to harness the right tools to simplify the data in your databases and unlock its potential. 

Migrating data from BigQuery to SQL Server is a powerful data integration method. SQL Server gives you control over stored procedures, query optimization, and user-defined functions, enabling deeper analysis of your BigQuery data. This article will give you insights on migrating your data from BigQuery to SQL Server. 

Let’s look at some methods to load data from BigQuery to SQL Server. 

Overview of BigQuery

BigQuery is a serverless, cloud-based data warehouse. It assists you in storing, managing, and analyzing your data through machine learning, geospatial analysis, and business intelligence features.

The platform supports data ingestion from other Google services such as Google Sheets, Analytics, and Cloud Storage. You can process data and generate answers using SQL queries to obtain quick and logical insights.
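To make this concrete, here is an illustrative sketch of issuing such a SQL query programmatically with the official google-cloud-bigquery Python client. The package, credentials setup, and the table/column names are assumptions, not part of this tutorial's required setup:

```python
def build_count_query(table: str, column: str, limit: int = 5) -> str:
    """Build a simple GROUP BY/COUNT aggregation query for a BigQuery table."""
    return (
        f"SELECT {column}, COUNT(*) AS cnt FROM `{table}` "
        f"GROUP BY {column} ORDER BY cnt DESC LIMIT {limit}"
    )

def run_count_query(table: str, column: str, limit: int = 5):
    # Imported lazily so the query builder above works without the package.
    # Assumes google-cloud-bigquery is installed and application-default
    # credentials (GOOGLE_APPLICATION_CREDENTIALS) are configured.
    from google.cloud import bigquery

    client = bigquery.Client()
    rows = client.query(build_count_query(table, column, limit)).result()
    return [dict(row) for row in rows]
```

For example, `run_count_query("my-project.my_dataset.events", "event_type")` would return the five most frequent event types, assuming such a table exists.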


Overview of SQL Server

SQL Server is a database system that stores structured, semi-structured, and unstructured data. It supports languages like Python and helps you extract data from different sources, sync it, and maintain consistency. The database provides controlled access to the data, making SQL Server more secure and ensuring regulatory compliance.
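For orientation, connecting to SQL Server from Python is commonly done through an ODBC connection string. The sketch below is a minimal illustration using the pyodbc package; the driver version, host, database, and credentials are placeholders you would replace with your own:

```python
def build_mssql_conn_str(host: str, database: str, user: str, password: str,
                         port: int = 1433) -> str:
    """Assemble an ODBC connection string for SQL Server (Driver 17 assumed)."""
    return (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={host},{port};DATABASE={database};UID={user};PWD={password}"
    )

def fetch_one(host: str, database: str, user: str, password: str, sql: str):
    # Imported lazily; assumes the pyodbc package and an ODBC driver are installed.
    import pyodbc

    with pyodbc.connect(build_mssql_conn_str(host, database, user, password)) as conn:
        return conn.cursor().execute(sql).fetchone()
```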

If you are looking for a BigQuery SQL Server tutorial to migrate data, read ahead and learn about methods that will help you in the process.

Method 1: Migrating Data from BigQuery to SQL Server Using Hevo

Hevo is a real-time ELT data integration platform that provides you the flexibility to set up your data warehouse or database for analysis with cost-effective and no-code pipelines. This multi-tenant platform allows you to load data, perform data integration, and prepare your data for analysis. 

Benefits of Using Hevo 

  • Data Transformation: Hevo’s data integration features, including drag-and-drop transformations, let you shape your data and load it smoothly through the BigQuery SQL Server pipeline. 
  • Auto Schema Mapping: When performing BigQuery to SQL Server integration, Hevo maps the schema of the data automatically to ensure data compatibility. This feature greatly reduces human intervention.
  • Models and Workflow: Hevo assists you in making the data ready for analysis through its in-built intelligent assistant, providing complete control and visibility over the data. This feature is useful when connecting BigQuery to SQL Server, as it speeds up data loading.

Steps to Create Data Pipeline to Load Data from BigQuery to SQL Server

Let’s examine the method for creating a BigQuery to SQL Server ETL (Extraction, Transformation, and Loading) pipeline.

Step 1: Set BigQuery as Your Source

Organizing Data in BigQuery

BigQuery stores data in projects. You must organize your data in BigQuery by associating your pipeline with the BigQuery project. Remember, you can only associate a project with a single data pipeline.

Projection of data flow in a BigQuery project
Granting Permissions

To stream BigQuery data through GCS, you must grant Hevo the permissions needed to authenticate your account. Hevo stages data files in GCS during ingestion and deletes them afterward. For more information regarding granting permissions, read Google Authentication Methods.

Strategies for Data Replication 

Hevo replicates data from your Google BigQuery source using two methods. The direct query approach ingests data from non-partitioned tables, while the GCS export targets partitioned tables. For further information, read Data Replication Strategies.

Prerequisites  
  • Access to a BigQuery project containing one or more datasets with at least one table.
  • An active billing account linked to your GCP project, because Hevo ingests incremental data using temporary tables.
  • The Data Viewer, Data Editor, and Job User permissions at the dataset level.
  • Access to a GCS bucket in the same location as your dataset. Specify the GCS bucket during pipeline creation so that Hevo can use the GCS Export strategy for data replication.
  • The Team Collaborator, Team Administrator, or Pipeline Administrator role in Hevo to create a pipeline.
Configure BigQuery as your Source
  1. Select PIPELINES from the Navigation Bar.
  2. Click on +CREATE in the Pipeline List View. 
  3. On the Select Source Type page, select Google BigQuery. 
  4. On the Configure your BigQuery Account page, connect to your BigQuery warehouse with a User Account in one of the following ways:
  • Select a previously configured account and click on CONTINUE.
  • Click on + ADD BigQuery ACCOUNT, and execute the following steps for account configuration. 
Configure your BigQuery Account
  • Sign in to your account.
  • Click on Allow for Hevo to access your data. 
Allowing Access to Hevo for a User Account

To connect with a Service Account:

  • Select the previously configured account and click on CONTINUE.
  • Attach the Service Account Key (JSON file) you created in the Google Cloud Platform, and click on CONFIGURE BIGQUERY ACCOUNT. 

Configuring Service Account

On the Configure your BigQuery Source page, specify the following details:

Setting up BigQuery Source Page
  • Pipeline Name: Specify a unique name for your pipeline, not exceeding 255 characters.
  • Authorized User or Service Account (Non-editable): This field is pre-filled. It contains the email address you selected when connecting to the Google BigQuery account. 
  • Project ID: Select the Project ID you want to use for this pipeline from the drop-down menu.
  • Dataset ID: Select the datasets that contain tables. You can also select tables from datasets in advanced settings.
  • Advanced Settings: 
  • Include New Tables in Pipeline: 

If enabled: Hevo automatically ingests data from tables created after the pipeline is set up.

If disabled: New tables are listed in the Pipeline Detailed View in a Skipped state, and you must manually include the tables whose historical data you want to load. To learn more about selecting a table, read Selecting Source Objects for Ingestion.

  5. Select TEST AND CONTINUE.

For more information on how to set up BigQuery as the source of your pipeline, refer to the Hevo documentation.

Step 2: Set SQL Server as Your Destination

Prerequisites
  • Your SQL Server should be running. 
  • Your SQL Server host’s IP address or DNS, along with the port, should be available.
  • You must grant some necessary privileges to the user on your database. 

You can perform a few optional steps before selecting SQL Server as your destination. Take a look at them:

  1. Setting up SQL Server Instance
  2. Allow Access to SQL Server Instance
  3. Create a Database for Your SQL Server Instance
Create a Database User for Your SQL Server and Grant Privileges
  • Execute the following steps to create and map the login user to the database and connect it to SQL Server:
  1. Log in to your SQL Server as a master user with the help of an SQL client tool.

For example, sqlcmd -U hevouser

  2. Run the following command to switch to the database.
USE <database_name>
GO

3. Run the following command to create a database login (skip this step if you already have one).

CREATE LOGIN [<login_name>] WITH PASSWORD=N'<strong_password>', DEFAULT_DATABASE=[<database_name>]
GO

4. Now, create the database user for your account (you can specify an existing user or the user specified in step 3).

CREATE USER [<username>] FOR LOGIN [<login_name>]
GO
  •  Here’s how you can grant privileges to the database user you created:
  1. Log in to your SQL Server as the login user with the help of the SQL client tool.

For example, sqlcmd -U hevouser

2. Execute the following command to access the database you created.

USE <database_name>
GO

3. Run the following commands to grant privileges to the user.

GRANT ALTER, SELECT, INSERT, UPDATE ON SCHEMA::<schema_name> TO <username>
GO
GRANT CREATE TABLE, INSERT, SELECT, UPDATE ON DATABASE::<database_name> to <username>
GO
Configure your SQL Server Connection Settings
  • Select DESTINATIONS from the Navigation Bar.
  • Click on +CREATE in Destination List View.
  • Select SQL Server in your Add Destination Page.
  • Specify the following fields in the SQL Configuration Setting page.
Configure Connection Settings
  • Destination Name: Specify a unique name for your destination.
  • Database Host: Insert your server’s IP address or DNS. It can be an IP address like 10.123.10.1 or an endpoint like mssql.westeros.inc.
  • Database Port: Specify the port on which your SQL Server listens for connections. The default value is 1433.
  • Database User: Specify a user with a non-administrative role; this can be the database user you created earlier. 
  • Database Password: Insert the password for the database user. 
  • Database Name: Insert the name of the database where you want to load the data. It can be the database you created earlier. Remember that the login user must have access to this database. 
  • Additional Settings:
  • Connect through SSH Tunnel: Route the connection through an SSH tunnel for an extra layer of security. 
  • Sanitize Column/Table Names: Replace all non-alphanumeric characters and spaces in column or table names with underscores (_). 
  • Click on TEST CONNECTION. This button is enabled after you fill in all the mandatory fields. 
  • Click on SAVE & CONTINUE. 
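The name-sanitization behavior described in the settings above can be approximated in plain Python. This is an illustrative sketch of the general idea (collapse runs of non-alphanumeric characters into underscores), not Hevo's exact implementation:

```python
import re

def sanitize_name(name: str) -> str:
    """Replace any run of non-alphanumeric characters with a single underscore."""
    return re.sub(r"[^0-9A-Za-z]+", "_", name).strip("_")
```

For example, `sanitize_name("order id (2024)")` yields `"order_id_2024"`, which is a valid SQL Server identifier without quoting.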

To know more about setting SQL Server as your destination, you can go through the Hevo documentation.

Method 2: Connecting BigQuery to SQL Server Using ODBC Driver

You can connect BigQuery to SQL Server using Microsoft SQL Server Management Studio (SSMS), which lets you migrate data from BigQuery to SQL Server instances in a few steps. The Linked Server feature lets SQL Server execute queries against external, non-SQL Server data sources such as BigQuery.

Let’s look at the steps to connect BigQuery to SQL using ODBC Driver:

Step 1: Configuring Your SQL Server to Connect with BigQuery

  1. Go to your Management Studio and choose the SQL Server instance you want to work with. 
  2. In the Object Explorer pane, expand Server Objects, right-click Linked Servers, and select the New Linked Server option. 
  3. Select Other data source under Server type. 
  4. Select Microsoft OLE DB Provider for ODBC Drivers in the Provider list.
  5. Enter the name of your data source and input the ODBC driver connection string in the Provider string field. 

The Linked Server option will appear under Linked Servers in the Object Explorer window. Once you are through, you can issue queries and access your BigQuery database through SQL Server.
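Queries against a linked server are typically passed through with T-SQL's OPENQUERY. As a hedged sketch, the helper below builds such a statement and runs it via pyodbc; the linked-server name, connection string, and remote query are placeholders:

```python
def build_openquery(linked_server: str, remote_sql: str) -> str:
    """Wrap a remote query in OPENQUERY; single quotes must be doubled in T-SQL."""
    escaped = remote_sql.replace("'", "''")
    return f"SELECT * FROM OPENQUERY([{linked_server}], '{escaped}')"

def run_linked_query(conn_str: str, linked_server: str, remote_sql: str):
    # Assumes pyodbc is installed and conn_str points at the SQL Server
    # instance where the linked server was configured.
    import pyodbc

    with pyodbc.connect(conn_str) as conn:
        return conn.cursor().execute(build_openquery(linked_server, remote_sql)).fetchall()
```

OPENQUERY sends the inner statement to the remote source for execution, which is usually faster than four-part naming because filtering happens on the BigQuery side.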

Step 2: Collecting Data from BigQuery

  1. Enable the Allow inprocess option:
  • Select MSDASQL from the list of Linked Servers
Selecting MSDASQL
  • Enable Allow inprocess from the Provider Options window.
Enabling Allow Inprocess
  2. Create a new Linked Server by right-clicking Linked Servers in the Object Explorer.
Generating a New Linked Server
  3. Select Microsoft OLE DB Provider for ODBC Drivers in the New Linked Server pane.
Selecting a Provider
  4. Input the name of the Linked Server, and provide the product name and data source pointing to the system DSN you created earlier.
Providing Linked Server Name and Data Source
  5. The BigQuery tables are now ready to be queried. Click on New Query to query your linked server. 
Select New Query Option
  6. Insert the query in the Editor pane and select Execute to run it. The ODBC driver retrieves the content from the BigQuery tables.
BigQuery table data is fetched

Limitations of Connecting BigQuery to SQL Server Using ODBC Driver

  • The connections made through ODBC drivers are sensitive to network latency, resulting in slow data retrieval and affecting the system’s performance. 
  • ODBC Drivers sometimes don’t fully support the advanced SQL functionalities, making the data migration process lengthy and tedious.

Method 3: Transferring Data from BigQuery to MS SQL Server Using an Operator

The BigQuery operator helps you easily transfer data from BigQuery to MS SQL Server.

Prerequisites
  • Create or select a Cloud Platform project from the Cloud Console. 
  • Enable billing for the project, as described in the Google Cloud documentation.
  • Enable the required API, as described in the Cloud Console documentation.
  • Install the API library using pip: 

pip install 'apache-airflow[google]'

  • Set up the Google Cloud Connection

Step 1: Using BigQuery Operator to Copy Data from BigQuery Table 

  • Use Jinja templating with field names like table_name, table_id, and dataset_id to define these values dynamically at runtime. 

Step 2: Transferring Data from BigQuery Table to MS SQL Server 

Use the following snippet to transfer the data from BigQuery. Here, BigQueryToMsSqlOperator copies the data from the source table to the destination table.

from airflow.providers.google.cloud.transfers.bigquery_to_mssql import (
    BigQueryToMsSqlOperator,
)

bigquery_to_mssql = BigQueryToMsSqlOperator(
    task_id="bigquery_to_mssql",
    source_project_dataset_table=f"{PROJECT_ID}.{DATASET_NAME}.{TABLE}",
    target_table_name=destination_table,
    replace=False,
)

Limitations of Transferring Data from BigQuery to MS SQL Server Using an Operator

  • You may have to use additional tools for scripting and integration, which makes the process lengthy and complex.
  • To use the BigQuery Operator, you must be familiar with its functionality and the platform it supports. 
  • You must understand the Google Cloud Platform and perform specific steps before proceeding with the scripting part.

Use Cases of BigQuery to SQL Server Migration

  • SQL Server provides greater control over data optimization and configuration compared to BigQuery, which has limited options for service customization. 
  • Unlike BigQuery, SQL Server provides better data replication methods, such as snapshot, transactional, and merge replication. This makes SQL Server more flexible in synchronizing with different data environments, ensuring data consistency.

Learn more about How to Connect & Load Data from BigQuery to MySQL.

Final Word

BigQuery and SQL Server are high-capacity storage platforms that help manage data for profound interpretation and analysis. If you want to organize your BigQuery data in SQL Server, Hevo is a good choice among ELT platforms for the task. You can also export data from BigQuery using an ODBC driver or the BigQuery operator. 

FAQs (Frequently Asked Questions)

Q1. How can you extract data from BigQuery and send it to SQL Server without using SSIS?

You can use a third-party tool such as Hevo to integrate your data from BigQuery into SQL Server. Hevo provides different connectors and APIs, such as REST API and Webhooks, that help with seamless data transfer. You can also use the pandas package in Python to read data from BigQuery and send it to SQL Server.
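The pandas route can be sketched as follows. This is an illustrative example, assuming the pandas, pandas-gbq, SQLAlchemy, and pyodbc packages (plus an ODBC driver) are installed; the project ID, query, table, and connection details are placeholders:

```python
def build_mssql_url(host: str, database: str, user: str, password: str) -> str:
    """Build an SQLAlchemy URL for SQL Server via pyodbc (Driver 17 assumed)."""
    return (
        f"mssql+pyodbc://{user}:{password}@{host}:1433/{database}"
        "?driver=ODBC+Driver+17+for+SQL+Server"
    )

def copy_bigquery_to_mssql(query: str, project_id: str, url: str, table: str) -> int:
    # Lazy imports: pandas pulls in pandas-gbq for read_gbq; SQLAlchemy
    # provides the engine that pandas writes through.
    import pandas as pd
    from sqlalchemy import create_engine

    df = pd.read_gbq(query, project_id=project_id)  # fetch BigQuery result set
    df.to_sql(table, create_engine(url), if_exists="append", index=False)
    return len(df)
```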

Q2. What are some connections used by BigQuery other than MySQL or PostgreSQL?

You can create data integration pipelines that provide a custom solution for streaming data and support real-time data replication. You can also use ELT tools to export data from BigQuery in CSV or JSON format. 

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite firsthand. You can also have a look at the unbeatable Hevo pricing to choose the right plan for your business needs.

Saloni Agarwal
Technical Content Writer, Hevo Data

Saloni is a technical content writer, who enjoys writing articles about ML, AI and cloud-based storage platforms. She has experience writing about data integration, and different modern technologies related to this.
