Understanding SQL Server and BigQuery
Methods to move data from Microsoft SQL Server to Google BigQuery
There are two popular methods to perform SQL Server to BigQuery data replication.
Method 1: An easy-to-use Data Pipeline Platform like Hevo Data
Method 2: Write custom ETL scripts to move data
Migrating Data from Microsoft SQL Server to Google BigQuery
Using a Third-Party ETL (Extract, Transform, Load) tool like Hevo
Writing custom ETL code to move data from Microsoft SQL Server to BigQuery
Migrating data from Microsoft SQL Server to Google BigQuery Using Custom ETL code:
Export the data from SQL Server using SQL Server Management Studio (SSMS)
Upload to Google Cloud Storage
Upload to BigQuery from Google Cloud Storage (GCS)
Update the target table in BigQuery
Step 1: Export the data from SQL Server using SQL Server Management Studio (SSMS)
Install SSMS if you don’t have it on your local machine. It is available as a free download from Microsoft.
Open SSMS and connect to your SQL Server instance. In the Object Explorer window, right-click the database you want to export, point to the Tasks sub-menu, and choose the Export Data option.
The welcome page of the SQL Server Import and Export Wizard will open. Click Next to proceed to export the required data.
You will see a window to choose a data source. Select your preferred data source.
In the Server name dropdown list, select a SQL Server instance.
In the Authentication section, select the authentication mode for the data source connection. Next, from the Database drop-down box, select the database from which the data will be copied, then select Next.
The next window is the Choose a Destination window, where you specify where the data from SQL Server will be copied to. In the Destination drop-down box, select the Flat File Destination item.
In the File name box, specify the CSV file to which the data from the SQL database will be exported, then select Next.
The next window you will see is the Specify Table Copy or Query window. Choose Copy data from one or more tables or views to get all the data from the table.
Next, you will see the Configure Flat File Destination window. Select the source table whose data should be exported to the CSV file you specified earlier.
At this point the export is configured; click Preview for a sneak peek of the data that is about to be exported.
Complete the export configuration by hitting Next. The Save and Run Package window will pop up; click Next.
The Complete the Wizard window will appear next; it gives you an overview of all the choices you made during the export process. To complete the export, hit Finish.
The exported CSV file will be found on your local drive, in the location you specified during the export.
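If you prefer to script this step instead of clicking through the wizard, the bcp utility that ships with SQL Server can export a table or query result to a flat file. The following is a minimal sketch; the server, database, table, and output file names are placeholders, and -T (Windows authentication) can be swapped for -U/-P if you use SQL authentication:
bcp "SELECT * FROM your_database.dbo.your_table" queryout export.csv -c -t, -S your_server_name -T
Note that bcp does not write a header row or quote field values, so check that the resulting CSV matches what you expect before uploading it.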
Step 2: Upload to Google Cloud Storage
Method 1: Using gsutil
gsutil is Google Cloud’s command-line tool for working with Cloud Storage. First create a bucket to hold the file (bucket names must be globally unique), then copy the exported CSV into it:
gsutil mb -l us-east1 gs://my-new-bucket/
gsutil cp export.csv gs://my-new-bucket/path/to/folder/
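To confirm that the file landed in the bucket, you can list the folder’s contents:
gsutil ls gs://my-new-bucket/path/to/folder/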
Method 2: Using Web console
The web console is another alternative you can use to upload your CSV file to GCS from your local machine. The steps to use the web console are outlined below.
First, log in to your GCP account. Click the hamburger menu, which displays a drop-down menu. Select Storage and click on Browser in the left tab.
Create a new bucket to store the file that you will upload from your local machine. Make sure the name chosen for the bucket is globally unique.
The bucket you just created will appear in the window. Click on it and select Upload files. This action will direct you to your local drive, where you will need to choose the CSV file you want to upload to GCS.
As soon as you start uploading, a progress bar is shown. The bar disappears once the process has been completed. You will be able to find your file in the bucket.
Step 3: Upload data to BigQuery From GCS
BigQuery is where the data analysis you need will be carried out, so you now need to load your data from GCS into BigQuery. There are various methods you can use to do this. Let’s discuss two of them here:
Method 1: Using the Web Console UI
The first port of call when using the Web UI method is to select BigQuery under the hamburger menu on the GCP home page.
Select the Create a new dataset icon and fill in the corresponding fields.
Create a new table under the dataset you just created to store your CSV file.
On the Create Table page, in the Source data section:
- Select GCS, browse your bucket, and select the CSV file you uploaded. Make sure the File format is set to CSV.
- Fill in the destination dataset and the destination table name.
- Under Schema, click on auto detect.
- Select Create Table.
After creating the table, click on the destination table name you created to view your exported data file.
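Optionally, you can run a quick query in the BigQuery query editor to confirm that the rows were loaded; the dataset and table names below are placeholders:
SELECT COUNT(*) AS row_count FROM your_dataset.your_table;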
Method 2: Using the Command-Line Interface
The Activate Cloud Shell icon in the GCP console will take you to the command-line interface. The bq load command has the following syntax:
bq --location=[LOCATION] load --source_format=[FORMAT] [DATASET].[TABLE] [PATH_TO_SOURCE] [SCHEMA]
- [LOCATION] is an optional parameter that represents your location.
- [FORMAT] is to be set to CSV.
- [DATASET] represents an existing dataset.
- [TABLE] is the table name into which you're loading data.
- [PATH_TO_SOURCE] is a fully-qualified Cloud Storage URI.
- [SCHEMA] is a valid schema, supplied either inline or as a local JSON file. Note: you can use the --autodetect flag instead of specifying a schema.
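For reference, a schema file is simply a JSON array of field definitions. A minimal example, assuming the two columns (id and value) used in Step 4, saved as your_schema.json:
[
  {"name": "id", "type": "INTEGER", "mode": "REQUIRED"},
  {"name": "value", "type": "STRING", "mode": "NULLABLE"}
]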
Load the CSV into an existing table using a schema file:
bq --location=US load --source_format=CSV your_dataset.your_table gs://my-new-bucket/your_data.csv ./your_schema.json
Overwrite the table and let BigQuery auto-detect the schema:
bq --location=US load --autodetect --replace --source_format=CSV your_dataset.your_table gs://bucket_name/path/to/file/your_file_name.csv
Append to the table without overwriting the existing rows:
bq --location=US load --autodetect --noreplace --source_format=CSV your_dataset.your_table gs://bucket_name/path/to/file/your_file_name.csv ./schema_file.json
Append to a table in the asia-northeast1 location, allowing new columns to be added to the schema:
bq --location=asia-northeast1 load --noreplace --schema_update_option=ALLOW_FIELD_ADDITION --source_format=CSV your_dataset.your_table gs://mybucket/your_data.csv ./your_schema.json
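After a load job completes, you can inspect the resulting table's schema and row count to confirm the data arrived; the dataset and table names are the same placeholders used above:
bq show --format=prettyjson your_dataset.your_table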
Step 4: Update the Target Table in BigQuery
GCS acts as a staging area for BigQuery, so when you use the command line to upload to BigQuery, your data is first loaded into an intermediate table. The data in the intermediate table then needs to be applied to the target table for the changes to take effect. There are two ways to update the target table in BigQuery; both are explained below.
1. Update the rows in the final table that also exist in the intermediate table, then insert the rows that do not yet exist
UPDATE final_table t SET t.value = s.value FROM intermediate_data_table s WHERE t.id = s.id;
INSERT final_table (id, value) SELECT id, value FROM intermediate_data_table WHERE id NOT IN (SELECT id FROM final_table);
2. Delete the rows in the final table that are also present in the intermediate table, then insert all rows from the intermediate table
DELETE final_table f WHERE f.id IN (SELECT id FROM intermediate_data_table);
INSERT data_set_name.final_table (id, value) SELECT id, value FROM data_set_name.intermediate_data_table;
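As an alternative to the two approaches above, BigQuery also supports a single MERGE statement that performs the update and the insert in one atomic operation. The following is a sketch assuming the same id and value columns used in the examples above:
MERGE data_set_name.final_table t
USING data_set_name.intermediate_data_table s
ON t.id = s.id
WHEN MATCHED THEN
  UPDATE SET value = s.value
WHEN NOT MATCHED THEN
  INSERT (id, value) VALUES (s.id, s.value);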
Limitations and Challenges with writing Custom Code to move data from Microsoft SQL Server to Google BigQuery
- Writing custom code is only ideal if you’re looking to move your data once from Microsoft SQL Server to BigQuery.
- Custom ETL code does not scale well with streaming and real-time data. You will have to write additional code to keep your data up to date, which is far from ideal.
- When there’s a need to transform or encrypt your data, custom ETL code falls short, as it requires you to add additional processes to your pipeline.
- Maintaining and managing a running data pipeline such as this will require you to invest heavily in valuable engineering resources.
An Easier Way to Move Data from SQL Server to BigQuery
Using a fully managed, easy-to-use Data Pipeline platform like Hevo, you can load your data from SQL Server to BigQuery in a matter of minutes. You can achieve this in a no-code, point-and-click environment. Here are the steps to replicate SQL Server to BigQuery using Hevo:
- Connect to your SQL Server
- Select the replication mode: (i) Full dump and load (ii) Incremental load for append-only data (iii) Incremental load for mutable data
- Configure your BigQuery Warehouse and move data
With Hevo, you can achieve simple and efficient Data Replication from Microsoft SQL Server to BigQuery. Hevo can help you move data from not just SQL Server but 100s of additional data sources.
Sign up for a 14-Day Free Trial with Hevo and experience a seamless, hassle-free data migration from SQL Server to BigQuery.