You can connect Microsoft SQL Server to BigQuery using automated pipelines, managed ETL/ELT services, or manual exports, depending on your migration needs and technical resources.
- Hevo provides a simple, reliable, and transparent way to build SQL Server to BigQuery pipelines with pre-built connectors and no-code setup.
- Google Cloud Data Fusion balances flexibility and automation for teams wanting ETL/ELT customization without building pipelines from scratch.
- Manual ETL/ELT processes offer complete control but require ongoing maintenance, and they are best for one-time migrations.
- Automated solutions like Hevo provide easy-to-build, fault-tolerant pipelines with automatic schema evolution and quick access to analytics-ready data, minimizing operational overhead.
Microsoft SQL Server is reliable, but as analytics demands grow, performance can suffer. Reports slow down, dashboards lag, and complex queries begin to compete with operational workloads.
That’s when teams move analytics to Google BigQuery. It offers faster queries, massive scalability, and advanced analytics without straining core systems. Yet migrating data from SQL Server to BigQuery is rarely straightforward. Large tables, schema changes, and incremental updates can quickly complicate pipelines.
Without careful planning around data modeling, freshness, and maintenance, integrations become fragile. When done right, BigQuery delivers serverless scalability, near-instant queries, and analytics at any scale.
This guide compares three proven ways to connect SQL Server to BigQuery, helping teams choose the right approach based on data volume, latency needs, and engineering effort.
Table of Contents
Understanding SQL Server to BigQuery Integration
Before choosing a migration approach, it’s important to understand what SQL Server and BigQuery are built for and why teams commonly integrate them.
What is SQL Server?
SQL Server is a relational database management system (RDBMS) developed by Microsoft. It is designed to store, retrieve, and manage data in a structured format using SQL (Structured Query Language). SQL Server provides a robust platform for data management and analytics, with features supporting high performance, scalability, and security. It is ideal for both small-scale applications and large enterprise environments.
Key features of SQL Server
- T-SQL: Powerful language for running complex queries and creating stored procedures.
- Scalability: Supports huge databases and multiple concurrent users.
- High availability: Includes Always On availability groups and failover clustering.
- Security: Tight security through solid encryption, auditing, and row-level security.
- Integration: Integrates well with other Microsoft services and third-party tools.
- Data tools: Robust tools for ETL/ELT, reporting, and data analysis.
- Cloud integration: Comparatively much easier to integrate with Azure services.
What is BigQuery?
BigQuery is Google’s fully managed, serverless cloud enterprise data warehouse, built for running complex SQL queries and analysing huge datasets efficiently. It is based on Google’s Dremel technology, using columnar storage and a tree architecture to support high-speed data scanning for efficient querying.
Key features of BigQuery
- Serverless architecture: BigQuery manages servers and storage in the background, so that users don’t have to.
- High scalability: It scales seamlessly to handle petabytes of data.
- SQL compatibility: It supports ANSI SQL, which is useful for those who already know SQL and want to write and run queries. This also makes it easy to connect various BI tools for data visualisation.
- Machine learning: BigQuery ML enables users to train and run machine learning models in BigQuery using only SQL syntax.
3 Methods to Migrate Data from Microsoft SQL Server to BigQuery
Migrating data from Microsoft SQL Server to Google BigQuery is not just about moving tables; it is about choosing the right balance between speed, control, scalability, and maintenance effort. Traditional approaches rely on manual exports and custom scripts, while modern platforms automate ingestion and support near real-time analytics.
Depending on your data volume, sync frequency, and engineering bandwidth, there are three primary methods to migrate data from SQL Server to BigQuery:
- Method 1: Using Hevo Data (No-code, automated ETL/ELT)
- Method 2: Using JDBC drivers with Google Cloud Data Fusion
- Method 3: Manual ETL/ELT using SSMS, GCS, and BigQuery
Each method differs in setup complexity, maintenance effort, and scalability. The table below compares them side by side.

| Method | Setup complexity | Maintenance effort | Best for |
| --- | --- | --- | --- |
| Hevo Data (no-code ETL/ELT) | Low | Minimal (fully managed) | Continuous, near real-time replication |
| Cloud Data Fusion with JDBC drivers | Medium | Moderate (managed service) | Customisable ETL/ELT without building pipelines from scratch |
| Manual ETL/ELT (SSMS, GCS, BigQuery) | High | High (ongoing scripts and monitoring) | One-time or small migrations |
Method 1: Move data from Microsoft SQL Server to BigQuery with Hevo Data
Hevo is a no-code data pipeline platform that automates the entire ETL/ELT process from SQL Server to BigQuery. This method is ideal for businesses seeking real-time data synchronisation without coding requirements or infrastructure management. Beyond SQL Server, Hevo supports 150+ data sources, including MongoDB, to consolidate data from multiple systems into BigQuery.
Prerequisites before you start the process:
Before starting the SQL Server to BigQuery integration, ensure the following:
- Active SQL Server instance (not running on localhost)
- SQL Server version 12 or above, based on the ingestion mode
- Required database privileges:
- SELECT access
- VIEW CHANGE TRACKING and ALTER DATABASE (for Change Tracking modes)
- Hevo account access with one of these roles:
- Pipeline Administrator
- Team Collaborator
- Team Administrator
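Before configuring the pipeline, it helps to confirm that Hevo (or any external client) can actually reach your SQL Server host on its TCP port. The sketch below is a minimal, illustrative pre-flight check (the function name is my own); run it from a machine with the same network access as your client:

```python
import socket

def check_tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Illustrative pre-flight check: confirms the port is open, not that
    SQL Server will accept your credentials.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `check_tcp_reachable("db.example.com", 1433)` should return `True` before you attempt the Hevo source configuration; if it returns `False`, revisit Step 1 (TCP/IP protocol) and your firewall rules.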
Step 1: Enable TCP/IP protocol
Configure your SQL Server instance with TCP port value 1433 to ensure Hevo can connect to your SQL Server.
Follow these steps to enable TCP/IP protocol:
1. Enable the TCP/IP port
- Open the SQL Server Configuration Manager.
- In the left navigation pane, expand SQL Server Network Configuration.
- Click Protocols for your MS SQL Server Instance Name (default name is MSSQLSERVER).
- In the right pane, right-click the TCP/IP Protocol Name.
- Select Enable in the Status field.
- Click OK to acknowledge the dialogue box.
2. Verify the TCP/IP port
- Right-click the TCP/IP Protocol Name and choose Properties.
- Click the IP Addresses tab on the TCP/IP Properties window.
- In the IPAll section, ensure the TCP Port value is 1433.
- Click Apply and OK to exit the window.
3. Restart the MS SQL Server instance
- Click SQL Server Services on the left navigation pane.
- In the right pane, right-click your MS SQL Server Instance Name.
- Select Restart.
Step 2: Enable change tracking
This step applies only to pipelines with Change Tracking as their ingestion mode. Connect your SQL Server database to your SQL Client tool and run these commands:
1. Enable change tracking at the database level:
```sql
ALTER DATABASE <database_name> SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 3 DAYS, AUTO_CLEANUP = ON);
```
2. Enable change tracking at the table level:
```sql
ALTER TABLE <schema_name>.<table_name> ENABLE CHANGE_TRACKING;
```
Repeat this step for each table you want to replicate using Change Tracking.
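If you are replicating many tables, the Step 2 commands can be generated programmatically rather than typed one by one. A minimal sketch (the helper name and interface are my own; the database, schema, and table names are assumed to be trusted, pre-validated identifiers — never interpolate untrusted input into DDL):

```python
def change_tracking_ddl(database: str, tables, retention_days: int = 3):
    """Generate the T-SQL statements that enable change tracking for a
    database and a list of (schema, table) pairs.

    Identifiers are assumed trusted/validated; this only builds strings,
    it does not execute anything.
    """
    stmts = [
        f"ALTER DATABASE {database} SET CHANGE_TRACKING = ON "
        f"(CHANGE_RETENTION = {retention_days} DAYS, AUTO_CLEANUP = ON)"
    ]
    for schema, table in tables:
        stmts.append(f"ALTER TABLE {schema}.{table} ENABLE CHANGE_TRACKING")
    return stmts
```

You can then paste the generated statements into your SQL client, or feed them to a driver connection you already maintain.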
Step 3: Create a database user and grant privileges
Grant privileges to the user
The database user specified in the Hevo Pipeline must have the following global privileges:
- SELECT
- VIEW CHANGE TRACKING (if ingestion mode is Change Tracking)
Connect your MS SQL Server in your SQL Client tool and run the following commands:
Grant SELECT privilege at the database level:

```sql
GRANT SELECT ON DATABASE::<database> TO <username>;
```

Grant SELECT privilege at the schema level:

```sql
GRANT SELECT ON SCHEMA::<schema_name> TO <username>;
```

If the ingestion mode is Change Tracking, grant the VIEW CHANGE TRACKING privilege:

For schema level:

```sql
GRANT VIEW CHANGE TRACKING ON SCHEMA::<schema_name> TO <username>;
```

For table level:

```sql
GRANT VIEW CHANGE TRACKING ON OBJECT::<schema_name>.<table_name> TO <username>;
```
Step 4: Configure SQL Server as your source
- Go to the Navigation Bar and click PIPELINES.
- From the Pipelines List View, click + CREATE PIPELINE.
- Select SQL Server from the Source Type page.
- The Configure your SQL Server Source page will open.
Provide the following details while setting up the SQL Server source:
- Pipeline Name: Unique name (up to 255 characters).
- Database Host: SQL Server IP address or DNS.
- Database Port: Connection port (default: 1433).
- Database User & Password: User credentials with read access.
- Ingestion Mode: Choose Change Tracking, Table, or Custom SQL.
- Database Name: Source database to ingest from.
- Schema Name: Tables schema (default: dbo).
- SSH Connection (Optional): Enable for secure access via SSH tunnel.
After specifying all necessary fields:
- Click TEST CONNECTION (this button becomes enabled after all fields are filled)
- Click TEST & CONTINUE to proceed with setting up the Destination
Step 5: Configure BigQuery as your destination
- From your Hevo Data account, access Destinations in the Navigation Bar.
- Click Destinations List View and choose + Create Destination.
- From the Add Destination page, select BigQuery.
- The Configure your BigQuery Destination page will open.
Specify the following details when setting up BigQuery as your destination:
- Destination Name: Unique name (up to 255 characters).
- Account: Authentication account for BigQuery access.
- Project ID: BigQuery project to load data into.
- Dataset: Auto-create with Hevo or select an existing one.
- GCS Bucket: Auto-create or choose an existing bucket under the project.
- Click SAVE & CONTINUE, then click TEST CONNECTION.
- With this, you have successfully set up Microsoft SQL Server to BigQuery integration using Hevo Data.
Facing challenges migrating your customer and product data from SQL Server to BigQuery? Hevo’s no-code intuitive platform makes migration seamless. With Hevo, you can:
- Automate data extraction: Effortlessly pull data from SQL Server and 150+ other sources.
- Transform data effortlessly: Use Hevo’s drag-and-drop feature to transform data with just a few clicks.
- Load data seamlessly: Quickly load your transformed data into BigQuery and other destinations.
Start your 14-day free trial and join a growing community of 2,000+ data professionals who rely on Hevo for seamless and efficient migrations.
Get Started with Hevo for Free

Method 2: Move data from SQL Server to BigQuery with JDBC Driver and Cloud Data Fusion Web Interface
Google Cloud Data Fusion strikes a balance between flexibility and automation by offering more customization than no-code tools while requiring minimal coding. It is ideal for teams that want control without building pipelines from scratch.
Step 1: Upload JDBC Drivers to Cloud Data Fusion
In the Cloud Data Fusion Studio:
- Navigate to the Hub and access Drivers or Menu.
- From the Namespace Admin, access Drivers.
- Click Upload.
- Select the JAR files for SQL Server and BigQuery JDBC drivers.
- Enter the driver name, class name, and version.
Step 2: Create an ETL/ELT (or replication) job
- Access the Replication page in the Cloud Data Fusion UI.
- Click Create replication job.
- Provide a Job Name and click Next.
Step 2.1: Configure the source (SQL Server)
- Select Microsoft SQL Server as the Source.
- Add the Host, Port, Database Name, username, and password.
- Choose a JDBC plugin name and add the necessary credentials.
- If needed, configure SSL settings.
- Click Next.
Step 2.2: Configure the target (BigQuery)
- Select BigQuery as the target.
- Add your Project ID, Dataset, and Table Name.
- If you don’t use the default account, provide a Service Account Key.
- Adjust advanced options if needed.
- Click Next.
Step 2.3: Select tables and events
- Choose the tables from SQL Server that you want to replicate.
- Select the events you want to replicate: Insert, Update, and Delete.
- Resolve any schema differences when Cloud Data Fusion displays mappings.
- Click Next.
Step 2.4: Review and deploy
- Review the job configuration before deploying.
- Click Deploy Replication Job to create and save the job.
Step 3: Run and monitor the job
- Access the Replication page.
- Click your job name.
- Navigate to Monitoring to view job status, logs, and errors.
- Troubleshoot or reconfigure the connection if needed, and redeploy the job.
Method 3: Move data from SQL Server to BigQuery using a manual ETL/ELT process
The manual ETL/ELT process gives you complete control over the data migration but requires technical expertise and ongoing maintenance. This method involves exporting data from SQL Server, uploading it to Google Cloud Storage, and then loading it into BigQuery.
The manual ETL/ELT process consists of four main steps:
Step 3.1: Export data from SQL Server using SQL Server Management Studio (SSMS)
Step 3.2: Upload the exported data to Google Cloud Storage
Step 3.3: Load data into BigQuery from Google Cloud Storage
Step 3.4: Update the target table in BigQuery
Step 3.1: Export data from SQL Server using SSMS
SQL Server Management Studio (SSMS) is a free tool built by Microsoft that provides a comprehensive environment for managing SQL infrastructure. You can use SSMS to extract data in CSV format.
Export Process:
- Install SSMS if you don’t have it on your local machine.
- Open SSMS and connect to a SQL instance.
- Select a database from the object explorer window.
- Right-click the Tasks sub-menu and choose the Export data option.
- The SQL Server Import and Export Wizard welcome page will open.
- Click Next to proceed.
Choose a Data Source:
- In the data source window, select your preferred data source.
- In the Server name dropdown list, select a SQL Server instance.
- In the Authentication section, choose the authentication method for the data source connection.
- From the Database dropdown list, select the database from which data will be copied.
- Click Next.
Choose the Destination:
- In the Choose the Destination window, under the Destination dropdown list, select the Flat File destination option.
- In the File name box, specify the CSV file location where the data will be exported.
- Click Next.
Specify Table Copy or Query:
- In the Specify Table Copy or Query window, choose to copy data from one or more tables or views.
- This will get all the data from the selected table(s).
- Click Next.
Configure Flat File Destination:
- Select the table from the source to export data to the CSV file specified earlier.
- Click Preview to view the exported file.
- Click Next to continue.
Complete the Export:
- The Save and Run package window will appear.
- Click Next.
- The Complete Wizard window will show an overview of your choices.
- Click Finish to complete the exportation process.
- The exported CSV file will be found in the Local Drive location you specified.
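Before uploading, it is worth sanity-checking the export. The sketch below (helper name is illustrative) reads the CSV header and row count so you can compare them against the source table's columns and `SELECT COUNT(*)` result:

```python
import csv

def csv_summary(path: str):
    """Return (header, data_row_count) for an exported CSV.

    Compare the count against SELECT COUNT(*) on the source table
    before uploading the file to Google Cloud Storage.
    """
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader, [])
        rows = sum(1 for _ in reader)
    return header, rows
```

A mismatch at this stage is far cheaper to fix than one discovered after the BigQuery load.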
Step 3.2: Upload to Google Cloud Storage
After exporting data to your local machine, transfer the CSV file to Google Cloud Storage (GCS). You can use either of these methods:
Method A: Using gsutil
gsutil is a Python-based GCP tool that gives you access to GCS from the command line. To set up gsutil, refer to the quickstart documentation. You can then upload a file to GCS from your local machine as follows:
- Create a new bucket:
```bash
gsutil mb gs://my-new-bucket
```
Your bucket name must be globally unique. If successful, the command returns:
Creating gs://my-new-bucket/...
- Copy your file to GCS:
```bash
gsutil cp export.csv gs://my-new-bucket/destination/export.csv
```
In this command:
- export.csv is the file you want to copy
- gs://my-new-bucket is the GCS bucket you created
- destination/export.csv specifies the destination path and filename in the GCS bucket
Method B: Using Web Console
- Log in to your GCP account.
- Toggle the hamburger menu to display a drop-down menu.
- Select Storage and click Browser on the left tab.
- Create a new bucket (ensure the name is globally unique).
- Click on the bucket and select Upload Files.
- Choose the CSV file from your local drive to upload to GCP.
- A progress bar will display during upload.
- Once complete, your file will appear in the bucket.
Step 3.3: Upload data to BigQuery from GCS
BigQuery is where your data analysis will be carried out. You need to upload your data from GCS to BigQuery.
Using Web UI:
- Open BigQuery from the GCP hamburger menu.
- Click Create a new dataset and complete the required fields.
- Create a new table within the dataset.
- Choose GCS as the source and select the CSV file.
- Set the file format to CSV.
- Configure the destination table.
- Enable auto-detect schema.
- Click Create table.
- Open the table to verify the loaded data.
Using Command Line Interface:
You can use the bq command-line tool with the auto-detect feature:
```bash
bq load --autodetect --source_format=CSV your_dataset.your_table gs://your_bucket/your_file.csv
```
To specify a custom schema instead of using auto-detect:

```bash
bq load --source_format=CSV --schema=schema.json your_dataset.your_table gs://your_bucket/your_file.csv
```

In this example, schema.json contains the schema definition for your CSV file. You can customise the schema by modifying the schema.json file to match your data structure. Note that --autodetect and an explicit --schema serve the same purpose, so supply one or the other, not both.
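If you want a starting point for schema.json, a rough draft can be derived from a sample of the CSV itself. `bq` expects a JSON array of field objects with `name`, `type`, and `mode` keys; the sketch below is my own helper (not part of the bq toolchain), infers only INTEGER, FLOAT, and STRING, and its output should always be reviewed by hand:

```python
import csv
import json

def infer_bq_schema(path: str, sample_rows: int = 100) -> str:
    """Draft a BigQuery schema.json by sampling a CSV.

    Very rough heuristic: INTEGER if every sampled value parses as int,
    FLOAT if every value parses as float, otherwise STRING.
    """
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)
        samples = [row for _, row in zip(range(sample_rows), reader)]

    def bq_type(values):
        def all_parse(cast):
            try:
                for v in values:
                    cast(v)
                return True
            except ValueError:
                return False
        if values and all_parse(int):
            return "INTEGER"
        if values and all_parse(float):
            return "FLOAT"
        return "STRING"

    fields = [{"name": col,
               "type": bq_type([r[i] for r in samples if i < len(r)]),
               "mode": "NULLABLE"}
              for i, col in enumerate(header)]
    return json.dumps(fields, indent=2)
```

Write the returned string to schema.json, check the guessed types (dates, booleans, and NULL markers will come out as STRING here), and pass the file to `bq load --schema=schema.json`.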
Writing to an Existing Table:
There are three ways to write to an existing table in BigQuery:
1. Overwrite the data:

```bash
bq load --replace --source_format=CSV your_dataset.your_table gs://your_bucket/your_file.csv
```

The --replace flag ensures that existing data in the table is replaced with new data from the CSV file.
2. Append to the table:

```bash
bq load --noreplace --source_format=CSV your_dataset.your_table gs://your_bucket/your_file.csv
```

The --noreplace flag ensures that new data from the CSV file is appended to existing data in the table.
3. Add a new field to the target table:

```bash
bq update your_dataset.your_table --schema schema.json
```

The schema.json file should contain the updated schema definition with the new field and its corresponding data type.
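When scripting repeated loads, the overwrite and append modes map naturally to flag sets. A hypothetical helper (my own, not part of the bq toolchain) that builds the `bq load` argument list for a process runner such as `subprocess.run`:

```python
def bq_load_command(mode: str, table: str, uri: str):
    """Build the `bq load` argument list for a given write mode.

    Sketch for scripting repeated loads; assumes the `bq` CLI is on PATH
    when the returned list is handed to a process runner.
    """
    mode_flags = {"overwrite": "--replace", "append": "--noreplace"}
    if mode not in mode_flags:
        raise ValueError(f"unknown mode: {mode!r}")
    return ["bq", "load", mode_flags[mode], "--source_format=CSV", table, uri]
```

Centralising the flag choice this way avoids a classic scripting mistake: accidentally running --replace on a table you meant to append to.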
Step 3.4: Update the target table in BigQuery
Google Cloud Storage acts as a staging area for BigQuery. When using the command line to upload data, the data is first stored in an intermediate table. For changes to reflect in the target table, you must update it using one of these methods:
Method A: Update existing rows and insert new rows from the intermediate table:
```sql
UPDATE final_table t
SET t.value = s.value
FROM intermediate_data_table s
WHERE t.id = s.id;

INSERT INTO final_table (id, value)
SELECT id, value
FROM intermediate_data_table
WHERE id NOT IN (SELECT id FROM final_table);
```
Method B: Delete all rows from the final table that are in the intermediate table:
```sql
DELETE FROM final_table
WHERE id IN (SELECT id FROM intermediate_data_table);
```
Replace final_table and intermediate_data_table with your actual table names.
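To make Method A's semantics concrete: updating matching ids and inserting new ones is the classic upsert. Modelling each table as an `{id: value}` dict, the same logic looks like this (an illustration of the semantics only, not how BigQuery executes the DML):

```python
def upsert(final: dict, intermediate: dict) -> dict:
    """Mimic Method A: rows from `intermediate` update matching ids in
    `final` and are inserted where no match exists.

    Tables are modelled as {id: value} dicts purely for illustration.
    """
    merged = dict(final)          # start from the existing final table
    merged.update(intermediate)   # update matching ids, insert new ids
    return merged
```

Seeing the operation as a single merge also explains why keeping the UPDATE and INSERT in Method A consistent matters: if rows change in the intermediate table between the two statements, the result can drift from a true upsert.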
This completes the SQL Server to BigQuery connection using the manual ETL/ELT process.
Hevo provides reliable, no-code, automated pipelines with 150+ connectors. Skip the complexity of manual ETL/ELT processes and focus on deriving insights from your data.
Start your free trial today and experience seamless data migration!
Limitations of the manual ETL/ELT process
While manual ETL/ELT provides full control, it introduces several operational and scalability challenges:
Key Limitations:
- Best for one-time migrations: Not suitable for continuous or recurring data syncs
- Poor scalability: Struggles with real-time and streaming data workloads
- Limited transformations: Complex data transformations or encryption require extra pipeline logic
- High maintenance effort: Requires ongoing engineering time for monitoring and fixes
- Data consistency risks: Source data changes can affect query accuracy in BigQuery
- Location constraints: BigQuery ETL/ELT datasets must match the Cloud Storage bucket region
CSV-Specific Constraints:
- No support for nested or repeated data structures
- Cannot mix compressed and uncompressed files in a single load job
- The maximum gzip CSV file size is 4 GB
Summary
Manual ETL/ELT may look simple initially, but it is error-prone, resource-intensive, and difficult to scale. This makes it unsuitable for long-term analytics pipelines.
What Are the Use Cases for Migrating from SQL Server to BigQuery?
Migrating from Microsoft SQL Server to Google BigQuery is typically driven by the need for greater scalability, faster performance, and advanced analytics, areas where traditional OLTP systems often fall short. Below are the most common real-world use cases.
| Use Case | Description | Key Points/Examples |
| --- | --- | --- |
| Large-scale analytics & BI | SQL Server slows down with complex queries; BigQuery handles analytics at scale. | Enterprise dashboards, exec reporting, high-concurrency BI; benefits from BigQuery’s extensive data processing |
| Cost-effective warehousing | BigQuery’s serverless model lowers costs compared to SQL Server. | No server management, auto-scaling, predictable spend, strong data security capabilities |
| Near real-time analytics | CDC pipelines let BigQuery report operational data quickly. | Sales tracking, operational monitoring, live metrics; centralize your data for a complete picture of your operations |
| Advanced analytics & ML | Integrates with BigQuery ML & Vertex AI for predictive insights. | Forecasting, customer segmentation, anomaly detection |
| Multi-source analytics | Consolidates SQL Server with SaaS, NoSQL, and event streams. | CRM, app logs, IoT/event data; simplifies cross-source analysis |
| Global & high-concurrency reporting | Supports thousands of simultaneous queries; ideal for distributed teams. | Multi-region teams, self-service BI, data democratisation |
Best Practices for SQL Server BigQuery Integration
Here are the best practices for building a reliable, scalable, and cost-efficient SQL Server to BigQuery integration, presented for clarity and easy review.
| Best Practice | Key Action |
| --- | --- |
| Plan Your Data Model | Align the schema between SQL Server and BigQuery before migration |
| Optimize Data Types | Match data types to prevent loss or errors |
| Use Efficient Transfer Methods | Leverage bulk transfers via GCS or automation tools |
| Automate Data Transfers | Set up automated pipelines for regular syncing |
| Monitor Performance | Optimize queries using partitioning and clustering |
| Ensure Data Security | Apply encryption (in transit/at rest) and access controls |
| Test and Validate | Verify data accuracy post-migration |
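The "Test and Validate" step can be partially scripted. One simple approach (the fingerprint scheme below is illustrative, not a standard) is to compare a row count plus an order-independent checksum computed over rows fetched from both SQL Server and BigQuery:

```python
import hashlib

def table_fingerprint(rows):
    """Return (row_count, checksum) for an iterable of row tuples.

    The checksum XORs per-row SHA-256 digests, so it is independent of
    row order -- useful because the two systems may return rows in
    different orders. Illustrative scheme only.
    """
    count = 0
    digest = 0
    for row in rows:
        count += 1
        h = hashlib.sha256(repr(tuple(row)).encode("utf-8")).hexdigest()
        digest ^= int(h, 16)
    return count, f"{digest:064x}"
```

Run the same `SELECT id, value, ... ORDER BY` nothing in particular on both sides, fingerprint each result set, and alert if the pair differs. For large tables, apply this per partition or per date range rather than to the whole table. Note that `repr()`-based hashing assumes values round-trip to identical text on both sides; type differences (e.g. DATETIME formatting) may need normalising first.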
From SQL Server Data to Analytics-Ready BigQuery
In this blog, we covered three practical ways to migrate and replicate data from SQL Server to BigQuery: an automated platform like Hevo, a managed option using Cloud Data Fusion, and a manual ETL/ELT approach. While each method works, they vary in scalability, maintenance effort, and reliability for ongoing replication.
Manual pipelines are suitable for small or one-time migrations. However, as data volumes grow, managing scripts, schema changes, and failures becomes harder to sustain. Automated platforms simplify replication by handling change tracking, schema evolution, and monitoring by default.
This is where Hevo Data stands out. As a fully managed, no-code ELT platform, Hevo helps teams move data from SQL Server to BigQuery quickly and reliably, without engineering overhead. With built-in scalability, fault-tolerant pipelines, and full pipeline visibility, it keeps data accurate as it scales.
Once your data is in BigQuery, features like partitioning, clustering, and materialized views enable fast, cost-efficient analytics and timely insights.
If you want a reliable SQL Server to BigQuery pipeline without managing scripts or infrastructure, start Hevo’s 14-day free trial. No credit card required.
FAQs
1. Can you connect SQL Server to BigQuery?
Yes, you can connect SQL Server to BigQuery and migrate data between them.
2. How do you migrate data from SQL Server to BigQuery?
You can migrate using one of these approaches:
a) Google Cloud Dataflow with Apache Beam
b) Google Cloud Storage (GCS) as an intermediary
c) Data migration tools like Hevo
3. Can I use SQL in BigQuery?
Yes, you can use SQL in BigQuery.