Do you want to analyze your Microsoft SQL Server data in Databricks? Are you finding it challenging to set up the SQL Server Databricks integration? If so, you've come to the right place! This article will answer your questions and take the strain out of finding a truly efficient solution.

By the end of this article, you will have a solid understanding of two of the most efficient methods for connecting Databricks to SQL Server. Read along to learn the steps required to set up the connection.

What is Databricks?

Databricks is an integrated data analytics platform developed to facilitate working with massive datasets and machine learning. Based on Apache Spark, it creates a seamless collaboration environment for data engineers, data scientists, and analysts.

Key Features of Databricks

  • Unified Data Analytics Platform: Combines data engineering, data science, and analytics in one platform.
  • Integrated with Apache Spark: Provides high-performance data processing using Apache Spark.
  • Collaborative Notebooks: Interactive notebooks for data exploration and collaboration.
  • Delta Lake for Reliable Data Lakes: Ensures data reliability and quality with ACID transactions.
  • Machine Learning Capabilities: Supports the full machine learning lifecycle from model development to deployment.
Simplify SQL Server to Databricks Migration with Hevo

Facing challenges migrating your customer and product data from SQL Server to Databricks? Migrating your data can become seamless with Hevo’s no-code intuitive platform. With Hevo, you can:

  1. Automate Data Extraction: Effortlessly pull data from SQL Server (and 60+ other free sources).
  2. Transform Data Effortlessly: Use Hevo’s drag-and-drop feature to transform data with just a few clicks.
  3. Seamless Data Loading: Quickly load your transformed data into your desired destinations, such as Databricks.

Try Hevo and join a growing community of 2000+ data professionals who rely on us for seamless and efficient migrations.

Get Started with Hevo for Free

What is SQL Server?

SQL Server is a relational database management system (RDBMS) developed by Microsoft. It is designed to store, manage, and retrieve data as requested by other software applications. SQL Server is built on the SQL (Structured Query Language) standard, which is used to query and manage relational databases.

Key Features of SQL Server

  • Scalability: Supports very large databases and many concurrent users.
  • High Availability: Offers features such as Always On availability groups and failover clustering.
  • Security: Provides strong encryption, auditing, and row-level security.
  • Integration: Integrates well with other Microsoft services and third-party tools.
  • Data Tools: Includes in-depth tools for ETL, reporting, and data analysis.
  • Cloud Integration: Integrates easily with Azure services.
  • T-SQL: Robust Transact-SQL support for complex queries and stored procedures.

Using Custom Code to Connect Databricks to SQL Server

Follow the steps below to set up Databricks Connect to SQL Server by writing custom ETL scripts. This is an easy six-step process that begins with creating a SQL Server database on Azure. If you already have a SQL Server database deployed either locally or on another cloud platform such as Google Cloud, you can jump directly to Step 4 to connect your database.

Note: Here, we are using a Databricks setup deployed on Azure for tutorial purposes. However, you can apply the same procedure to connect a SQL Server database with Databricks deployed on other clouds such as AWS and GCP. The process and drivers involved remain the same.


Step 1: Create a New SQL Database

  • Login to the Microsoft Azure portal using the appropriate credentials.
  • You can either click directly on the Create a Resource button or navigate to the left side vertical menu bar and select SQL Databases to get redirected to the Create SQL Database Screen.
[Image: Create a Resource in the Azure portal]
  • Fill in the required fields such as Database Name and choose your desired server. Once done, click on the Review + Create button to create your SQL Database on Azure.
[Image: Creating the SQL Database]

Step 2: Upload the Desired File to Databricks Cluster

  • Navigate to the left side menu bar on your Azure Databricks Portal and click on the Data Icon.
  • Now, click on the Add Data button.
[Image: Adding data in Azure Databricks]
  • Browse the file that you wish to upload to the Azure Databricks cluster and then click on the Select button to upload the file.
[Image: Uploading the file]

Step 3: Load the Data

  • Click on the Workspace icon present on the left side of the home page.
  • Click on the Create option and then select Notebook to write your own custom code.
[Image: Workspace menu]
  • Now, provide a unique name to the Notebook and select Scala as the preferred language in which you wish to write the custom code.
  • By default, the Cluster’s name is pre-populated if you are working with a single cluster. In case you have created multiple clusters, you can select the desired cluster from the drop-down menu.
  • Finally, click on the Create button.
[Image: Creating a new Notebook]

Step 4: Create the JDBC URL and Properties

  • First, you need to create a JDBC URL containing the connection information for your SQL Server deployment, whether it is local, an Azure SQL Database, or hosted on any other cloud platform. You also need to add details such as the port number, user, and password. You can refer to the following piece of code to do so:
val jdbcUrl = s"jdbc:sqlserver://${jdbcHostname}:${jdbcPort};database=${jdbcDatabase}"
Replace the hostname, port, and database parameters to match your deployment. Once you’ve added your database-specific details, the URL string will look like this:

Example: val url = "jdbc:sqlserver://azsqlhevoserver.database.windows.net:1433;database=azsqlhevodb;user=Rakesh;password=*******"
  • Now, create a Properties object to hold the connection credentials:
import java.util.Properties
val myproperties = new Properties()
myproperties.put("user", "Rakesh")
myproperties.put("password", "******")
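Putting Step 4 together, here is a minimal end-to-end sketch. The host, port, database, and credential values below are illustrative placeholders; substitute your own deployment’s details:

import java.util.Properties

// Illustrative placeholders -- replace with your own deployment's values
val jdbcHostname = "azsqlhevoserver.database.windows.net"
val jdbcPort = 1433
val jdbcDatabase = "azsqlhevodb"
val jdbcUrl = s"jdbc:sqlserver://${jdbcHostname}:${jdbcPort};database=${jdbcDatabase}"

val myproperties = new Properties()
myproperties.put("user", "Rakesh")
myproperties.put("password", "<your-password>")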

Step 5: Check the Connectivity to the SQL Server Database

The code below registers the SQL Server JDBC driver class in the connection properties, which Spark uses when opening the connection to the SQL Server database:

val driverClass = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
myproperties.setProperty("Driver", driverClass)
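To verify that the connection actually works, a simple check (assuming your cluster has network access to the server) is to run a trivial query over JDBC; the subquery alias test below is arbitrary:

// A lightweight connectivity check: run a trivial query over JDBC.
// If this succeeds, the URL, credentials, and network path are all valid.
val testDf = spark.read.jdbc(jdbcUrl, "(SELECT 1 AS connected) AS test", myproperties)
testDf.show()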

Step 6: Read & Display the Data

  • To read the content of the file that you uploaded in Step 2, you can create a DataFrame. The columns in the first row of the file will be used as the DataFrame’s column names if header = true is set. You can tell Spark to automatically infer the schema of each column by setting inferSchema = true.
val mydf = spark.read.format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("File")
  • Lastly, to display the data, you can simply use the display function:
  display(mydf)
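With connectivity verified, you can also push the DataFrame into SQL Server over the same JDBC connection. The sketch below is a minimal example; the target table name my_table is hypothetical:

// Write the DataFrame to a SQL Server table over JDBC.
// SaveMode.Append adds rows; SaveMode.Overwrite would replace the table.
import org.apache.spark.sql.SaveMode
mydf.write
  .mode(SaveMode.Append)
  .jdbc(jdbcUrl, "my_table", myproperties)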

Once you follow all the above steps in the correct sequence, you will have successfully connected Databricks to SQL Server.

Limitations of Writing Custom Code to Set up Databricks Connect to SQL Server

There are a few limitations to using manual ETL scripts to connect Databricks to SQL Server. Some of them are listed below:

  • Manually writing ETL Scripts requires significant technical bandwidth.
  • If you need the data to be transferred in real-time, writing custom scripts to accomplish this can be tricky, as it can lead to a compromise in Data Accuracy and Consistency.
  • This approach is suitable for a one-time bulk insert. In most cases, you will need to execute a continuous load process to ensure that the destination always receives the latest data. In that case, you will need logic to handle the duplicate data in real-time.
  • You will be required to maintain and add to the custom scripts when you scale this solution. This can consume a lot of time and could result in errors and inconsistencies in the data.
  • During the implementation of this custom solution, you may not have technical assistance. If a problem is encountered, you may have limited help and resources.

Use Cases of Azure Databricks SQL Server Integration

  • Big Data Analytics: Azure Databricks enables data engineering and data science teams to process large datasets using Spark. SQL, in turn, can streamline the process of gaining insights from huge volumes of data through a variety of conditional commands. Together, Azure Databricks SQL Server integration makes Big Data Analytics efficient.
  • Healthcare: Healthcare organizations use SQL to analyze large data sets containing information about patient behavior, medical conditions, and demographics. It facilitates medical professionals to gain critical insight, which contributes toward developing specific solutions for solving problems. Integrating Azure Databricks with SQL Server can optimize the functioning of the healthcare sector. 
  • Social Media: Social media networks process significant amounts of data every second. Platforms such as Snapchat and Instagram use SQL to store users’ profile information. When users create new posts or share photos, Structured Query Language is used to update the app database. It also enables the recording of messages, which users can retrieve later.

Read More About: Databricks Connect to Oracle Database

Conclusion

This blog introduced you to two methods that can be used to set up Databricks Connect to SQL Server, and briefed you on SQL Server and Databricks along with their features. If you want to integrate data from various sources such as SQL Server into your desired destination like Databricks and seamlessly visualize it in a BI tool of your choice, Hevo Data is the right choice for you! It helps simplify the ETL process and the management of data sources and destinations.

Sign up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You can also have a look at our unbeatable pricing that will help you choose the right plan for your business needs!

Frequently Asked Questions

1. Is Databricks SQL the same as Spark SQL?

Databricks SQL is built on Spark SQL but provides additional features and an optimized environment for SQL analytics within the Databricks platform.

2. Can I use SQL with Spark?

Yes, you can use SQL with Spark through Spark SQL.
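For example, you can register a DataFrame as a temporary view and query it with plain SQL. The view name people and the age column below are purely illustrative:

// Register a DataFrame as a temporary view, then query it with SQL.
mydf.createOrReplaceTempView("people")
val adults = spark.sql("SELECT * FROM people WHERE age > 21")
adults.show()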

3. Can we connect to SQL Server from Databricks?

Yes, you can connect to SQL Server from Databricks.

Rakesh Tiwari
Former Research Analyst, Hevo Data

Rakesh is a research analyst at Hevo Data with more than three years of experience in the field. He specializes in technologies, including API integration and machine learning. The combination of technical skills and a flair for writing brought him to the field of writing on highly complex topics. He has written numerous articles on a variety of data engineering topics, such as data integration, data analytics, and data management. He enjoys simplifying difficult subjects to help data practitioners with their doubts related to data engineering.