Unlock the full potential of your AWS RDS Oracle data by integrating it seamlessly with Databricks. With Hevo’s automated pipeline, get data flowing effortlessly—watch our 1-minute demo below to see it in action!
While AWS RDS Oracle offers a robust relational database solution in the cloud, Databricks simplifies big data processing with features such as automated scheduling and optimized Spark clusters. Integrating data from AWS RDS Oracle into Databricks enables you to handle large volumes of data in a collaborative workspace and derive actionable insights in real time.
This article is a comprehensive guide for AWS RDS Oracle Databricks integration, where you will learn how to connect AWS RDS Oracle to Databricks using two different methods. It will also shed some light on the benefits of this integration.
Overview of AWS RDS Oracle
AWS RDS Oracle is a fully managed Oracle database offered through Amazon RDS. It is easy to set up and operate: RDS automates administrative tasks such as provisioning, monitoring, backups, and software patching, so you can focus on your applications. AWS RDS Oracle also supports production workloads with its Multi-AZ deployment option, which maintains a synchronized standby copy of your primary database and automatically fails over to it if the primary becomes unavailable.
Effortlessly integrate data from AWS RDS Oracle to Databricks, enabling robust analytics and streamlined workflows. With this connection, leverage Databricks’ powerful analytics on your Oracle data in real time.
- Seamless Data Transfer: Move data from AWS RDS Oracle to Databricks without complex coding.
- Real-Time Analytics: Unlock insights with Databricks’ high-performance analytics capabilities.
- Scalable Solution: Efficiently handle data at scale for growing data needs.
Discover how easy it is to bring your Oracle data to life in Databricks!
Get Started with Hevo for Free
Overview of Databricks
Databricks is a unified, open analytics platform for building, sharing, deploying, and maintaining organizational data at scale. It is built on a lakehouse foundation that brings together the best elements of a data warehouse and a data lake to manage your data efficiently. Databricks’s intelligent data platform integrates seamlessly and securely with other cloud storage systems. It also provides a collaborative workspace where data engineers, data scientists, and analytics professionals can work together on data pipelines, machine learning models, and data transformations.
Methods for Migrating Data from AWS RDS Oracle to Databricks
Let’s look at two methods for loading data from AWS RDS Oracle to Databricks.
Method 1: Integrating Data from AWS RDS Oracle to Databricks Using Hevo
Hevo is an ELT data integration platform that simplifies data integration with its no-code real-time automated data pipeline. It provides 150+ data sources from where you can integrate your data into your desired destination. Hevo’s data pipeline is flexible to your work needs and can be set up quickly with minimal effort, enabling you to transfer AWS RDS Oracle data to Databricks.
Step 1: Configure Amazon RDS Oracle as Your Source
Prerequisites
Refer to the Hevo documentation on Amazon RDS Oracle for the complete list of prerequisites for this source.
Follow the steps below to configure your source:
- Click PIPELINES in the Navigation Bar.
- Click + CREATE.
- On the Select Source Type page, select Amazon RDS Oracle.
- On the Configure your Amazon RDS Oracle Source page, provide the mandatory connection details.
For more information, refer to the Hevo documentation on configuring Amazon RDS Oracle as your source.
Step 2: Configure Databricks as Your Destination
Prerequisites
- You must have an active AWS, Azure, or GCP account.
- Create a Databricks workspace in your cloud account.
- The URL for your Databricks workspace must be in this format: https://<deployment name>.cloud.databricks.com.
- To connect to your Databricks workspace, you must:
  - Create a Databricks cluster or a SQL warehouse.
  - Have the database hostname, port number, and HTTP path available.
  - Have a Personal Access Token (PAT) available.
- To create the destination in the Hevo data pipeline, you must be assigned any administrator role except the Billing Administrator.
You can configure Databricks as your destination using the recommended method, Databricks Partner Connect. Follow the steps below to configure your destination:
- Click DESTINATIONS in the Navigation Bar.
- Click + CREATE in the Destinations List View.
- On the Add Destination page, search for and select Databricks as your destination.
- On the Configure Databricks as your Destination page, specify the mandatory details.
For more information, refer to the Hevo documentation on configuring Databricks as the destination.
Benefits of Using Hevo for AWS RDS Oracle Databricks Integration
- Automated Data Mapping: Hevo’s automated schema mapping feature detects your source data’s schema and replicates it in your destination, so you don’t have to map it manually.
- Data Transformation: Hevo provides drag-and-drop and Python-based transformations that help you clean and prepare your data for analysis.
- Incremental Data Loading: Hevo transfers only the modified source data to your destination in real time, optimizing bandwidth usage at both ends.
Interested in reading about how to transfer data from Amazon’s cloud services to Databricks? You can refer to this article on how to replicate data from Amazon RDS to Databricks.
Method 2: Exporting Data from AWS RDS Oracle to Databricks Using an S3 Bucket
This method shows you how to load data from an AWS RDS Oracle instance into a Databricks table using an S3 bucket as intermediate storage.
Step 1: Transferring Files from the AWS RDS Oracle Instance to an S3 Bucket
You can use the Amazon RDS package rdsadmin_s3_tasks to transfer files between your Oracle DB instance and an S3 bucket.
Prerequisites
- Your Amazon RDS DB instance and the Amazon S3 bucket must be in the same AWS Region.
- Configure IAM permissions so that the RDS for Oracle instance can access the S3 bucket.
- To integrate the RDS Oracle instance with the S3 bucket, you must associate your DB instance with an option group that includes the S3_INTEGRATION option.
- You can upload files only from a user-created directory object or the Data Pump directory (for example, DATA_PUMP_DIR); directories used by backup processes can’t be used (see the directory-listing sketch after this list).
- Before transferring files to the S3 bucket, review the requirements for file transfer.
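Before uploading, it can help to confirm which files are present in the source directory. The following is a minimal sketch using the rdsadmin.rds_file_util package (the same package the log-reading command later in this step uses); the listdir procedure and its p_directory parameter are taken from the AWS RDS documentation, so verify they are available on your engine version:
-- List the files currently in DATA_PUMP_DIR (ordered by last-modified time)
SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'DATA_PUMP_DIR')) ORDER BY mtime;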
You can upload files from an Oracle DB instance to an S3 bucket as GZIP-compressed files and decompress them when unloading the data. In this example, the files are uploaded without compression (a compressed-upload sketch appears at the end of this step).
- Use the rdsadmin.rdsadmin_s3_tasks.upload_to_s3 procedure to upload files from an Oracle DB instance directory to an S3 bucket. The procedure returns a task ID.
- Execute the following command to upload files from the DATA_PUMP_DIR directory to an S3 bucket named mys3bucket:
SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
p_bucket_name => 'mys3bucket',
p_prefix => '',
p_s3_prefix => '',
p_directory_name => 'DATA_PUMP_DIR')
AS TASK_ID FROM DUAL;
- Query Type: This is a SELECT query that calls a function.
- Function Call: It uses the rdsadmin.rdsadmin_s3_tasks.upload_to_s3 procedure to upload files from the Oracle database to an Amazon S3 bucket.
- Parameters:
  - p_bucket_name => 'mys3bucket': the name of the S3 bucket where the files will be uploaded.
  - p_prefix => '': no file-name prefix is used to filter which files are uploaded (empty value).
  - p_s3_prefix => '': no prefix is added to organize the uploaded files in the S3 bucket (empty value).
  - p_directory_name => 'DATA_PUMP_DIR': the directory within the Oracle DB instance from which files will be uploaded.
- Result: The query returns a TASK_ID, representing the ID of the upload task.
- The SELECT statement returns the ID of the task as a VARCHAR2 value. You can view the task’s output by reading its log file with the following command, replacing task-id with the ID returned by the procedure:
SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-task-id.log'));
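If you prefer the compressed upload mentioned earlier in this step, the same procedure accepts a compression level. This is a hedged sketch: the p_compression_level parameter name and its 0–9 range are based on the AWS RDS documentation for upload_to_s3, so confirm they are supported by your RDS engine version:
-- Upload the contents of DATA_PUMP_DIR as GZIP-compressed objects
-- (p_compression_level: 0 = no compression, 9 = maximum; assumed parameter)
SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
      p_bucket_name       => 'mys3bucket',
      p_prefix            => '',
      p_s3_prefix         => '',
      p_directory_name    => 'DATA_PUMP_DIR',
      p_compression_level => 6)
   AS TASK_ID FROM DUAL;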
Step 2: Loading Data from the S3 Bucket to a Databricks SQL Warehouse Using the COPY INTO Command
Prerequisites
- To access and read data from the S3 bucket, you must configure data access for ingestion (a minimal sketch follows this list).
- You need the Can manage permission on the Databricks SQL warehouse.
- The fully qualified S3 URI must be available.
- You must be familiar with the Databricks SQL interface.
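One way to configure read access for ingestion, if your workspace uses Unity Catalog, is to define an external location over the S3 path. This is a minimal sketch and not the only option (an instance profile also works); the location name s3_ingest_location, the credential name s3_ingest_credential, and the group data_engineers are hypothetical placeholders:
-- Define an external location over the S3 path used for ingestion
-- (assumes a storage credential named s3_ingest_credential already exists)
CREATE EXTERNAL LOCATION IF NOT EXISTS s3_ingest_location
URL 's3://<s3-bucket>/<folder>/'
WITH (STORAGE CREDENTIAL s3_ingest_credential);

-- Grant read access on the location to the group running the load
GRANT READ FILES ON EXTERNAL LOCATION s3_ingest_location TO `data_engineers`;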
Follow the steps below to transfer AWS RDS Oracle data to Databricks.
Verify Your Access to Cloud Storage
- On the Databricks sidebar, click + New > Query to open the SQL editor.
- In the SQL editor, select a running SQL warehouse and run the code below, replacing <path> with your fully qualified S3 URI (a worked example follows this list):
select * from csv.`<path>`
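For example, using the same placeholder bucket and folder names as the COPY INTO step later in this guide (replace both placeholders with your own values); note that the path must be wrapped in backquotes:
-- Preview the CSV files in the S3 folder directly from the SQL editor
SELECT * FROM csv.`s3://<s3-bucket>/<folder>/`;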
Create a Table in Your Databricks Workspace
- In the SQL editor, paste and execute the following code:
CREATE TABLE <catalog_name>.<schema_name>.<table_name> (
tpep_pickup_datetime TIMESTAMP,
tpep_dropoff_datetime TIMESTAMP,
trip_distance DOUBLE,
fare_amount DOUBLE,
pickup_zip INT,
dropoff_zip INT
);
- Table Name: <catalog_name>.<schema_name>.<table_name> indicates where the table will be created (replace these placeholders with your actual catalog, schema, and table names).
- Columns:
  - tpep_pickup_datetime (TIMESTAMP): stores the pickup date and time.
  - tpep_dropoff_datetime (TIMESTAMP): stores the drop-off date and time.
  - trip_distance (DOUBLE): stores the distance traveled during the trip.
  - fare_amount (DOUBLE): stores the fare amount for the trip.
  - pickup_zip (INT): stores the ZIP code where the trip started.
  - dropoff_zip (INT): stores the ZIP code where the trip ended.
- Table Definition: The statement defines the structure (columns and their data types) of the table that will store the taxi trip data.
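To confirm the table was created with the expected columns before loading data into it, you can describe it (same placeholders as above):
-- Inspect the column names and data types of the newly created table
DESCRIBE TABLE <catalog_name>.<schema_name>.<table_name>;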
Load Data from the S3 Bucket to the Databricks Table
- On the sidebar, click + New > Query.
- In the editor pane, select a running SQL warehouse.
- Paste the code below into the editor, replace <s3-bucket> with your S3 bucket name and <folder> with the name of your S3 folder, and execute the query.
COPY INTO <catalog_name>.<schema_name>.<table_name>
FROM 's3://<s3-bucket>/<folder>/'
FILEFORMAT = CSV
FORMAT_OPTIONS (
'header' = 'true',
'inferSchema' = 'true'
)
COPY_OPTIONS (
'mergeSchema' = 'true'
);
SELECT * FROM <catalog_name>.<schema_name>.<table_name>;
- COPY INTO: loads the CSV data from S3 into the specified table.
- FILEFORMAT = CSV: specifies that the source data is in CSV format.
- FORMAT_OPTIONS:
  - 'header' = 'true': the first row contains column names.
  - 'inferSchema' = 'true': the schema is detected automatically.
- COPY_OPTIONS:
  - 'mergeSchema' = 'true': allows the table schema to be merged if the incoming schema differs.
- SELECT: retrieves all data from the table after loading.
- This query will load your data into the Databricks table you created.
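As a quick sanity check after the load, you can count the rows that landed in the table (same placeholders as above); the count should match the number of records in your exported CSV files:
-- Verify that rows were loaded into the target table
SELECT COUNT(*) AS loaded_rows
FROM <catalog_name>.<schema_name>.<table_name>;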
Limitations of Exporting Data from AWS RDS Oracle to Databricks Using an S3 Bucket
- AWS RDS for Oracle can’t access data in an Amazon S3 bucket encrypted with SSE-C, as the Oracle database does not natively support server-side encryption with customer-provided keys.
- After the integration, you must perform extra, time-consuming tasks, such as monitoring the transfer into the S3 bucket and deleting and cleaning up data from the temporary storage.
Use Cases of AWS RDS Oracle to Databricks Integration
- Databricks helps you analyze your Oracle instance data and generate data-driven insights for decisions about training programs, employee retention, or organizational development.
- Through Databricks analytics capabilities, you can also analyze your social media data, identify emerging trends, and enhance your customer engagement.
- Databricks also provides a unified platform, allowing you to collaborate seamlessly with your teams on data projects.
Why Integrate Data from AWS RDS Oracle into Databricks?
- Multi-Language Support: Integrating AWS RDS Oracle data to Databricks provides you with a scalable and unified platform that supports multiple languages, such as Python, SQL, R, and more.
- Advanced Analytics Capabilities: Databricks also has advanced analytics capabilities, such as machine learning (ML), enabling your team to develop and deploy ML models using data stored in your Oracle DB instance.
- AI-Generated Analytics: Databricks facilitates AI-generated analytics that support deep learning frameworks and help you implement complex models for tasks like natural language processing or image recognition.
Read More About: Databricks Connect to Oracle Database
Conclusion
Migrating your AWS RDS Oracle data to Databricks can help you drive collaboration and enhance your organization’s overall productivity. You can also streamline data processing using various Databricks features, such as advanced analytics, machine learning, and AI. While you can use an S3 bucket as temporary storage for the integration, an ELT platform like Hevo can simplify the data ingestion process with its automated data pipelines.
Want to take Hevo for a spin? Sign up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You can also have a look at the unbeatable Hevo pricing that will help you choose the right plan for your business needs.
FAQs (Frequently Asked Questions)
How can you connect Databricks hosted on Azure with an RDS on AWS?
You can connect Azure-hosted Databricks to an AWS RDS instance over a JDBC connection, provided the RDS instance is reachable from your Databricks workspace’s network — for example, if it is publicly accessible or connected through a VPN or VPC peering.
Can you connect Databricks to the AWS RDS instance without using the JDBC/ODBC driver?
You can connect your Databricks application to an AWS RDS instance through an Amazon S3 Bucket or Hevo’s data pipeline. For more information, refer to Hevo documentation—AWS RDS to Databricks.
With a strong background in market research for data science and cybersecurity products, Saloni is an expert at crafting informative articles on key topics within the data science domain, such as data transformation, processes, and analysis. Saloni's passion for the field drives her to continually learn and stay abreast of emerging technologies and trends, ensuring her contributions are impactful. Her work aims to enrich the discourse in data science, providing valuable insights and fostering a deeper understanding of complex subjects.