Key Takeaways

There are two proven ways to migrate your data from Oracle to Redshift, each fitting different migration scenarios:

  • Using Hevo’s Migration Platform: This is ideal if you want a smooth migration without complex technical setup. Hevo automates data extraction, handles format conversions, and continuously loads your Oracle data into Redshift with minimal downtime.
  • Using AWS Database Migration Service (DMS): This method suits organizations with dedicated database teams who want granular migration control. You design exactly how data transforms during migration, with complete freedom to optimize performance based on your specific use cases.

Moving your database from Oracle to Redshift unlocks significant value: enhanced data warehousing, cost-effective analytics, easy integration with AWS services, and superior performance for large-scale queries that support critical business intelligence.

This guide explores two reliable strategies to migrate your data from Oracle to Redshift. Whether you’re a scaling organization that wants a simplified migration or a large enterprise that needs comprehensive process oversight, we’ll help you identify the pathway that fits your circumstances. You’ll get complete step-by-step procedures, solutions for typical conversion challenges, and field-tested examples to achieve a seamless database migration.

By the end of this walkthrough, you’ll have your infrastructure completely converted to Redshift, ready for large-scale data warehousing. Let’s begin.

Methods to load data from Oracle to Redshift

Method 1: Using Hevo Data to Set up Oracle to Redshift Integration

Prerequisites

  • Oracle database (version 11 or above) with SYSDBA privileges for user creation.
  • Amazon Redshift instance running with database available.
  • Assigned Team Administrator, Collaborator, or Pipeline Administrator role in Hevo.
  • Redshift database hostname and port number accessible.

Step 1: Create Oracle Database User and Grant Privileges

  • Connect to the Oracle server as a DBA using SQL Developer, then create the user and grant the basic privileges Hevo needs:

-- Create the replication user and grant it the required read privileges
CREATE USER <username> IDENTIFIED BY <password>;
GRANT SELECT ANY DICTIONARY TO <username>;
GRANT CREATE SESSION, ALTER SESSION TO <username>;
GRANT SELECT ON ALL_VIEWS TO <username>;
GRANT SELECT ON <schema_name>.<table_name> TO <username>;

  • Grant LogMiner permissions for real-time replication:

-- LogMiner access lets Hevo read change data from the redo logs
GRANT LOGMINING TO <username>;
GRANT SELECT ON SYS.V_$DATABASE TO <username>;
GRANT EXECUTE ON SYS.DBMS_LOGMNR TO <username>;
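
To confirm that the grants took effect, you can run a quick sanity check against the data dictionary as a DBA (note that Oracle stores grantee names in upper case):

-- List the system privileges held by the new user
SELECT privilege FROM dba_sys_privs WHERE grantee = '<USERNAME>';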

Step 2: Configure Redo Logs and Archive Settings

  • Check the current log mode and, if needed, enable ARCHIVELOG mode for data replication (the restart briefly takes the database offline):

-- Check whether the database is already in ARCHIVELOG mode
SELECT LOG_MODE FROM V$DATABASE;

-- If it reports NOARCHIVELOG, restart into ARCHIVELOG mode
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

  • Set the retention policy to a minimum of 3 days (the CONFIGURE command below runs in RMAN) and enable supplemental logging so the redo logs capture complete row data:

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
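
Before moving on, you can verify these settings with a single dictionary query (a small sanity check; all three flags live on V$DATABASE):

-- LOG_MODE should report ARCHIVELOG; both supplemental-logging flags should report YES
SELECT log_mode, supplemental_log_data_min, supplemental_log_data_all FROM v$database;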

Step 3: Configure Oracle as Source in Hevo

  • In Hevo dashboard, go to Pipelines → + CREATE PIPELINE → Select Oracle.
  • Configure connection settings:
    • Pipeline Name (max 255 characters).
    • Database Host (IP or DNS) and Port (default 1521).
    • Database User and Password (created in Step 1).
    • Select Ingestion Mode (RedoLog recommended).
    • Service Name: retrieve it with SELECT name FROM v$database; (see the note after this list).
  • Click TEST CONNECTION to verify settings.
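
One caveat worth knowing: SELECT name FROM v$database; returns the database name, which usually but not always matches a connectable service name. If the connection test fails on the service name, these standard dictionary queries list the services actually registered with the instance:

-- Services currently registered with the instance
SELECT name FROM v$services;

-- The SERVICE_NAMES initialization parameter, if set
SELECT value FROM v$parameter WHERE name = 'service_names';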

Step 4: Set Up Redshift as Destination

Prerequisites

  • Hevo IP addresses whitelisted in Redshift security groups.
  • Redshift database user created with CREATE and SELECT privileges.

Whitelist Hevo IPs and Create User

  • In Amazon Redshift dashboard, go to Clusters → Select your cluster → Properties.
  • Under Network and security settings, click VPC security group → Edit inbound rules.
  • Add the Hevo IP addresses for your region, selecting Redshift as the rule type.
  • Log in to Redshift as a superuser, create the user, and grant it the required privileges:

-- Create the load user (Redshift syntax) and grant CREATE and SELECT privileges
CREATE USER <user> PASSWORD '<password>';
GRANT CREATE ON DATABASE <database_name> TO <user>;
GRANT SELECT ON ALL TABLES IN SCHEMA <schema_name> TO <user>;
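
To confirm the database-level grant, Redshift supports the standard has_database_privilege function, which returns true once the user can create objects in the database:

-- Should return true for the newly created user
SELECT has_database_privilege('<user>', '<database_name>', 'CREATE');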

Step 5: Configure Redshift as Destination and Complete Setup

  • In Hevo, go to Destinations → + CREATE DESTINATION → Select Amazon Redshift.
  • Configure destination:
    • Destination Name (max 255 characters).
    • Database Cluster Identifier (hostname without jdbc:redshift://).
    • Database Port (default 5439), User, Password, and Database Name.
    • Schema Name (default: public).
  • Optional settings:
    • Enable Connect through SSH for secure tunnels.
    • Enable Sanitize Table/Column Names for naming consistency.
  • Click TEST CONNECTION, then SAVE & CONTINUE after validation.
  • Complete pipeline setup and activate to start Oracle to Redshift replication.

You can now create the pipeline to start migrating your Oracle data to Redshift automatically and effortlessly with Hevo!
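
Once the first load has run, a quick look at Redshift’s SVV_TABLE_INFO system view is an easy way to confirm that tables are landing and filling up (substitute your target schema for public):

-- List replicated tables with their approximate row counts
SELECT "table", tbl_rows FROM svv_table_info WHERE schema = 'public' ORDER BY tbl_rows DESC;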


Method 2: Using AWS Database Migration Service (DMS)

Prerequisites

  • Oracle source database prepared for change data capture (CDC), with ARCHIVELOG mode and supplemental logging enabled.
  • Amazon Redshift cluster running with appropriate instance types.
  • AWS DMS replication instance provisioned in the same VPC.
  • IAM roles configured for DMS service permissions.

Step 1: Set Up AWS DMS Replication Instance

  • In AWS Console, go to Database Migration Service → Replication instances.
  • Create replication instance with:
    • Instance class based on data volume and performance needs.
    • VPC and subnet group configuration for network access.
    • Security groups allowing Oracle and Redshift connectivity.
  • Configure Multi-AZ deployment for high availability if required.
  • Wait for the instance to become available before proceeding.

Step 2: Create Source and Target Endpoints

  • Create Oracle source endpoint:
    • Provide Oracle hostname, port, database name, and credentials.
    • Configure SSL encryption if required for secure connections.
    • Test endpoint connectivity using DMS test feature.
  • Create Redshift target endpoint:
    • Specify Redshift cluster endpoint, port, database, and user credentials.
    • Configure connection attributes for optimal loading performance.
    • Test Redshift endpoint connectivity and permissions.

Step 3: Configure Database Migration Task

  • Create new DMS migration task with:
    • Migration type (Full load, CDC, or Full load + CDC).
    • Source and target endpoints selected from Step 2.
    • Table mapping rules to specify which tables to migrate.
  • Configure advanced settings:
    • Parallel load settings for faster migration.
    • Error handling and logging preferences.
    • Data validation settings to ensure accuracy.

Step 4: Set Up Table Mappings and Transformations

  • Define table mapping rules using JSON or the DMS console (a sample selection rule follows this list):
    • Specify schema and table inclusion/exclusion patterns.
    • Configure column-level transformations if needed.
    • Set up data type mappings for Oracle to Redshift conversion.
  • Configure transformation rules for:
    • Column renaming and data type conversions.
    • Row filtering based on business logic.
    • Schema name transformations if required.
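
For illustration, here is a minimal selection rule in the table-mapping JSON format that DMS accepts; the schema name is a placeholder, and each rule-id must be unique within the document:

{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-hr-tables",
      "object-locator": {
        "schema-name": "HR",
        "table-name": "%"
      },
      "rule-action": "include"
    }
  ]
}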

Step 5: Execute Migration and Monitor Progress

  • Start the DMS migration task and monitor the initial full load.
  • Set up ongoing CDC replication for real-time synchronization.
  • Track migration statistics through the DMS console:
    • Tables completed, rows migrated, and error counts.
    • Replication lag for ongoing CDC operations.
  • Configure CloudWatch alarms for migration task monitoring.
  • Validate migrated data in Redshift using sample queries and row counts, as in the sketch below.
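
A straightforward validation is to compare row counts on both ends; the table names below are placeholders, and counts can differ temporarily while CDC is still catching up:

-- On Oracle: count rows in a source table
SELECT COUNT(*) FROM <schema_name>.<table_name>;

-- On Redshift: run the same count against the migrated table
SELECT COUNT(*) FROM <schema_name>.<table_name>;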

Overview of Oracle

Oracle Corporation offers an integrated suite of enterprise software, best known for its relational database management system. Oracle Database is highly rated for performance, scalability, and reliability, which suits it to diverse business applications.

It supports complex transactions, high-volume data processing, and enterprise-level applications. Oracle Database also supports advanced features like in-memory processing, real-time analytics, and multi-model capabilities, making it a versatile solution for diverse data management needs.

Use Cases

Let’s discuss some of Oracle’s key use cases.

  • Enterprise Resource Planning (ERP): By using Oracle, you can manage core business processes such as finance, HR, and supply chain.
  • Customer Relationship Management (CRM): You can also use it to manage customer data, sales processes, and marketing campaigns.
  • Application Development: It supports the development and deployment of complex applications with high availability and performance requirements.

Overview of Amazon Redshift

Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. It provides fast query and analytics functionality across large datasets using SQL. Redshift is optimized for high-performance analytics and works seamlessly with other AWS services, bringing scalability, cost efficiency, and ease of use to big data environments.

Use Cases

  • Data Warehousing: It centralizes large amounts of structured data from various sources and analyzes it for business intelligence and reporting.
  • Business Intelligence (BI): It integrates with BI tools to visualize and explore data for strategic decision-making.
  • Real-Time Analytics: It can inspect streaming data and logs in near real time, enabling teams to monitor and react to operational metrics.

Conclusion

Beyond the migration steps themselves, Hevo has an intuitive user interface that lets even not-so-technical users easily tweak the parameters of your data load settings. This comes in super handy once you have everything up and running.

With Hevo, you can achieve seamless and accurate data replication from Oracle to Redshift. Its fault-tolerant architecture ensures that no data is lost while loading, which empowers you to focus on the right projects instead of worrying about data availability.

Want to take Hevo for a spin? Sign up for a 14-day free trial and experience the feature-rich Hevo suite first hand.

Share your experience of loading data from Oracle to Redshift in the comments section below!

FAQs to integrate Oracle to Redshift

1. How do you connect Oracle to Redshift?

To connect Oracle to Redshift, use an ETL tool like AWS Glue or a data integration service like Fivetran to extract data from Oracle and load it into Redshift. Alternatively, use a data migration tool to create a direct connection and perform data transfer operations.

2. How do I create ODBC connection to Redshift?

To create an ODBC connection to Redshift, install the Amazon Redshift ODBC driver, then configure the ODBC Data Source Administrator with the Redshift cluster endpoint, port, and database credentials. Test the connection to ensure successful integration.

3. How to migrate an Oracle Database?

To migrate an Oracle database, use Oracle Data Pump to export the database schema and data, then import it into the target database.

Avinash Mohanakrishnan
Freelance Technical Content Writer, Hevo Data

Avinash loves blending his problem-solving skills with analytical thinking to dissect the complexities of data integration and analysis. He has extensive experience producing well-researched content tailored to helping businesses in the data industry.