Airflow is a task automation tool. It helps organizations schedule their tasks so that they run at the right time, relieving employees of repetitive manual work.

When using Airflow, you will often want it to interact with other tools and services. More broadly, Apache Airflow is used to schedule and orchestrate data pipelines and workflows.

In this article, you will learn about Airflow Redshift Operators. You will also gain a holistic understanding of Apache Airflow, Amazon Redshift, their key features, Amazon AWS Operators in Airflow, and the different Airflow Redshift Operators.

What is Airflow?


Airflow is a platform that enables its users to automate scripts for performing tasks. It comes with a scheduler that executes tasks on an array of workers while following a set of defined dependencies.

Airflow also comes with rich command-line utilities that make it easy for its users to work with directed acyclic graphs (DAGs). The DAGs simplify the process of ordering and managing tasks for companies. 

Airflow also has a rich user interface that makes it easy to monitor progress, visualize pipelines running in production, and troubleshoot issues when necessary. 
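
To make this concrete, here is a minimal sketch of a DAG with two dependent tasks. It is not taken from the examples in this article; the DAG ID, task IDs, and Bash commands are placeholders used purely for illustration.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="minimal_example",          # placeholder DAG ID
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo 'extracting data'")
    load = BashOperator(task_id="load", bash_command="echo 'loading data'")

    # The >> operator declares the dependency: "extract" must succeed before "load" runs.
    extract >> load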

Key Features of Airflow

  • Dynamic Integration: Airflow uses Python as the backend programming language to generate dynamic pipelines. Several operators, hooks, and connectors are available to build DAGs and tie tasks together into workflows (see the sketch after this list).
  • Extensible: Airflow is an open-source platform, so it allows users to define their own custom operators, executors, and hooks. You can also extend the libraries so that they fit the level of abstraction that suits your environment.
  • Elegant Pipelines: Airflow uses the Jinja templating engine to parameterize pipelines, which keeps them lean and explicit. Parameterizing your scripts is a straightforward process in Airflow.
  • Scalable: Airflow has a modular architecture and is designed to scale out. You can define as many dependent workflows as you want, and Airflow uses a message queue to orchestrate an arbitrary number of workers.
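
As a quick illustration of the Dynamic Integration point above, DAG files are ordinary Python, so tasks can be generated in a loop. This is only a sketch; the table names and the print-based callable are made up for the example.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

with DAG(
    dag_id="dynamic_tasks_example",    # placeholder DAG ID
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    # One task is created per table name; the list is illustrative.
    for table in ["orders", "customers", "products"]:
        PythonOperator(
            task_id=f"process_{table}",
            python_callable=lambda table=table: print(f"processing {table}"),
        )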
Seamlessly Migrate Data to Redshift with Hevo

Key Features of Hevo for Redshift Migration:

  1. No-Code Setup: Effortlessly connect and migrate data from various sources to Redshift without writing any code.
  2. Automated Data Transformation: Automatically transform and enrich your data, making it analysis-ready for Redshift.
  3. Real-Time Data Integration: Ensure real-time data synchronization and minimize latency during the migration process.

What is Amazon Redshift?


Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. It enables fast query performance using columnar storage technology and parallel processing to handle large volumes of data efficiently.

Key Features of Amazon Redshift

  • Scalable Performance: Redshift scales to handle petabytes of data with high performance, using massively parallel processing (MPP) and columnar storage.
  • Managed Service: It is a fully managed service that automates administrative tasks like backups, patching, and monitoring, reducing operational overhead.
  • SQL-Based Queries: Redshift supports standard SQL queries and integrates with popular BI tools and SQL clients, making it accessible for users familiar with SQL.
  • Cost-Effective Pricing: Offers pay-as-you-go pricing with options for reserved instances to optimize costs based on your usage needs.
  • Data Integration: Easily integrates with AWS services like S3, DynamoDB, and data lakes, as well as other ETL tools, facilitating data ingestion and management.

What are Amazon AWS Operators in Airflow?

  • S3ToRedshiftOperator: Facilitates loading data from an Amazon S3 bucket into an Amazon Redshift table, automating data ingestion tasks.
  • RedshiftToS3Operator: Allows you to export data from an Amazon Redshift table to an Amazon S3 bucket, supporting data extraction and backup (a minimal sketch is shown after this list).
  • RedshiftSQLOperator: Executes SQL queries directly on Amazon Redshift, enabling you to run SQL commands and manage your Redshift cluster from within Airflow.
  • RedshiftDataOperator: Interacts with the Redshift Data API to run queries on Amazon Redshift. It is useful for integrating with applications and running dynamic queries.
  • RedshiftCreateClusterOperator: Creates a new Amazon Redshift cluster, which is useful for provisioning clusters as part of your Airflow workflows and managing the cluster lifecycle.

For further information on the Apache Airflow provider integration with Amazon AWS, you can visit here.
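
The S3ToRedshiftOperator and RedshiftSQLOperator are covered in detail later in this article. The RedshiftToS3Operator works in the opposite direction; the minimal sketch below is not from the provider's example DAGs, the bucket, key prefix, and table names are placeholders, and the import path reflects the amazon provider package layout assumed throughout this article.

from airflow.providers.amazon.aws.transfers.redshift_to_s3 import RedshiftToS3Operator

# Define inside a DAG, as in the examples later in this article.
task_unload_to_s3 = RedshiftToS3Operator(
    task_id='unload_fruit_to_s3',
    schema='PUBLIC',
    table='fruit',               # source Redshift table (placeholder)
    s3_bucket='test-bucket',     # destination S3 bucket (placeholder)
    s3_key='exports/fruit',      # destination key prefix (placeholder)
    unload_options=['CSV'],      # options appended to the Redshift UNLOAD command
)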


What are the Airflow Redshift Operators?

The different Airflow Redshift Operators are as follows:

1) Redshift Cluster Management Operators

The Airflow Redshift Operators which come under the category of Redshift Cluster Management operators are as follows:

A) Resume a Redshift Cluster

The RedshiftResumeClusterOperator is the Airflow Redshift Operator that can be used to resume a 'paused' AWS Redshift cluster. This Operator leverages the AWS CLI resume-cluster API.
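
A minimal usage sketch is given below; the cluster identifier is a placeholder, and the import path is based on the redshift_cluster module of the Amazon provider.

from airflow.providers.amazon.aws.operators.redshift_cluster import RedshiftResumeClusterOperator

task_resume_cluster = RedshiftResumeClusterOperator(
    task_id='resume_redshift_cluster',
    cluster_identifier='redshift-cluster-1',  # placeholder cluster identifier
)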

B) Pause a Redshift Cluster

The RedshiftPauseClusterOperator is the Airflow Redshift Operator that can be used to pause an 'available' AWS Redshift cluster. This Operator leverages the AWS CLI pause-cluster API.
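
A matching sketch for pausing a cluster is shown below, under the same assumptions as the resume example above.

from airflow.providers.amazon.aws.operators.redshift_cluster import RedshiftPauseClusterOperator

task_pause_cluster = RedshiftPauseClusterOperator(
    task_id='pause_redshift_cluster',
    cluster_identifier='redshift-cluster-1',  # placeholder cluster identifier
)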

2) RedshiftSQLOperator

The RedshiftSQLOperator is used to execute statements against an Amazon Redshift cluster. Under the hood, this Airflow Redshift Operator works with RedshiftSQLHook to connect to Amazon Redshift.
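
Connection details are resolved through an Airflow connection; by default the operator looks up the connection ID redshift_default. The snippet below is a sketch of pointing it at a differently named connection; 'my_redshift_conn' is a placeholder.

from airflow.providers.amazon.aws.operators.redshift_sql import RedshiftSQLOperator

task_run_query = RedshiftSQLOperator(
    task_id='run_query',
    redshift_conn_id='my_redshift_conn',  # placeholder Airflow connection ID; defaults to 'redshift_default'
    sql='SELECT 1;',
)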

A) Example

example_redshift.py

This example showcases the RedshiftSQLOperator in action.

I) Purpose

This is a simple example DAG for executing statements against an Amazon Redshift cluster using the RedshiftSQLOperator.

II) Create a table

A table named “fruit” is created in the code below.

Source file: airflow/providers/amazon/aws/example_dags/example_redshift.py

    setup__task_create_table = RedshiftSQLOperator(
        task_id='setup__create_table',
        sql="""
            CREATE TABLE IF NOT EXISTS fruit (
            fruit_id INTEGER,
            name VARCHAR NOT NULL,
            color VARCHAR NOT NULL
            );
        """,
    )
III) Insert data into a table

A few sample rows are inserted into the “fruit” table in the code below.

Source File: airflow/providers/amazon/aws/example_dags/example_redshift.py

    task_insert_data = RedshiftSQLOperator(
        task_id='task_insert_data',
        sql=[
            "INSERT INTO fruit VALUES ( 1, 'Banana', 'Yellow');",
            "INSERT INTO fruit VALUES ( 2, 'Apple', 'Red');",
            "INSERT INTO fruit VALUES ( 3, 'Lemon', 'Yellow');",
            "INSERT INTO fruit VALUES ( 4, 'Grape', 'Purple');",
            "INSERT INTO fruit VALUES ( 5, 'Pear', 'Green');",
            "INSERT INTO fruit VALUES ( 6, 'Strawberry', 'Red');",
        ],
    )
IV) Fetching records from a table

A new table named “more_fruit” is created from the existing “fruit” table.

Source File: airflow/providers/amazon/aws/example_dags/example_redshift.py

    task_get_all_table_data = RedshiftSQLOperator(
        task_id='task_get_all_table_data', sql="CREATE TABLE more_fruit AS SELECT * FROM fruit;"
    )
V) Passing Parameters into RedshiftSQLOperator

The params attribute of RedshiftSQLOperator allows you to pass values into SQL statements dynamically; they are rendered through Jinja templating, as in the {{ params.color }} reference below.

Source File: airflow/providers/amazon/aws/example_dags/example_redshift.py

    task_get_with_filter = RedshiftSQLOperator(
        task_id='task_get_with_filter',
        sql="CREATE TABLE filtered_fruit AS SELECT * FROM fruit WHERE color = '{{ params.color }}';",
        params={'color': 'Red'},
    )

B) The complete RedshiftSQLOperator DAG

The overall DAG with everything combined is given below.

Source File: airflow/providers/amazon/aws/example_dags/example_redshift.py

from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.redshift_sql import RedshiftSQLOperator

with DAG(
    dag_id="redshift",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
    tags=['example'],
) as dag:
    setup__task_create_table = RedshiftSQLOperator(
        task_id='setup__create_table',
        sql="""
            CREATE TABLE IF NOT EXISTS fruit (
            fruit_id INTEGER,
            name VARCHAR NOT NULL,
            color VARCHAR NOT NULL
            );
        """,
    )
    task_insert_data = RedshiftSQLOperator(
        task_id='task_insert_data',
        sql=[
            "INSERT INTO fruit VALUES ( 1, 'Banana', 'Yellow');",
            "INSERT INTO fruit VALUES ( 2, 'Apple', 'Red');",
            "INSERT INTO fruit VALUES ( 3, 'Lemon', 'Yellow');",
            "INSERT INTO fruit VALUES ( 4, 'Grape', 'Purple');",
            "INSERT INTO fruit VALUES ( 5, 'Pear', 'Green');",
            "INSERT INTO fruit VALUES ( 6, 'Strawberry', 'Red');",
        ],
    )
    task_get_all_table_data = RedshiftSQLOperator(
        task_id='task_get_all_table_data', sql="CREATE TABLE more_fruit AS SELECT * FROM fruit;"
    )
    task_get_with_filter = RedshiftSQLOperator(
        task_id='task_get_with_filter',
        sql="CREATE TABLE filtered_fruit AS SELECT * FROM fruit WHERE color = '{{ params.color }}';",
        params={'color': 'Red'},
    )

    setup__task_create_table >> task_insert_data >> task_get_all_table_data >> task_get_with_filter

3) S3ToRedshiftOperator

The S3ToRedshiftOperator transfers data from an S3 bucket to a Redshift table.

A) Example

example_s3_to_redshift.py

This example DAG showcases the S3ToRedshiftOperator in action.

I) Purpose

This is a simple example DAG that uses the S3ToRedshiftOperator to copy data from an S3 bucket into a Redshift table.

II) Environment variables

This example relies on the following variables, which can be passed via OS environment variables.

Source file: airflow/providers/amazon/aws/example_dags/example_s3_to_redshift.py

from os import getenv

S3_BUCKET = getenv("S3_BUCKET", "test-bucket")
S3_KEY = getenv("S3_KEY", "key")
REDSHIFT_TABLE = getenv("REDSHIFT_TABLE", "test_table")

At a minimum, you need to set S3_BUCKET.

III) Copy S3 key into Redshift table

The code below copies the S3 key s3://{S3_BUCKET}/{S3_KEY}/{REDSHIFT_TABLE} into the Redshift table PUBLIC.{REDSHIFT_TABLE}.

Source file: airflow/providers/amazon/aws/example_dags/example_s3_to_redshift.py

    task_transfer_s3_to_redshift = S3ToRedshiftOperator(
        s3_bucket=S3_BUCKET,
        s3_key=S3_KEY,
        schema="PUBLIC",
        table=REDSHIFT_TABLE,
        copy_options=['csv'],
        task_id='transfer_s3_to_redshift',
    )

For further information about the S3ToRedshiftOperator, you can visit here.


Conclusion

In this article, you have learned about Airflow Redshift Operators. This article also provided information on Apache Airflow, Amazon Redshift, their key features, Amazon AWS Operators in Airflow, and the different Airflow Redshift Operators in detail.

For further information on Airflow ETL, Airflow Databricks Integration, and Airflow REST API, you can visit the following links. Hevo Data, a No-code Data Pipeline, provides you with a consistent and reliable solution to manage data transfer between a variety of sources and a wide variety of destinations with a few clicks.

FAQ on Airflow Redshift Operator

1. What is a Redshift ODBC driver?

The Redshift ODBC driver is a software component that allows applications to connect to Amazon Redshift from systems using the Open Database Connectivity (ODBC) standard. It enables data access and manipulation within Redshift using ODBC-compatible tools and applications, facilitating integration with various BI and analytics platforms.

2. What is Redshift used for?

Amazon Redshift is primarily used for:

  • Data Warehousing: Storing and analyzing large volumes of structured data in a scalable, high-performance environment.
  • Business Intelligence (BI): Running complex queries and generating reports and dashboards with business intelligence tools.
  • Big Data Analytics: Performing large-scale data analysis with fast query performance and integrating big data tools and data lakes.

3. What is the tilde operator in Redshift?

The tilde (~) operator in Amazon Redshift is used as a bitwise NOT operator. It performs a bitwise negation on integer values, flipping each bit in the binary number representation. This operator is useful for manipulating and querying binary data.
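
As a small illustration (a sketch using the RedshiftSQLOperator shown earlier in this article; the task ID is a placeholder), flipping the bits of 5 (binary 101) yields -6 in two's-complement representation:

task_bitwise_not = RedshiftSQLOperator(
    task_id='bitwise_not_example',
    sql='SELECT ~5;',  # bitwise NOT of 5 returns -6
)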

Want to take Hevo for a spin? Sign up for a 14-day free trial and experience the feature-rich Hevo suite firsthand. Also, check out our unbeatable pricing to choose the best plan for your organization.

Share your experience of understanding Apache Airflow Redshift Operators in the comment section below! We would love to hear your thoughts.

Talha
Software Developer, Hevo Data

Talha is a Software Developer with over eight years of experience in the field. He is currently driving advancements in data integration at Hevo Data, where he has been instrumental in shaping a cutting-edge data integration platform for the past four years. Prior to this, he spent 4 years at Flipkart, where he played a key role in projects related to their data integration capabilities. Talha loves to explain complex information related to data engineering to his peers through writing. He has written many blogs related to data integration, data management aspects, and key challenges data practitioners face.