You’re trying to move data from your source into Redshift, but you can’t seem to find a tool that provides a native connector. So what do you do in that case?

REST APIs serve as the “middlemen” that allow you to move data from your source to Redshift. 

First, we’ll go over how you can use custom code to do this.

REST APIs don’t talk to each other without external help (because they aren’t standards or protocols in themselves). This makes integration cumbersome and is one of the primary hassles developers face while transferring data.

This is where a REST API connector comes in handy. 

In this article, we’ll also be covering how you can use Hevo’s REST API connector to set up REST API to Redshift Integration.

Redshift REST API Integration Methods

Method 1: Using Custom Code for Redshift REST API Integration

In this REST API Redshift connection method, we’ll use Python to extract data from the REST API in JSON format. The data can then be loaded into Redshift using the Redshift COPY command. 

Step 1: Using Python to Extract REST API Data in JSON Format

  • You can pull data from a REST API in Python with the following code snippet:
import requests

# Replace the placeholder with your REST API endpoint URL
api_response = requests.get('<REST API endpoint URL>')
api_response.raise_for_status()  # stop early if the request failed
  • In the snippet above, replace the placeholder passed to the ‘get’ method with the URL of your REST API endpoint. The response is stored in the api_response variable.
  • Next, we’ll save this response as a JSON file on the local machine:
filename = 'restapi_redshift_integration.json'  # matches the file name used in the COPY examples below
with open(filename, 'w') as f:
    f.write(api_response.text)

Now that we’ve saved the REST API response as a JSON file, we’ll move this data to Redshift using the COPY command.
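
Note that the COPY examples in the next step read the file from Amazon S3, so you’ll first need to upload it to a bucket. Here’s a minimal sketch using boto3, assuming the same mybucket bucket used in the COPY examples below:

import boto3

# Upload the local JSON file to the S3 bucket referenced by the COPY command
s3 = boto3.client('s3')
s3.upload_file(
    Filename='restapi_redshift_integration.json',  # local file saved in Step 1
    Bucket='mybucket',                              # illustrative bucket name
    Key='restapi_redshift_integration.json',        # object key used in the COPY examples
)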

Step 2: Using Redshift COPY Command to Connect REST API to Redshift

The Redshift COPY command is a flexible interface that you can use to load data into Redshift from a wide variety of sources.

While using the COPY command with JSON data, you can specify one of the following options:

  • ‘auto’: The COPY command automatically loads the fields from the JSON file.
  • ‘auto ignorecase’: COPY disregards the case of field names when loading the fields from the JSON file.
  • ‘s3://jsonpaths_file’: You can also use a JSONPaths file to parse the JSON source data. This is a text file that contains a single JSON object named “jsonpaths” paired with an array of JSONPath expressions. 

Let’s go over how you can use each of these options to load the JSON data we got from the REST API response to Redshift. For the examples that follow, we’ll assume our file is named restapi_redshift_integration.json.

Load Data Using the ‘auto’ Option

To load JSON data using the ‘auto’ option, the JSON data must consist of a set of objects whose key names match the table’s column names; the order of the keys doesn’t matter. You can run the COPY command for this option as follows:

copy category 
from 's3://mybucket/restapi_redshift_integration.json'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole' 
json 'auto';
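
For reference, with the ‘auto’ option each record in restapi_redshift_integration.json needs key names that match the target table’s columns. Assuming a category table with columns catid, catgroup, catname, and catdesc (the names and values here are purely illustrative), a record could look like this:

{"catid": 1, "catgroup": "Sports", "catname": "MLB", "catdesc": "Major League Baseball"}
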
Load Data Using the ‘auto ignorecase’ Option

With this option, the case of the key names doesn’t have to match the column names, and the order of the keys still doesn’t matter. To load JSON data using the ‘auto ignorecase’ option, you can use the following snippet:

copy category 
from 's3://mybucket/restapi_redshift_integration.json' 
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole' 
json 'auto ignorecase';

Load Data Using a JSONPaths File

It’s likely that your JSON data objects won’t correspond directly to column names. For such a scenario, you can use a JSONPaths file to map the JSON elements to respective columns. The order of the JSONPaths file expressions must correspond with the column order, even though the order of the JSON source data doesn’t matter.

For this example, let’s name the JSONPaths file that’ll map the source data to the table columns as integration_jsonpath.json. This is what the COPY command would look like:

copy category 
from 's3://mybucket/restapi_redshift_integration.json' 
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole' 
json 's3://mybucket/integration_jsonpath.json';
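
For completeness, here’s what integration_jsonpath.json might contain for the illustrative category table used above. Each JSONPath expression picks one element from the source data, and the expressions are listed in the same order as the table’s columns (the source key names here are hypothetical):

{
    "jsonpaths": [
        "$['category_id']",
        "$['category_group']",
        "$['category_name']",
        "$['category_description']"
    ]
}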

The Redshift COPY command is optimized for high-throughput bulk loads and can take advantage of parallel processing.
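
To take advantage of that parallelism, you can split a large extract into several files that share a common S3 prefix and point COPY at the prefix; Redshift then loads the files in parallel (the prefix below is illustrative):

copy category 
from 's3://mybucket/restapi_extracts/' 
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole' 
json 'auto';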

You can also move JSON data to Redshift through AWS Glue.

With that said, there are a few limitations to using the Redshift COPY command for REST API Redshift integration:

  • You can import JSON data directly into Redshift only when the target table structure is an exact match, which typically requires a lot of manual DDL work and careful analysis of the source data.
  • If you’re trying to insert rows one by one into Redshift, using the COPY command might not be the best solution.
  • You need to be mindful of the implicit data type conversions that COPY performs behind the scenes. If left unchecked, these could corrupt your data. In particular, the COPY command can lose precision when loading numeric values from a JSON-formatted file, so some of the numbers copied from the JSON file might not be rounded as expected. To prevent loss of numeric precision, you can do one of the following:
    • Use ROUNDEC to round numbers instead of truncating them.
    • Represent a number as a string by enclosing the value in double quotes.
    • Use character-delimited, CSV, or fixed-width text files instead of JSON or Avro files.
  • While using the COPY command, the maximum size of a single input row from any source shouldn’t exceed 4 MB.
  • The Redshift COPY command always appends data to the cluster. Since Redshift doesn’t enforce primary key constraints, this could lead to duplicate rows. For such cases, you can use a staging table, as sketched below.
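
Here’s a minimal sketch of that staging-table pattern, assuming the category table from the earlier examples with catid as its (informational) primary key; table and column names are illustrative. The idea is to COPY into a staging table, delete matching rows from the target, and insert the staged rows, all inside one transaction:

begin transaction;

-- stage the newly extracted data
create temp table category_staging (like category);

copy category_staging 
from 's3://mybucket/restapi_redshift_integration.json' 
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole' 
json 'auto';

-- remove rows that would otherwise be duplicated, then insert the staged rows
delete from category 
using category_staging 
where category.catid = category_staging.catid;

insert into category 
select * from category_staging;

drop table category_staging;

end transaction;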

Stop wasting valuable engineering time and resources manually replicating and transforming data from your sources. Creating custom connectors, transforming and processing data, tracking the flow, and fixing issues can be a huge hassle when you have multiple data sources. 

To skip over the hassle, you can…

Automate the Redshift REST API Integration Process using a No-Code Tool

Creating custom scripts for every data connector request is inefficient and costly. Furthermore, without data flow monitoring, pipeline issues and regular system failures make scaling this system nearly impossible.

You can streamline the REST API Redshift migration process by opting for an automated tool:

  • It allows your data engineers to focus on their engineering goals while letting business teams generate accurate reports quickly.
  • The user-friendly UI saves engineers time on data preparation.
  • No-code tools allow analysts of any experience level to produce complete reports for various business areas to make informed decisions. 
  • Business teams will have access to near real-time data without compromising accuracy.
  • Automated data replication tools let you gather all your analysis-ready data in one spot to measure success and explore new possibilities.

Let’s see how a cloud-based ELT tool like Hevo simplifies your workflow:

Step 1: Configure REST API as a Source

To set up REST API as a source in Hevo, you just need to keep the following things in mind:

  • You should be assigned the Team Collaborator, Team Administrator, or Pipeline Administrator role in Hevo to create the pipeline.
  • You need to have working knowledge of JSONPath and JSONPath expressions. 

Provide the REST API endpoint details on the Configure your REST API Source page.


Step 2: Configure Redshift as a Destination


And that’s it! For more information on configuring this pipeline, you can read the official documentation for configuring REST API as a Source and Redshift as a Destination.

Summary

You can execute REST API to Redshift integration in two ways: by using Python to extract REST API data in JSON format and load it with the COPY command, or through a third-party data replication tool like Hevo Data. Keep in mind that each method is a good fit for different scenarios.

Hevo Data allows you to replicate data in near real-time from 150+ sources to the destination of your choice including Snowflake, BigQuery, Redshift, Databricks, and Firebolt, without writing a single line of code. We’d suggest you use this data replication tool for real-time demands like tracking the sales funnel or monitoring your email campaigns. This’ll free up your engineering bandwidth, allowing you to focus on more productive tasks.


For the rare times things go wrong, Hevo Data ensures zero data loss. It also lets you monitor your workflows so that you can find the root cause of an issue and address it before it derails the entire pipeline. Add 24x7 customer support to the list, and you get a reliable tool that puts you at the wheel with greater visibility.

If you don’t want SaaS tools with unclear pricing that burn a hole in your pocket, opt for a tool with a simple, transparent Hevo Pricing model. Hevo Data has three usage-based pricing plans, starting with a free tier where you can ingest up to 1 million records.

Schedule a demo today to see if Hevo would be a good fit for you!

Content Marketing Manager, Hevo Data

Amit is a Content Marketing Manager at Hevo Data. He enjoys writing about SaaS products and modern data platforms, having authored over 200 articles on these subjects.
