In eCommerce, the customer journey is complicated. Customers use a variety of devices, including smartphones and computers, to browse numerous websites and review platforms and to research products before making a purchase. If you want to target a specific group of people, use marketing solutions like Outbrain, which leverage emails, SMS, search engine ads, and remarketing to efficiently reach potential buyers. You can learn more about product demand trends by looking at ad impressions, click-through rates, conversion rates, and search histories on marketing platforms.

In this article, you will learn how to integrate Outbrain to Redshift. You will also learn about Outbrain and Redshift and their key features.

What is Amazon Redshift?

Amazon Redshift is a fully managed, petabyte-scale Data Warehousing service hosted in the cloud, which means that administrative tasks like backup creation, security, and configuration are all automated. You can start with a few gigabytes of data and scale up to a petabyte or more. Amazon Redshift organizes data in clusters of nodes that can process queries in parallel, so data can be accessed quickly and easily. Each node can be accessed individually by users and applications.

A range of SQL-based clients, as well as a variety of Data Sources and Data Analytics tools, can be used with Amazon Redshift. It has a well-designed architecture that makes working with a number of Business Intelligence tools easier.

Key Features of Amazon Redshift

Here are some key features of Amazon Redshift:

  • Column-oriented Databases: Data can be organized into rows or columns in a database. A high percentage of OLTP databases are row-oriented; in other words, these systems are designed to handle a large number of small operations like DELETE, UPDATE, and so on. A column-oriented database like Amazon Redshift is the way to go when you need to quickly access enormous volumes of data. Amazon Redshift is designed primarily for OLAP workloads, with optimized SELECT performance.
  • Secure End-to-End Data Encryption: Data privacy and security requirements apply to all businesses and organizations, and encryption is one of the most critical parts of data protection. For data in transit, Amazon Redshift uses SSL encryption, and for data at rest, it uses hardware-accelerated AES-256 encryption. All data stored on disk, as well as any backup files, is encrypted. You won’t have to worry about key management because Amazon handles everything.
  • Massively Parallel Processing (MPP): MPP is a distributed design technique that uses a “divide and conquer” strategy across numerous processors to process huge data collections. A huge processing effort is divided into smaller tasks and dispersed among several compute nodes. The compute node processors work in parallel rather than sequentially to finish their calculations.
  • Cost-effective: Amazon Redshift is one of the most cost-effective Cloud Data Warehousing options, projected to cost about a tenth of a traditional on-premises warehouse. There are no hidden expenses; customers simply pay for the services they use. You can learn more about pricing on the Amazon Redshift official website.
  • Scalability: Amazon Redshift is simple to use and scales to meet your demands. You can modify the number or type of nodes in your Data Warehouse with a few clicks or a simple API call, scaling up or down as needed.

Why Integrate Outbrain to Redshift?

The most challenging aspect of Outbrain marketing campaigns for marketers is the money wasted on redundant ads. Ads for out-of-stock products, for example, waste a lot of budget. This challenge is solved by feeding Outbrain vital data from other platforms. A lack of relevant data is a significant reason why your Outbrain marketing campaigns aren’t generating more revenue.

All of this information cannot be sent to Outbrain in its native form. You need to collect relevant data and properly analyze it in a data warehouse before using the insights to run Outbrain marketing campaigns. Amazon Redshift aids in the extraction of useful information from enormous amounts of data. You can build a new cluster in a few minutes using AWS’s simple interface, and you won’t have to worry about managing infrastructure.

Explore These Methods to Connect Outbrain and Redshift

Outbrain is the world’s first and only discovery platform dedicated solely to anticipating situations and drawing data-driven links between interests and actions. Amazon Redshift, in turn, is a Data Warehouse known for ingesting data quickly and performing near real-time analysis. When integrated, moving data from Outbrain to Redshift can solve some of the biggest data problems for businesses. Here are two methods to achieve this:

Method 1: Outbrain to Redshift Integration Using Hevo

Hevo Data, an Automated Data Pipeline, provides you with a hassle-free solution to connect Outbrain to Redshift within minutes with an easy-to-use no-code interface. Hevo is fully managed and completely automates the process of not only loading data from Outbrain but also enriching the data and transforming it into an analysis-ready form without having to write a single line of code.


Method 2: Outbrain to Redshift Integration Manually

This method would be time-consuming and somewhat tedious to implement. Users will have to write custom code to enable two processes: streaming data from Outbrain and ingesting data into Amazon Redshift. This method is suitable for users with a technical background.

Both methods are explained below.

Outbrain to Redshift Integration Process

In this section, you will learn how to integrate Outbrain to Redshift. There are two ways to go about integrating Outbrain to Redshift. 

Method 1: Outbrain to Redshift Integration Using Hevo


Hevo provides Amazon Redshift as a Destination for loading/transferring data from any Source system, including Outbrain. You can refer to Hevo’s documentation for permissions, user authentication, and prerequisites for Amazon Redshift as a Destination.

Here are the steps you can take to connect data from Outbrain to Redshift:

Configure Outbrain as a Source

Configure Outbrain as the Source in your Pipeline by following the instructions below to perform Outbrain to Redshift Integration:

  • Step 1: In the Asset Palette, choose PIPELINES.
  • Step 2: In the Pipelines List View, click + CREATE.
  • Step 3: Select Outbrain on the Select Source Type page.
  • Step 4: Set the following in the Configure your Outbrain Source page:
  • Pipeline Name: A name for the pipeline that is unique and does not exceed 255 characters.
  • Username: The username included in the Outbrain access credentials.
  • Password: The password found in the Outbrain access credentials.
  • Historical Sync Duration: The duration for which historical data must be ingested.
  • Step 5: Click TEST & CONTINUE.
  • Step 6: Proceed to setting up the Destination and configuring the data ingestion.

Configure Amazon Redshift as a Destination

To set up Amazon Redshift as a destination in Hevo for Outbrain to Redshift Connection, follow these steps:

  • Step 1: The first step in Outbrain to Redshift Integration is to go to Asset Palette and select DESTINATIONS.
  • Step 2: In the Destinations List View, click + CREATE.
  • Step 3: Select Amazon Redshift from the Add Destination page.
  • Step 4: Set the following parameters on the Configure your Amazon Redshift Destination page for Outbrain to Redshift Integration:
  • Destination Name: Give your destination a unique name.
  • Database Cluster Identifier: The IP address or DNS of the Amazon Redshift host is used as the database cluster identifier.
  • Database Port: The port on which your Amazon Redshift server listens for connections. The default value is 5439.
  • Database User: A non-administrative user in the Redshift database.
  • Database Password: The password of the database user.
  • Database Name: The name of the Destination database into which the data will be loaded.
  • Database Schema: The name of the Destination database schema. The default is public.
  • Step 5: To test connectivity with the Amazon Redshift warehouse, click Test Connection.
  • Step 6: When the test is complete, select SAVE DESTINATION to complete Outbrain to Redshift Integration.

Method 2: Outbrain to Redshift Integration Manually

This method integrates Outbrain to Redshift manually:

Step 1: Get Data out of Outbrain

The first step in Outbrain to Redshift Integration is to get the data out of Outbrain. Outbrain’s RESTful Amplify API allows you to extract data about marketers, campaigns, performance, and more. With a call like GET /reports/marketers/[id]/content, you can request performance metrics such as impressions, clicks, click-through rate, and cost. You can use any of a dozen optional parameters to limit, filter, and sort the results.
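As a rough illustration, the request described above could be assembled like this in Python. The base URL and the OB-TOKEN-V1 authentication header follow Outbrain’s public Amplify API documentation, but treat the exact parameter names as assumptions to verify against the current docs; the marketer ID and token below are placeholders.

```python
# Sketch of building a GET /reports/marketers/[id]/content request
# against the Outbrain Amplify API. No HTTP call is made here; pass the
# returned pieces to, e.g., requests.get(url, params=params, headers=headers).

BASE_URL = "https://api.outbrain.com/amplify/v0.1"

def build_content_report_request(marketer_id, from_date, to_date, token, limit=50):
    """Return the URL, query parameters, and headers for a
    content-level performance report request."""
    url = f"{BASE_URL}/reports/marketers/{marketer_id}/content"
    params = {
        "from": from_date,   # e.g. "2017-11-01"
        "to": to_date,       # e.g. "2017-11-30"
        "limit": limit,      # page size; combine with an offset to paginate
    }
    headers = {"OB-TOKEN-V1": token}  # Amplify token-based authentication
    return url, params, headers

# Placeholder marketer ID and token for illustration only.
url, params, headers = build_content_report_request(
    "00a1b2", "2017-11-01", "2017-11-30", token="MY_TOKEN")
```

From here, additional query parameters can be layered onto `params` to filter and sort the results before the call is made.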

Step 2: Sample the Data

The following is an example of a JSON response from an API query for performance data:

    "results": [
                "id": "00f4b02153ee75f3c9dc4fc128ab041962",
                "text": "Yet another promoted link",
                "creationTime": "2017-11-26",
                "lastModified": "2017-11-26",
                "url": "",
                "status": "APPROVED",
                "enabled": true,
                "cachedImageUrl": "",
                "campaignId": "abf4b02153ee75f3cadc4fc128ab0419ab",
                "campaignName": "Boost 'ABC' Brand",
                "archived": false,
                "documentLanguage": "EN",
                "sectionName": "Economics",
                "impressions": 18479333,
                "clicks": 58659,
                "conversions": 12,
                "spend": 9187.16,
                "ecpc": 0.16,
                "ctr": 0.32,
                "conversionRate": 0.02,
                "cpa": 765.6
    "totalResults": 27830,
    "summary": {
        "impressions": 1177363701,
        "clicks": 2615150,
        "conversions": 2155,
        "spend": 455013.97,
        "ecpc": 0.17,
        "ctr": 0.22,
        "conversionRate": 0.08,
        "cpa": 211.14
    "totalFilteredResults": 1,
    "summaryFiltered": {
        "impressions": 18479333,
        "clicks": 58659,
        "conversions": 12,
        "spend": 9187.16,
        "ecpc": 0.16,
        "ctr": 0.32,
        "conversionRate": 0.02,
        "cpa": 765.6

Step 3: Prepare the Data

If you don’t already have a data structure in which to store the data you retrieve, you’ll need to construct a schema for your data tables. Then, for each value in the response, you must identify a predefined data type (INTEGER, DATETIME, etc.) and create a table that can receive it. The documentation for each endpoint should tell you which fields are available and which data types they correspond to.

Things are further complicated by the fact that the records fetched from the source may not always be “flat” – some of the objects may actually be lists. This means you’ll almost certainly need to create additional tables to account for the varying cardinality of each record.
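One common way to handle nested lists is to split each record into a flat parent row plus one child row per list element, keyed back to the parent. The sketch below illustrates the idea; the `dailyStats` field is hypothetical and not part of Outbrain’s actual response schema.

```python
# Minimal sketch: flatten one nested API record into a parent row and
# child rows destined for separate relational tables.

def flatten_record(record, list_field, parent_key="id"):
    """Split a record into a flat parent dict (without the nested list)
    and one child dict per list element, each tagged with the parent's key."""
    parent = {k: v for k, v in record.items() if k != list_field}
    children = [
        dict(child, **{f"parent_{parent_key}": record[parent_key]})
        for child in record.get(list_field, [])
    ]
    return parent, children

record = {
    "id": "abc123",
    "campaignName": "Boost 'ABC' Brand",
    "dailyStats": [  # hypothetical nested list for illustration
        {"date": "2017-11-25", "clicks": 100},
        {"date": "2017-11-26", "clicks": 120},
    ],
}
parent_row, child_rows = flatten_record(record, "dailyStats")
# parent_row no longer contains "dailyStats"; each child row carries
# a "parent_id" column pointing back to the parent record.
```

The parent rows and child rows then land in two separate tables joined on the parent key.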

Step 4: Load data into Redshift

Once you know all of the columns you want to populate, use the CREATE TABLE statement in the Redshift data warehouse to set up a table that can receive the data.

After that, you must move your data. Building INSERT statements to add data to your Redshift table row by row may appear to be the simplest option, but Redshift isn’t designed for inserting data one row at a time, so that would be a mistake. If you have a large amount of data to load, it’s better to transfer it to Amazon S3 first and then load it into Redshift with the COPY command.
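The two SQL statements involved might look like the following sketch, with a table definition matching the fields in the sample response and a COPY that bulk-loads JSON staged in S3. The table name, column types, bucket path, and IAM role ARN are all placeholders to adapt to your own setup.

```python
# Sketch of the Redshift DDL and COPY statements for the sample data.

CREATE_TABLE_SQL = """
CREATE TABLE outbrain_content (
    id               VARCHAR(64),
    campaign_id      VARCHAR(64),
    campaign_name    VARCHAR(255),
    creation_time    DATE,
    impressions      BIGINT,
    clicks           BIGINT,
    conversions      INTEGER,
    spend            DECIMAL(12,2),
    ecpc             DECIMAL(8,2),
    ctr              DECIMAL(6,2),
    conversion_rate  DECIMAL(6,2),
    cpa              DECIMAL(10,2)
);
"""

def build_copy_sql(table, s3_path, iam_role):
    """Build a Redshift COPY statement that bulk-loads JSON from S3."""
    return (
        f"COPY {table} FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS JSON 'auto';"
    )

copy_sql = build_copy_sql(
    "outbrain_content",
    "s3://my-bucket/outbrain/content.json",          # placeholder bucket
    "arn:aws:iam::123456789012:role/RedshiftCopyRole",  # placeholder role
)
```

`FORMAT AS JSON 'auto'` tells Redshift to map JSON keys to column names automatically; a JSONPaths file can be supplied instead when the mapping is not one-to-one.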

Step 5: Keeping Outbrain Data Up to Date

At this point, you’ve successfully created a script or written a program to gather the data you want and move it into your data warehouse. But how are you going to get new or updated data into the system? Duplicating all of your data every time your records are updated is not a good idea; that would be a terribly slow and resource-intensive procedure.

Instead, identify key fields that your script can use to save its progress through the data and return to as it searches for updated records. Timestamp fields such as updated_at or created_at are well suited for this. Once you’ve added this functionality, you can set up your script as a cron job or a continuous loop to pick up fresh data as it appears in Outbrain.
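The checkpointing idea can be sketched in a few lines: keep the newest modification timestamp seen so far, and on each run keep only records modified after it. The `lastModified` field mirrors the sample response above; the scheduling mechanism (cron or loop) and the persistence of the checkpoint are left out.

```python
# Minimal sketch of incremental syncing with a saved checkpoint.

def sync_new_records(records, checkpoint):
    """Return records modified after `checkpoint` (an ISO date string),
    plus the new checkpoint value to persist for the next run."""
    fresh = [r for r in records if r["lastModified"] > checkpoint]
    new_checkpoint = max(
        (r["lastModified"] for r in fresh), default=checkpoint)
    return fresh, new_checkpoint

records = [
    {"id": "a", "lastModified": "2017-11-25"},
    {"id": "b", "lastModified": "2017-11-26"},
]
fresh, checkpoint = sync_new_records(records, "2017-11-25")
# fresh contains only record "b"; checkpoint advances to "2017-11-26"
```

In a real pipeline the checkpoint would be stored somewhere durable (a file, a small table) so each scheduled run resumes where the last one stopped.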

And, as with any code, you must maintain it once you’ve written it. You may need to change the script if Outbrain changes its API, or if the API sends a field with a data type your code doesn’t recognize. And if your users require slightly different information, you will undoubtedly have to modify it.


In this article, you learned two methods of integrating Outbrain to Redshift: the first using Hevo, and the second transferring data manually. You also learned why you need to integrate Outbrain to Redshift.

However, as a Developer, extracting complex data from a diverse set of data sources such as Databases, CRMs, Project Management Tools, Streaming Services, and Marketing Platforms like Outbrain can seem quite challenging. If you are from a non-technical background or are new to data warehousing and analytics, Hevo Data can help!

Visit our Website to Explore Hevo

Hevo Data automates your data transfer process, allowing you to focus on other aspects of your business such as Analytics and Customer Management. The platform lets you transfer data from 100+ data sources to Cloud-based Data Warehouses like Snowflake, Google BigQuery, and Amazon Redshift, providing a hassle-free experience and making your work life much easier.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand.

You can also have a look at our unbeatable pricing that will help you choose the right plan for your business needs!

Sharon Rithika
Content Writer, Hevo Data

Sharon is a data science enthusiast with a hands-on approach to data integration and infrastructure. She leverages her technical background in computer science and her experience as a Marketing Content Analyst at Hevo Data to create informative content that bridges the gap between technical concepts and practical applications. Sharon's passion lies in using data to solve real-world problems and empower others with data literacy.
