Are you seeking quick and easy ways to connect an intercom webhook to Redshift? You’ve come to the right place.

Webhooks transmit information from one application to another in real time, the moment an event happens. Each event acts as a trigger that pushes a notification straight to its destination, which is much faster than traditional techniques such as polling.

What is a Webhook?


A webhook is an API concept that is gaining traction. Webhooks are becoming increasingly useful as more and more of what we do on the web can be described in terms of events. They are a handy, low-cost way to build reactions to those events.

A webhook (also known as a web callback or HTTP push API) is a way for an application to deliver real-time data to other applications. Because the data is pushed as it is generated, you receive it immediately; unlike traditional APIs, you don’t have to poll frequently to get near-real-time results. This makes webhooks significantly more efficient for both the provider and the consumer. Their sole disadvantage is the effort of setting them up initially.

Webhooks are sometimes known as “Reverse APIs,” since they give you an API specification and require you to build an endpoint for the webhook to call. The webhook sends an HTTP request (usually a POST) to your app, and your app is responsible for interpreting it.
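To make this concrete, here is a minimal sketch of such a receiver, written in Python and assuming Flask is installed; the route name, port, and payload fields are illustrative choices, not something Intercom or Hevo prescribes.

# A minimal webhook receiver sketch (assumes Flask is installed).
# The endpoint name and port are arbitrary; the sending application would be
# configured to POST event notifications to this URL.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/intercom-webhook", methods=["POST"])
def intercom_webhook():
    event = request.get_json(force=True, silent=True) or {}
    # Intercom webhook notifications typically carry the event name in a
    # "topic" field; verify this against your own payloads.
    print("Received event:", event.get("topic"), flush=True)
    # Respond quickly with a 2xx so the sender does not keep retrying.
    return jsonify({"status": "received"}), 200

if __name__ == "__main__":
    app.run(port=8080)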

Solve your data replication problems with Hevo’s reliable, no-code, automated pipelines with 150+ connectors.
Get your free trial right away!

What is Amazon Redshift?

Redshift Logo, Intercom webhook to redshift | Hevo Data
Image Source

Amazon Redshift is AWS’s Data Warehousing solution for business analytics in the AWS cloud. Customers can query petabytes of structured and semi-structured data in Redshift using standard SQL.

AWS users can start building a Redshift Data Warehouse for as little as $0.25 per hour and scale it up to meet their business needs. Redshift can be deployed as a single 160GB node or as a multi-node cluster in which a ‘Leader’ node manages client connections and receives queries in front of up to 128 Compute Nodes that store data and run queries.

Redshift employs powerful compression technology, compressing individual database columns to achieve considerable savings compared to typical relational database storage. As a result, data stored in Redshift requires less storage space than it would in a conventional row-oriented database.

Redshift makes use of Massively Parallel Processing (MPP) technology, which dynamically distributes data and query workloads across all compute nodes, allowing Redshift to run complex queries over massive datasets quickly and efficiently.

What are the Key Features of Amazon Redshift?

  • Faster Performance: Amazon Redshift delivers high throughput and sub-second response times by leveraging machine learning, a parallel architecture, and compute-optimized hardware. With Amazon Redshift, less time is spent waiting and more time is spent generating insights from the data.
  • Easy Setup, Deployment, and Management: Amazon Redshift is a simple, user-friendly data warehouse that lets users spin up a new data warehouse in minutes. Many of the most common administrative tasks for managing, monitoring, and scaling the warehouse can be automated, freeing users from the complexities of on-premise data warehouse management.
  • Scalable: Amazon Redshift can scale queries from gigabytes to exabytes of data across the user’s database and Amazon S3 data lake, allowing users to quickly analyze any amount of data in S3 without loading it or running an Extract-Transform-Load (ETL) process first. The Redshift cluster can be resized with a few console clicks or a single API request, so users can scale it up and down as their needs change.
  • Query the Data Lake: Users can extend their data warehouse to their data lake, gaining insight into data that would otherwise be locked away in independent silos. With Redshift Spectrum, open data formats stored in Amazon S3 can be queried directly, without additional data migration, so data in the warehouse and the data lake can be analyzed through a single service.
What Makes Hevo’s ETL Process Best-In-Class

Providing a high-quality ETL solution can be a difficult task if you have a large volume of data. Hevo’s automated, No-code platform empowers you with everything you need for a smooth data replication experience.

Check out what makes Hevo amazing:

  • Fully Managed: Hevo requires no management and maintenance as it is a fully automated platform.
  • Data Transformation: Hevo provides a simple interface to perfect, modify, and enrich the data you want to transfer.
  • Faster Insight Generation: Hevo offers near real-time data replication so you have access to real-time insight generation and faster decision making. 
  • Schema Management: Hevo can automatically detect the schema of the incoming data and map it to the destination schema.
  • Scalable Infrastructure: Hevo has in-built integrations for 100+ sources (with 40+ free sources) that can help you scale your data infrastructure as required.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
Sign up here for a 14-day free trial!

Methods to Integrate Intercom Webhook to Redshift:

There are 2 easy ways to integrate Intercom Webhook to Redshift.

Method 1 – Integrate Intercom Webhook to Redshift: Using the Intercom REST API

This method to integrate Intercom Webhook to Redshift requires coding knowledge and an understanding of REST APIs.

1. Use the Intercom REST API to get data.

Pulling data from Intercom is commonly done to bring together all of your users along with all of the conversations you have had with each of them. You can then import this data into your data warehouse and enrich your analytics with those interactions. To do so, you first need to gather all of your users, which you can do with cURL as follows:

curl https://api.intercom.io/users \
  -u pi3243fa:da39a3ee5e6b4b0d3255bfef95601890afd80709 \
  -H 'Accept: application/json'

A typical response looks like this:

{
 "type": "user.list",
 "total_count": 105,
 "users": [
   {
     "type": "user",
     "id": "530370b477ad7120001d",
      ...
    },
    ...
  ],
 "pages": {
   "next": "https://api.intercom.io/users?per_page=50&page=2",
   "page": 1,
   "per_page": 50,
   "total_pages": 3
 }
}
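If you would rather script this than chain cURL calls, here is a minimal sketch in Python using the requests library; the credentials are the same placeholders as in the cURL example above, and the pagination simply follows the pages.next link returned by the API.

# Sketch: page through Intercom users with the same basic-auth credentials as
# the cURL example above (replace them with your own app ID and API key).
import requests

APP_ID = "pi3243fa"                                   # placeholder
API_KEY = "da39a3ee5e6b4b0d3255bfef95601890afd80709"  # placeholder

def fetch_all_users():
    users = []
    url = "https://api.intercom.io/users"
    while url:
        resp = requests.get(
            url,
            auth=(APP_ID, API_KEY),
            headers={"Accept": "application/json"},
        )
        resp.raise_for_status()
        payload = resp.json()
        users.extend(payload.get("users", []))
        # "pages.next" holds the URL of the next page; it is absent or None
        # on the last page, which ends the loop.
        url = (payload.get("pages") or {}).get("next")
    return users

if __name__ == "__main__":
    print(f"Fetched {len(fetch_all_users())} users")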

We can now list the conversations that have taken place on Intercom (here filtered to open conversations assigned to a particular admin) by running the following command:

curl 'https://api.intercom.io/conversations?type=admin&admin_id=25&open=true' \
  -u pi3243fa:da39a3ee5e6b4b0d3255bfef95601890afd80709 \
  -H 'Accept: application/json'

and a typical response looks like this:

{
 "type": "conversation.list",
 "conversations": [
   {
     "type": "conversation",
     "id": "147",
     "created_at": 1400850973,
     "updated_at": 1400857494,
     "user": {
       "type": "user",
       "id": "536e564f316c83104c000020"
     },
     "assignee": {
       "type": "admin",
       "id": "25"
     },
     "conversation_message": {
       "type": "conversation_message",
       "subject": "",
       "body": "<p>Hi Alice,</p>nn<p>We noticed you using our Product, do you have any questions?</p> n<p>- Jane</p>",
       "author": {
         "type": "admin",
         "id": "25"
       },
       "attachments": [
         {
           "name": "signature",
           "url": "http://someurl.com/signature.jpg"
         }
       ]
     }
   }
 ]
}

As can be seen, each conversation contains a user object with an id, allowing us to correlate the conversations with the users we collected earlier. Of course, to do the same in our data warehouse, we must translate the structures above into the repository’s data model, taking into account both the schema and the data types.

Then, by following the steps below, we can build a pipeline that extracts the data, transforms it into the model of our repository, and loads it. Keep in mind that if the Intercom API changes, the pipeline will break and we will have to deal with it.
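As a rough illustration of that transformation step, here is a hedged sketch that flattens the conversation JSON shown above into rows suitable for a relational warehouse; the chosen columns are only an example.

# Sketch: flatten the Intercom conversation JSON above into relational rows.
# The column selection here is illustrative, not a fixed schema.
def conversations_to_rows(conversation_list):
    rows = []
    for conv in conversation_list.get("conversations", []):
        message = conv.get("conversation_message") or {}
        rows.append({
            "conversation_id": conv.get("id"),
            "user_id": (conv.get("user") or {}).get("id"),
            "assignee_id": (conv.get("assignee") or {}).get("id"),
            "created_at": conv.get("created_at"),
            "updated_at": conv.get("updated_at"),
            "subject": message.get("subject"),
            "body": message.get("body"),
        })
    return rows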

2. Get Your Intercom Data Ready for Intercom Webhook to Redshift

Amazon Redshift is based on industry-standard SQL and includes features for managing very large datasets and performing high-performance analysis. To load your data into it, you need to follow its data model, which is a standard relational database model: the data you collect from your source should be organized into tables and columns.

Each table maps to the resource you want to store, and its columns represent the resource’s attributes. Each attribute should also conform to the data types that Redshift currently supports, which are:

  • SMALLINT
  • INTEGER
  • BIGINT
  • DECIMAL
  • REAL
  • DOUBLE PRECISION
  • BOOLEAN
  • CHAR
  • VARCHAR
  • DATE
  • TIMESTAMP

Because your data is likely to arrive in a format like JSON, which supports a much more restricted set of data types, you must be careful about what data you feed into Redshift and ensure that each field is mapped to one of the data types Redshift supports.

Designing a schema for Redshift and mapping your source data to it is a process you should take seriously, since it affects both your cluster’s performance and the questions you can answer. It is always a good idea to keep in mind the best practices for designing a Redshift database that Amazon has published.
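As an example, tables for the users and conversations pulled above might look something like the sketch below. It uses psycopg2 over Redshift’s PostgreSQL-compatible interface purely for illustration (any SQL client works), and the table names, column sizes, keys, and connection details are assumptions to adapt to your own workload.

# Sketch: create Redshift tables for the Intercom users and conversations
# pulled above. Names, sizes, and keys are illustrative only.
import psycopg2

USERS_DDL = """
CREATE TABLE IF NOT EXISTS intercom_users (
    user_id     VARCHAR(64) NOT NULL,
    email       VARCHAR(256),
    created_at  TIMESTAMP,
    PRIMARY KEY (user_id)
);
"""

CONVERSATIONS_DDL = """
CREATE TABLE IF NOT EXISTS intercom_conversations (
    conversation_id VARCHAR(64) NOT NULL,
    user_id         VARCHAR(64),
    assignee_id     VARCHAR(64),
    created_at      TIMESTAMP,
    updated_at      TIMESTAMP,
    subject         VARCHAR(512),
    body            VARCHAR(65535)
)
DISTKEY (user_id)
SORTKEY (created_at);
"""

conn = psycopg2.connect(
    host="your-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="dev", user="awsuser", password="your-password",
)
with conn, conn.cursor() as cur:
    cur.execute(USERS_DDL)
    cur.execute(CONVERSATIONS_DDL)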

3. Load data from Intercom Webhook to Redshift

To get your data from Intercom Webhook to Redshift, you need to place it in a source that Redshift can load from. Amazon S3, Amazon DynamoDB, and Amazon Kinesis Firehose are the three main data sources supported, with Firehose being the most recent addition as a mechanism for getting data into Redshift.

To upload your data to S3, you’ll need to use the AWS REST API; as we’ve seen before, APIs are critical both for extracting data and for loading it into our data warehouse. The first step is to create a bucket, which you can do by issuing an HTTP PUT to the Amazon S3 REST API endpoints.

This can be done with a tool like cURL or Postman. Alternatively, you can use Amazon’s SDK for your preferred language.

After you’ve created your bucket, you can start sending data to Amazon S3 using the same AWS REST API, this time using the Object operations endpoints. As with buckets, you can either call the HTTP endpoints directly or use the library of your choice.
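If you prefer an SDK over raw HTTP calls, a minimal boto3 sketch of those two steps might look like this; the bucket name, region, and object key are placeholders.

# Sketch: create an S3 bucket and upload extracted Intercom data as a
# JSON-lines object using boto3. Bucket, region, and key are placeholders.
import json
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

bucket = "my-intercom-staging-bucket"  # placeholder; skip creation if it exists
s3.create_bucket(Bucket=bucket)

# Suppose `rows` is the flattened conversation data from the earlier sketch.
rows = [{"conversation_id": "147", "user_id": "536e564f316c83104c000020"}]
body = "\n".join(json.dumps(r) for r in rows).encode("utf-8")

s3.put_object(Bucket=bucket, Key="intercom/conversations.jsonl", Body=body)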

DynamoDB itself imports data from S3, which adds a step between S3 and Amazon Redshift that you can skip unless you need DynamoDB for other reasons.

Amazon Kinesis Firehose is the newest way to feed data into Redshift, and it offers a real-time streaming approach to data ingestion. The steps for adding data to Redshift through Kinesis Firehose are:

  • Create a delivery stream
  • Add data to the stream

Whenever you add new data to the stream, Kinesis takes care of pushing it to S3 or Redshift. Going via S3 is redundant here if your goal is simply to get the data into Redshift. As in the previous cases, these two steps can be completed using either the REST API or your preferred library; the difference is that here you’ll use a Kinesis Agent to put your data into the stream.
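Here is a hedged sketch of the second step using boto3; the delivery stream name is a placeholder, and the stream itself is assumed to already exist with Redshift configured as its destination.

# Sketch: push a record into an existing Kinesis Data Firehose delivery stream
# that delivers into Redshift. The stream name is a placeholder; creating the
# stream is done separately (console, CLI, or API).
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

record = {"conversation_id": "147", "user_id": "536e564f316c83104c000020"}
firehose.put_record(
    DeliveryStreamName="intercom-to-redshift",  # placeholder
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)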

There are two ways to put data into Amazon Redshift. The first is the INSERT command: you connect to your Amazon Redshift instance from your client over a JDBC or ODBC connection and then run an INSERT command, for example:

insert into category_stage values
(12, 'Concerts', 'Comedy', 'All stand-up comedy performances');

The INSERT command works the same way as in any other SQL database; for additional details, see the INSERT examples page in the Amazon Redshift documentation.
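If you are scripting the load rather than using a JDBC/ODBC client, the same INSERT can be run from Python over Redshift’s PostgreSQL-compatible interface, for example with psycopg2; the connection details below are placeholders.

# Sketch: run the INSERT shown above from Python using psycopg2.
# Redshift speaks the PostgreSQL wire protocol, so a standard connection works.
import psycopg2

conn = psycopg2.connect(
    host="your-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="dev", user="awsuser", password="your-password",
)
with conn, conn.cursor() as cur:
    cur.execute(
        "INSERT INTO category_stage VALUES (%s, %s, %s, %s)",
        (12, "Concerts", "Comedy", "All stand-up comedy performances"),
    )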

Redshift is not designed for row-by-row INSERT operations; bulk uploads using the COPY command are the most efficient way to import data. You can use COPY to load data from flat files on S3 or from an Amazon DynamoDB table. With COPY, Redshift can read multiple files at once and spread the work across the cluster nodes, so the load is processed in parallel.

COPY is a very versatile command that can be used in a variety of ways depending on your needs. The command to perform a COPY on Amazon S3 is as simple as this:

copy listing
from 's3://mybucket/data/listing/'
credentials 'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>';

Check out the COPY examples page in the Amazon Redshift documentation for further examples of how to use the COPY command. As with INSERT, you first connect to your Amazon Redshift instance over a JDBC or ODBC connection and then run the commands you need, using the SQL Reference in the Amazon Redshift documentation as a guide.
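As a sketch, the same kind of connection can run the COPY. The example below assumes the JSON-lines object staged in S3 earlier and an IAM role attached to the cluster; the role ARN, bucket, key, and connection details are placeholders, and the IAM_ROLE and JSON 'auto' options are simply one common way to authorize and parse the load.

# Sketch: bulk-load the staged S3 object into Redshift with COPY.
# The IAM role ARN, bucket, and key are placeholders; the cluster must have
# the role attached with read access to the bucket.
import psycopg2

COPY_SQL = """
COPY intercom_conversations
FROM 's3://my-intercom-staging-bucket/intercom/conversations.jsonl'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftCopyRole'
FORMAT AS JSON 'auto';
"""

conn = psycopg2.connect(
    host="your-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="dev", user="awsuser", password="your-password",
)
with conn, conn.cursor() as cur:
    cur.execute(COPY_SQL)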

Method 2 – Integrate Intercom Webhook to Redshift: Using Hevo Data

Hevo Data, a No-code Data Pipeline can help you seamlessly integrate data from Intercom Webhook to Redshift. It is a reliable and secure service that doesn’t require you to write any code!  

Prerequisites

To follow these steps, you need an active Hevo account, your Amazon Redshift connection settings, and access to the application (here, Intercom) from which events will be pushed.

Step 1: Configure Source

  • Go to your Hevo account and sign in. PIPELINES is chosen by default in the Asset Palette.
  • From the list of Sources, choose Webhook. The list is based on the Sources you choose when you first set up your Hevo account.

If Webhook isn’t shown in the list, click View All, search for it, and then select it on the Select Source Type page.

  • On the Configure your Webhook Source page, set the JSON path to the root of the Event name and the root of the fields in your payload (a small example is sketched after this list). Writing JSONPath Expressions is a good place to start.

    Note that the fields may differ based on the Destination type. Refer to the Sources documentation.
  • Click CONTINUE.
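To make the JSONPath fields a little more concrete, here is a hedged sample based on the general shape of an Intercom notification payload; the field names are assumptions, so verify them against the payload your Intercom workspace actually sends.

# Hypothetical Intercom notification payload and the JSONPath expressions you
# might enter in Hevo. Field names are assumptions; check your own payload.
sample_payload = {
    "type": "notification_event",
    "topic": "conversation.user.created",  # the event name
    "data": {"item": {"type": "conversation", "id": "147"}},
}

event_name_json_path = "$.topic"      # root of the Event name
fields_json_path = "$.data.item"      # root of the event fields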

Step 2: Select and Configure Destination

  • Select your Destination on the Select Destination Type page. In this case, we’ll use Amazon Redshift.
  • On the Configure your Amazon Redshift Destination page, enter the Amazon Redshift settings to set up your Destination.

Note that the fields may differ based on the type of Destination you are configuring. Refer to the Destinations documentation.

  • Click SAVE & CONTINUE.
  • If you wish to change the name of the Destination table or partition, type a prefix in the Destination Table Prefix box; otherwise, leave it blank.
  • Click CONTINUE.

A Webhook URL is generated along with a sample payload. Refer to the Webhook documentation.

Step 3: Set up Webhook

  • Copy the Webhook URL generated in Step 2 above and paste it into the application from which you want to push events to Hevo. You can use the example snippets to test the Webhook URL’s connection to Hevo (a minimal test request is sketched below).
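For example, a quick way to test the connection is to POST a sample payload to the generated URL; the URL and payload below are placeholders, shown here as a Python sketch using the requests library.

# Sketch: send a test event to the Hevo-generated Webhook URL.
# Replace the placeholder URL with the one generated in Step 2.
import requests

WEBHOOK_URL = "https://<your-hevo-webhook-url>"  # placeholder

test_event = {
    "topic": "conversation.user.created",
    "data": {"item": {"type": "conversation", "id": "147"}},
}

resp = requests.post(WEBHOOK_URL, json=test_event, timeout=10)
print(resp.status_code, resp.text)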

Step 4: Make final adjustments to data from Intercom Webhook to Redshift

  • As part of the final settings, you can use Transformations to clean up the Source data or enrich it in other ways. You can also use the Schema Mapper to view and update the Intercom Webhook to Redshift field mapping.

The data ingestion process begins once you establish your first pipeline to transfer data from Intercom Webhook to Redshift.

Intercom Webhook to Redshift: Conclusion

In this article, you got an overview of Webhooks and Amazon Redshift. Following that, you learned how to integrate Intercom Webhook to Redshift using two easy methods.

Extracting complex data from a diverse set of data sources to carry out an insightful analysis can be a challenging task and this is where Hevo saves the day! Hevo Data, a No-code Data Pipeline can seamlessly transfer data from a vast sea of 100+ sources to a Data Warehouse or a Destination of your choice. It is a reliable, completely automated, and secure service that doesn’t require you to write any code!  

Visit our Website to Explore Hevo

Want to take Hevo for a ride?

Sign Up for a 14-day free trial and simplify your Data Integration process. Do check out the pricing details to understand which plan fulfills all your business needs.

Please share your thoughts on the Intercom Webhook to Redshift Connection in the comments section below!

Akshaan Sehgal
Former Marketing Content Analyst, Hevo Data

Akshaan is a data science enthusiast who loves to embrace challenges associated with maintaining and exploiting growing data stores. He has a flair for writing in-depth articles on data science where he incorporates his experience in hands-on training and guided participation in effective data management tasks.
