Webhooks enable real-time notification of events between applications, and PostgreSQL is a trusted relational database. Combining the two lets your system update instantly when events occur. However, you need a reliable ETL pipeline to connect them.

This post will show you how to easily ETL Webhooks data into PostgreSQL in just minutes using Hevo. With Hevo, you can establish a Webhooks PostgreSQL connection and keep your data up-to-date without building and managing complex ETL scripts.

What is a Webhook?



A webhook is an HTTP request that is triggered by an event in a source system and delivered to a destination system, usually with a data payload. Webhooks are automated: they are sent out whenever the triggering event occurs in the originating system.

When an event occurs, one system (the source) can “talk” (HTTP request) to another system (the destination) and communicate information (request payload) about the event.
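As a concrete (hypothetical) example, a source system announcing a new user might POST a small JSON payload to the destination. The sketch below builds such a request with Python's standard library; the endpoint URL and field names are placeholders, not part of any real API:

```python
import json
import urllib.request

# Hypothetical event payload a source system might deliver via webhook
event = {
    "event": "user.created",
    "data": {"id": 42, "email": "arya@example.com"},
}
body = json.dumps(event).encode("utf-8")

# Build (but do not send) the HTTP request; the URL is a placeholder
request = urllib.request.Request(
    "https://destination.example.com/webhooks",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(request.get_method(), request.full_url)
```

Delivering the event is then just `urllib.request.urlopen(request)` against a real endpoint.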

What is PostgreSQL?



PostgreSQL is a relational database management system. It stores data points in rows, with columns representing the various data properties. It distinguishes itself through its emphasis on integrations and extensibility: PostgreSQL is scalable, interacts with many different technologies, and adheres to numerous database standards.

How to Establish a Webhooks PostgreSQL Connection?

In this section of the tutorial, we'll establish the Webhooks PostgreSQL connection using Hevo Data's No-code Data Pipeline. Let's see how.

Step 1: Configuring Webhook as a Source

  • After successfully logging in to your Hevo account, select PIPELINES (by default, PIPELINES is selected in Hevo's Asset Palette).
  • From the list of Sources, select Webhook. (In case Webhook is not displayed, click "View All" and search for Webhook. Once the Webhook logo is visible, select it to continue.)
  • A page named "Configure your Webhook Source" will appear. Now, indicate the JSON path to the root of the Event name, as well as the root of the fields in your payload.
  • Click on “CONTINUE.”
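To make those JSON path fields concrete: suppose (hypothetically) your source sends payloads shaped like the one below. The Event name root would then be `event` and the fields root would be `properties`; the payload itself is made up for illustration:

```python
import json

# Hypothetical payload arriving at the Hevo webhook URL
payload = json.loads("""
{
  "event": "order.created",
  "properties": {"order_id": 1001, "amount": 25.5}
}
""")

print(payload["event"])                   # -> order.created (Event name root)
print(payload["properties"]["order_id"])  # -> 1001 (fields root)
```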

Step 2: Specifying PostgreSQL Connection Settings

Specify the following connection settings for PostgreSQL:

  • Pipeline Name: A unique name for your pipeline is recommended.
  • Database Host: Some examples of PostgreSQL hosts:
    • Amazon RDS PostgreSQL: postgresql-rds-1.xxxxx.rds.amazonaws.com
    • Azure PostgreSQL: postgres.database.azure.com
    • Generic PostgreSQL: 10.123.10.001 or postgresql.westeros.inc
    • Google Cloud PostgreSQL: 35.220.150.0
  • Database Port: The default value is 5432.
  • Database User: A read-only user that has permission to read the tables in your database.
  • Database Password: The password of the read-only user.
  • Select Ingestion Mode: See Ingestion Modes to learn more.
  • Connection settings:
    • Connecting through SSH: Enabling this option allows you to connect to Hevo via an SSH tunnel rather than directly connecting your PostgreSQL database server to Hevo. This adds an extra layer of protection to your database by not exposing your PostgreSQL configuration to the public.
    • Use SSL: Enable this to use an SSL-encrypted connection. You must also enable this if you are using a Heroku PostgreSQL database. To enable it, specify the following:
      • CA File: This is the file that contains the SSL server certificate authority (CA).
      • Client Certificate: The public key certificate file for the client.
      • Client Key: The file containing the client’s private key.
  • Advanced Settings:
    • Load Historical Data: This option is only available for Pipelines in Logical Replication mode. If it is enabled, the complete table data is retrieved during the Pipeline’s initial run; if it is disabled, Hevo loads only data written to your database after the Pipeline was created.
    • Merge Tables: Available for Pipelines in Logical Replication mode. When you choose this option, Hevo combines tables with the same name from separate databases while loading data into the warehouse and populates a Database Name column with each record. If this option is turned off, the database name is prefixed to each table name. See How Does the Merge Tables Feature Work? for more information.
    • Add New Tables to the Pipeline: This applies to all Ingestion modes except Custom SQL.

To proceed with the Webhooks PostgreSQL connection, click "Test & Continue."

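Before clicking "Test & Continue," you can sanity-check that the Database Host and Port you entered are reachable from your network. This small stdlib sketch tests only TCP reachability, not credentials; the hostname below is a placeholder from the examples table, so substitute your own:

```python
import socket

def reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port can be opened."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

# Placeholder host; replace with your actual Database Host and Port
print(reachable("postgresql-rds-1.xxxxx.rds.amazonaws.com", 5432))
```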

Step 3: Setting up Webhook

Copy the Webhook URL generated by Hevo and paste it into the application from which you wish to send events to Hevo.

Step 4: A Few Final Settings

Optionally, as part of the final settings, you can configure Transformations to cleanse or enrich the Source data. You may also use the Schema Mapper to inspect and update the Source-to-Destination field mapping.
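As a purely illustrative sketch of the kind of cleanup a Transformation might perform (this is plain Python, not Hevo's Transformation API, and the field names are made up), consider normalizing an email field before it reaches PostgreSQL:

```python
def clean_event(event: dict) -> dict:
    """Illustrative cleanup: trim whitespace and lowercase an email field."""
    cleaned = dict(event)  # avoid mutating the incoming record
    email = cleaned.get("email")
    if isinstance(email, str):
        cleaned["email"] = email.strip().lower()
    return cleaned

print(clean_event({"event": "user.created", "email": "  Arya@Example.COM "}))
# -> {'event': 'user.created', 'email': 'arya@example.com'}
```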

Your first pipeline with the Webhooks PostgreSQL connection has been set up, and data ingestion has begun.

DB Webhooks

DB Webhooks is a Postgres add-on that triggers webhooks in response to inserted, updated, or removed records. It uses database triggers to communicate with a Go program over low-latency WebSocket messages. The application then sends a JSON payload containing predefined values from the database record to the configured webhook(s).

How It Works

  • Data is modified in a Postgres table (INSERT, UPDATE, DELETE).
  • A Postgres trigger notifies the DB Webhooks web server via a WebSocket message.
  • DB Webhooks formats and filters the data, then sends it to the defined webhooks.
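To see exactly what a destination receives, you can point DB Webhooks at a small local HTTP listener. This stdlib-only sketch (the port is an arbitrary choice) prints each JSON payload it is sent and replies with HTTP 200:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    """Minimal receiver: print the JSON body of each POST and reply 200."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print("received:", payload)
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request console logging

# To run: HTTPServer(("127.0.0.1", 8080), WebhookHandler).serve_forever()
```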

Run DB Webhooks locally

You can run DB Webhooks locally with Docker.

git clone --depth 1 https://github.com/tableflowhq/db-webhooks.git

cd db-webhooks

docker-compose up -d

Then open http://localhost:3000 to access DB Webhooks.

Run DB Webhooks on AWS (EC2)

Option 1 (one-line install)

sudo yum update -y && \
sudo yum install -y docker && \
sudo service docker start && \
sudo usermod -a -G docker $USER && \
sudo curl -L "https://github.com/docker/compose/releases/download/v2.16.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose && \
sudo mv /usr/local/bin/docker-compose /usr/bin/docker-compose && \
sudo chmod +x /usr/bin/docker-compose && \
mkdir db-webhooks && cd db-webhooks && \
wget https://raw.githubusercontent.com/tableflowhq/db-webhooks/main/{.env,docker-compose.yml,.dockerignore,frontend.env} && \
sg docker -c 'docker-compose up -d'

Option 2 (guided install)

  1. To install Docker, run the following commands in your SSH session on the instance terminal:
sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker $USER
logout # Close the SSH session so the docker group membership takes effect and Docker need not run as root
  2. To install docker-compose, run the following commands in your SSH session on the instance terminal:
sudo curl -L "https://github.com/docker/compose/releases/download/v2.16.0/docker-compose-$(uname -s)-$(uname -m)"  -o /usr/local/bin/docker-compose
sudo mv /usr/local/bin/docker-compose /usr/bin/docker-compose
sudo chmod +x /usr/bin/docker-compose
docker-compose version
  3. Install and run DB Webhooks:
mkdir db-webhooks && cd db-webhooks
wget https://raw.githubusercontent.com/tableflowhq/db-webhooks/main/{.env,docker-compose.yml,.dockerignore,frontend.env}
docker-compose up -d

Importance of Webhooks PostgreSQL Integration

A Webhooks PostgreSQL connection can be advantageous. It helps you store information almost immediately, and that information can later be used for analysis to gain a much-needed business advantage. With a Webhooks PostgreSQL connection, you save not just the information in tables and columns but also indexes and data types, a valuable attribute for Webhooks PostgreSQL ETL needs. This combination benefits you in the following ways:

  • Flexibility in choosing data types: PostgreSQL supports data types such as documents, primitives, geometry, and structures, making it highly reliable when big transitions such as database migrations take place.
  • PostgreSQL’s data integrity really comes in handy: PostgreSQL ensures data integrity by enforcing constraints on the data you insert. You can forget about invalid or orphaned records when you use PostgreSQL.
  • High ETL pipeline performance with little to no data latency: Recent PostgreSQL releases have expanded the feature list, with updates focused especially on boosting and optimizing performance, so there is little to worry about in that respect.
  • Internationalization & Text Search: PostgreSQL supports international character sets for internationalization and text search. It also supports full-text search to expedite the search process and incorporates case-insensitive and accent-insensitive collations.


In this blog post, we learned how to build and establish a Webhooks PostgreSQL connection via Hevo’s no-code data pipeline.

The established connection will let your Webhooks data flow to PostgreSQL error-free and without delays. Further, if you want to know in detail how to create a Hevo data pipeline, either of these two documentation links can help a great deal:

  1. PostgreSQL as a Destination
  2. Creating a WebHook Pipeline
Yash Arora
Content Manager, Hevo Data

Yash is a Content Marketing professional with over three years of experience in data-driven marketing campaigns. He has expertise in strategic thinking, integrated marketing, and customer acquisition. Through comprehensive marketing communications and innovative digital strategies, he has driven growth for startups and established brands.
