Even though Salesforce provides an analytics suite along with its offerings, most organizations need to combine their Salesforce customer data with data from various internal and external sources for decision making. The most practical way to do this is to import Salesforce data into a data warehouse or database.

Integrating Salesforce with PostgreSQL gives you an effective way to store, manage, and analyze your customer data alongside the rest of your data. In this post, we will look at the steps involved in loading data from Salesforce to PostgreSQL.

Methods to Connect Salesforce to PostgreSQL

The sections below cover two methods you can use to set up a connection from Salesforce to PostgreSQL.

Reliably integrate data with Hevo’s Fully Automated No Code Data Pipeline

Given how often API endpoints and schemas change, creating and managing these pipelines by hand can be a soul-sucking exercise. Hevo’s no-code data pipeline platform lets you connect over 150 sources in a matter of minutes and deliver data in near real-time to your warehouse. It also has built-in transformation capabilities and an intuitive UI.

All of this, combined with transparent pricing and 24×7 support, makes us the most loved data pipeline software in terms of user reviews. Take our 14-day free trial to experience a better way to manage data pipelines.

Get started for Free with Hevo!

Method 1: Using Hevo Data to Connect Salesforce to PostgreSQL


An easier way to accomplish the same result is to use a no-code data pipeline platform like Hevo Data, which can set up the sync in a couple of clicks.

Hevo does all the heavy lifting and hides the data migration complexities to securely and reliably deliver data from Salesforce into your PostgreSQL database in real-time and for free. By providing analysis-ready data in PostgreSQL, Hevo helps you stop worrying about your data and start uncovering insights in real time.

Sign up here for a 14-day Free Trial!

With Hevo, you could move data from Salesforce to PostgreSQL in just 2 steps:

  • Step 1: Connect Hevo to Salesforce by entering the Pipeline Name.
  • Step 2: Load data from Salesforce to PostgreSQL by providing your PostgreSQL database credentials such as Database Host, Port, Username, Password, Schema, and Database Name, along with the destination name.

Check out what makes Hevo amazing:

  • Data Transformation: It provides a simple interface to perfect, modify, and enrich the data you want to transfer. 
  • Schema Management: Hevo automatically detects the schema of the incoming data and maps it to the destination schema.
  • Incremental Data Load: Hevo allows the transfer of only the data that has been modified, in real time. This ensures efficient utilization of bandwidth on both ends.

Method 2: Using Custom ETL Scripts to Connect Salesforce to PostgreSQL

The best way to interact with Salesforce is to use the different APIs provided by Salesforce itself. It also provides some utilities to deal with the data. You can use these APIs for Salesforce PostgreSQL integration. The following section attempts to provide an overview of these APIs and utilities. 

  • Salesforce REST APIs: Salesforce REST APIs are a set of web services that let you insert, update, delete, and query Salesforce objects. They are the preferred method for building custom applications on Salesforce in the mobile or web ecosystem (a minimal query sketch follows this list).
  • Salesforce SOAP APIs: SOAP APIs establish formal contracts of API behaviour through the use of WSDL. Salesforce SOAP APIs are typically used when there is a requirement for stateful APIs or strict transactional reliability. They are also sometimes used when the organization’s legacy applications mandate the SOAP protocol.
  • Salesforce BULK APIs: Salesforce BULK APIs are optimized for dealing with a large amount of data ranging up to GBs. These APIs can run in a batch mode and can work asynchronously. They provide facilities for checking the status of batch runs and retrieving the results as large text files. BULK APIs can insert, update, delete or query records just like the other two types of APIs.
    • Salesforce Bulk APIs have two versions – Bulk API and Bulk API 2.0. Bulk API 2.0 is a new and improved version of Bulk API, which includes its own interface. Both are still available to use, each with its own set of limits and features.
    • Both Salesforce Bulk APIs are based on REST principles and are optimized for working with large sets of data. Any data operation that includes more than 2,000 records is a good fit for Bulk API 2.0, which prepares, executes, and manages an asynchronous workflow using the Bulk framework. Jobs with fewer than 2,000 records should use “bulkified” synchronous calls in REST (for example, Composite) or SOAP.
    • Using Bulk API 2.0 or Bulk API requires basic knowledge of software development, web services, and the Salesforce user interface. Because both Bulk APIs are asynchronous, Salesforce doesn’t guarantee a service level agreement.
  • Salesforce Data Loader: Data Loader is a Salesforce utility that can be installed on a desktop computer. It can query data and export it to CSV files; internally, this is accomplished using the Bulk APIs.
  • Salesforce Sandbox: A Salesforce Sandbox is a test environment that provides a way to copy and create metadata from your production instance. It is a separate environment where you can test with data (Salesforce records), including Accounts, Contacts, and Leads. It is a best practice to configure and test in a sandbox prior to making any live changes; this ensures that any development does not disrupt your live environment and is rolled out only after it has been thoroughly tested. The data available to you depends on the sandbox type. There are multiple types, and each has different considerations; some sandbox types support or require a sandbox template.
  • Salesforce Production: The production environment in Salesforce stores the live data actively used for running your business. Many of the production environments in use today belong to Salesforce CRM customers who purchased the Group, Professional, Enterprise, or Unlimited editions. The production environment serves as the primary workspace for live business operations.
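
To make the REST option above concrete, here is a minimal Python sketch of a query against the REST query endpoint using the requests library. The instance URL, access token, and field list are placeholders; in practice you would obtain the token through your org’s OAuth flow.

import requests

# Placeholders: replace with your org's instance URL and a valid OAuth access token.
INSTANCE_URL = "https://yourInstance.salesforce.com"
ACCESS_TOKEN = "00D...your_access_token"

# The REST query endpoint accepts a SOQL statement in the `q` parameter.
resp = requests.get(
    f"{INSTANCE_URL}/services/data/v47.0/query",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"q": "SELECT Id, Name FROM Contact LIMIT 10"},
)
resp.raise_for_status()

# The response JSON contains a `records` list with the matching objects.
for record in resp.json()["records"]:
    print(record["Id"], record["Name"])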

Here are the steps involved in using Custom ETL Scripts to connect Salesforce to PostgreSQL:

Step 1: Log In to Salesforce

Log in to Salesforce using the SOAP API and get the session id. To log in, first create an XML file named login.txt in the format below.

<?xml version="1.0" encoding="utf-8" ?>
<env:Envelope xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
  <env:Body>
    <n1:login xmlns:n1="urn:partner.soap.sforce.com">
      <n1:username>username</n1:username>
      <n1:password>password</n1:password>
    </n1:login>
  </env:Body>
</env:Envelope>

Execute the below command to log in:

curl https://login.salesforce.com/services/Soap/u/47.0 -H "Content-Type: text/xml; charset=UTF-8" -H "SOAPAction: login" -d @login.txt

From the result XML, note the session id. We will need the session id for the later requests. 
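
If you prefer to script this step rather than run cURL by hand, here is a minimal Python sketch of the same SOAP login call. It posts the login.txt envelope shown above and pulls the session id and server URL out of the response with simple regular expressions; error handling is intentionally minimal.

import re
import requests

# Post the SOAP login envelope (login.txt) created above.
with open("login.txt", "rb") as f:
    envelope = f.read()

resp = requests.post(
    "https://login.salesforce.com/services/Soap/u/47.0",
    headers={"Content-Type": "text/xml; charset=UTF-8", "SOAPAction": "login"},
    data=envelope,
)
resp.raise_for_status()

# The response XML contains <sessionId> and <serverUrl> elements.
session_id = re.search(r"<sessionId>(.*?)</sessionId>", resp.text).group(1)
server_url = re.search(r"<serverUrl>(.*?)</serverUrl>", resp.text).group(1)
print("Session id:", session_id)
print("Server URL:", server_url)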

Step 2: Create a Bulk API Job

Create a Bulk API job. Creating a job requires a text file with details of the object to be accessed and the operation to perform. Create a file named job.txt using the template below.

<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <operation>query</operation>
  <object>Contact</object>
  <contentType>CSV</contentType>
</jobInfo>

We are attempting to pull data from the Contact object in this exercise, which is why the operation is set to query.

Execute the below command after creating job.txt:

curl https://instance.salesforce.com/services/async/47.0/job -H "X-SFDC-Session: sessionId" -H "Content-Type: application/xml; charset=UTF-8" -d @job.txt

From the result, note the job id. This job id will be used to form the URL for subsequent requests. Please note that the instance part of the URL will change according to the user’s Salesforce organization.
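
The job-creation call can be scripted as well. Below is a minimal Python sketch that posts job.txt and extracts the job id from the XML response; the session id and instance hostname are placeholders assumed to come from the login step.

import re
import requests

session_id = "your_session_id"                     # from the login step
instance_url = "https://instance.salesforce.com"   # your org's instance URL

with open("job.txt", "rb") as f:
    job_xml = f.read()

resp = requests.post(
    f"{instance_url}/services/async/47.0/job",
    headers={"X-SFDC-Session": session_id,
             "Content-Type": "application/xml; charset=UTF-8"},
    data=job_xml,
)
resp.raise_for_status()

# The Bulk API returns a jobInfo document; its <id> element is the job id.
job_id = re.search(r"<id>(.*?)</id>", resp.text).group(1)
print("Job id:", job_id)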

Step 3: Create a SOQL Query to Pull Data

Create the SOQL query to pull the data and submit it as a batch to the job using cURL, as shown below.

curl https://instance.salesforce.com/services/async/47.0/job/jobId/batch -H "X-SFDC-Session: sessionId" -H "Content-Type: text/csv; charset=UTF-8" -d "SELECT Name, Description FROM Contact"

From the response, note the batch id; it is needed to check the batch status and retrieve the results.
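
A minimal Python sketch of the same batch submission, assuming session_id and job_id were captured in the previous steps:

import re
import requests

session_id = "your_session_id"                     # from the login step
job_id = "your_job_id"                             # from the job-creation step
instance_url = "https://instance.salesforce.com"

# For a query job, the batch body is the SOQL statement itself, sent as text/csv.
soql = "SELECT Name, Description FROM Contact"

resp = requests.post(
    f"{instance_url}/services/async/47.0/job/{job_id}/batch",
    headers={"X-SFDC-Session": session_id,
             "Content-Type": "text/csv; charset=UTF-8"},
    data=soql,
)
resp.raise_for_status()

# The batchInfo response contains the batch id in its <id> element.
batch_id = re.search(r"<id>(.*?)</id>", resp.text).group(1)
print("Batch id:", batch_id)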

Step 4: Close the Bulk API Job

The next step is to close the job. This requires a text file with details of the job status change. Create it as below with the name close_job.txt.

<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <state>Closed</state>
</jobInfo>

Use the file with the below command.

curl https://instance.salesforce.com/services/async/47.0/job/jobId -H "X-SFDC-Session: sessionId" -H "Content-Type: application/xml; charset=UTF-8" -d @close_job.txt

Step 5: Access the Batch Results

Call the batch results endpoint to fetch the result id of the batch.

curl -H "X-SFDC-Session: sessionId" https://instance.Salesforce.com/services/async/47.0/job/jobId/batch/batchId/result

Step 6: Retrieve Results

Retrieve the actual results using the result id that was fetched from the above step.

curl -H "X-SFDC-Session: sessionId" 
https://instance.Salesforce.com/services/async/47.0/job/jobId/batch/batchId/result/resultId

The output will be a CSV file with the required rows of data. Save it as Contacts.csv in your local filesystem. 
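
Here is a minimal Python sketch of the same download, saving the output as Contacts.csv:

import requests

session_id = "your_session_id"
job_id = "your_job_id"
batch_id = "your_batch_id"
result_id = "your_result_id"
instance_url = "https://instance.salesforce.com"

resp = requests.get(
    f"{instance_url}/services/async/47.0/job/{job_id}/batch/{batch_id}/result/{result_id}",
    headers={"X-SFDC-Session": session_id},
)
resp.raise_for_status()

# The response body is the CSV extract; keep it for the PostgreSQL load step.
with open("Contacts.csv", "wb") as f:
    f.write(resp.content)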

Step 7: Load Data to PostgreSQL

Load the data into PostgreSQL using the COPY command. Assuming the table is already created, this can be done by executing the command below.

COPY contacts(name, description)
FROM 'Contacts.csv' DELIMITER ',' CSV HEADER;
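
Note that COPY ... FROM 'file' reads the file from the database server’s filesystem and usually requires elevated privileges; from a client machine you can use psql’s \copy instead. If you are scripting the load, here is a minimal sketch using the psycopg2 driver; the connection parameters and table definition are placeholders for your environment.

import psycopg2

# Placeholder connection details; adjust for your environment.
conn = psycopg2.connect(
    host="localhost", port=5432, dbname="analytics",
    user="postgres", password="secret",
)

with conn, conn.cursor() as cur:
    # Assumes the target table already exists, e.g.:
    #   CREATE TABLE contacts (name text, description text);
    with open("Contacts.csv", "r") as f:
        cur.copy_expert(
            "COPY contacts(name, description) FROM STDIN WITH (FORMAT csv, HEADER true)",
            f,
        )

conn.close()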

An alternative to using the above sequence of API calls is to use the Data Loader utility to query the data and export it to CSV. But if you need to do this programmatically, the Data Loader utility will be of little help.

Limitations of using Custom ETL Scripts to Connect Salesforce to PostgreSQL

As evident from the above steps, the manual method involves a significant number of steps that can become overwhelming if you need to do this on a regular basis. You would also need to configure additional scripts if you want to bring in data in real time.

  • It is time-consuming and requires prior knowledge of coding, an understanding of the APIs, and manual configuration of data mapping.
  • This method is not suitable for bulk data movement, leading to slow performance, especially for large datasets.

Conclusion

This blog talks about the different methods you can use to set up a connection from Salesforce to PostgreSQL in a seamless fashion. If you want to learn more about PostgreSQL, read this article: Postgres to Snowflake.

Hevo is the only real-time ELT no-code data pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. Hevo handles everything from schema management to data flow monitoring and rids you of any maintenance overhead. In addition to Salesforce, you can bring data from over 150 different sources into PostgreSQL in real time, ensuring that all your data is available for analysis.

Visit our Website to Explore Hevo

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You can also have a look at the unbeatable pricing that will help you choose the right plan for your business needs.

What are your thoughts on the two approaches to move data from Salesforce to PostgreSQL? Let us know in the comments.

Former Director of Product Management, Hevo Data

Vivek Sinha has extensive experience in real-time analytics and cloud-native technologies. With a focus on Apache Pinot, he was a driving force in shaping innovation and defensible differentiators, including enhanced query processing, data mutability support, and cost-effective tiered storage solutions at Hevo. He also demonstrates a passion for exploring and implementing innovative trends within the dynamic data industry landscape.
