PostgreSQL is a free, open-source database management system that can handle both relational (single-valued) and non-relational (multi-valued, hierarchical) data types. MySQL is an open-source Relational Database Management System developed by Oracle. It is one of the most widely used database solutions, with a slew of useful features.

In some circumstances, organizations prefer PostgreSQL to MySQL, and transferring data helps them to examine and deal with data more quickly. This article talks about the different methods you can use to convert PostgreSQL to MySQL seamlessly. It also gives a brief introduction to PostgreSQL and MySQL Server before diving into the Postgres to MySQL data transfer methods.

Methods to Connect PostgreSQL to MySQL

The three methods are given below:

Method 1: Connect PostgreSQL to MySQL Server using Hevo

Hevo helps you directly transfer data from various sources such as PostgreSQL Database to Data Warehouses, Business Intelligence tools, or a destination of your choice such as MySQL in a completely hassle-free & automated manner.

Hevo is the only real-time ELT No-code Data Pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integration with 150+ Data Sources (40+ free sources), we help you not only export data from sources & load data to the destinations but also transform & enrich your data, & make it analysis-ready.

Sign up here for a 14-Day Free Trial!

The following steps can be implemented to connect PostgreSQL to MySQL using Hevo:

  • Configure Source: Connect Hevo Data with PostgreSQL by providing a unique name for your Pipeline, along with details about your Data Source such as Database Host, Database Port, Database User, Database Password, and Database Name. You can also choose the Data Ingestion method while configuring PostgreSQL as a Source.
  • Configure Destination: Establish a connection to MySQL by providing information about its credentials such as Destination Name, Database Host, Database Port, Database User, Database Password, Database Name and Schema Name. 

Here are more reasons to try Hevo:

  • Data Transformation: It provides a simple interface to perfect, modify, and enrich the data you want to transfer.
  • Schema Management: Hevo can automatically detect the schema of the incoming data and map it to the destination schema.
  • Incremental Data Load: Hevo allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
  • Live Monitoring: Advanced monitoring gives you a one-stop view to watch all the activities that occur within Data Pipeline.
Learn more about Hevo

Method 2: Manually Connect PostgreSQL to MySQL using CSV Files

Let’s understand this manual method of connecting PostgreSQL to MySQL with the help of an example:

Step 1: Extracting Table

The first step in migrating data from PostgreSQL to MySQL is to extract the table definition from PostgreSQL. The most common option is pg_dump; let’s look at how it works with the first table (actor):

-bash-4.2$ pg_dump -st actor dvdrental
-- PostgreSQL database dump

-- Dumped from database version 12.8
-- Dumped by pg_dump version 12.8

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

SET default_tablespace = '';

SET default_table_access_method = heap;

-- Name: actor; Type: TABLE; Schema: public; Owner: postgres

CREATE TABLE public.actor (
    actor_id integer DEFAULT nextval('public.actor_actor_id_seq'::regclass) NOT NULL,
    first_name character varying(45) NOT NULL,
    last_name character varying(45) NOT NULL,
    last_update timestamp without time zone DEFAULT now() NOT NULL
);


-- Name: actor actor_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres

ALTER TABLE ONLY public.actor
    ADD CONSTRAINT actor_pkey PRIMARY KEY (actor_id);

-- Name: idx_actor_last_name; Type: INDEX; Schema: public; Owner: postgres

CREATE INDEX idx_actor_last_name ON public.actor USING btree (last_name);

-- Name: actor last_updated; Type: TRIGGER; Schema: public; Owner: postgres


-- PostgreSQL database dump complete

Step 2: Table Definition

Only the CREATE TABLE, ALTER TABLE, and CREATE INDEX statements are significant to us when importing the data from that output (the indexes can eventually also be created later).

If you’re familiar with MySQL, you’ll see that the SQL syntax returned will not work there.

Let’s take a look at the definition of the table:

    actor_id integer DEFAULT nextval('public.actor_actor_id_seq'::regclass) NOT NULL,
    first_name character varying(45) NOT NULL,
    last_name character varying(45) NOT NULL,
    last_update timestamp without time zone DEFAULT now() NOT NULL

As you can see:

  • There is a schema prefix, public.
  • The primary key isn’t defined at this point (it is added later as a constraint).
  • actor_id takes its default value from a user-defined sequence, public.actor_actor_id_seq.
  • character varying is PostgreSQL’s spelling of VARCHAR.
  • The timestamp column uses PostgreSQL-specific syntax: timestamp without time zone.

This is a manual step: the statement must be rewritten to work in MySQL:

    actor_id integer auto_increment NOT NULL primary key,
    first_name varchar(45) NOT NULL,
    last_name varchar(45) NOT NULL,
    last_update timestamp DEFAULT now() NOT NULL
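
Putting the rewritten columns together, the complete MySQL DDL would look along these lines (a sketch; the target schema name is up to you):

```sql
CREATE TABLE actor (
    actor_id integer auto_increment NOT NULL primary key,
    first_name varchar(45) NOT NULL,
    last_name varchar(45) NOT NULL,
    last_update timestamp DEFAULT now() NOT NULL
);
```

Note that auto_increment replaces the sequence-based default, and the primary key is declared inline rather than as a separate constraint.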

Step 3: Adding Secondary Indexes

You can now add the secondary indexes to your MySQL Database Service instance using the newly rewritten statement:

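As a sketch (assuming the actor table has already been created in MySQL), the secondary index from the PostgreSQL dump can be recreated with:

```sql
-- MySQL equivalent of the btree index from the pg_dump output
CREATE INDEX idx_actor_last_name ON actor (last_name);
```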

You must carry out this operation for all tables in the dvdrental schema:

dvdrental=# \dt
             List of relations
 Schema |     Name      | Type  |  Owner   
--------+---------------+-------+----------
 public | actor         | table | postgres
 public | address       | table | postgres
 public | category      | table | postgres
 public | city          | table | postgres
 public | country       | table | postgres
 public | customer      | table | postgres
 public | film          | table | postgres
 public | film_actor    | table | postgres
 public | film_category | table | postgres
 public | inventory     | table | postgres
 public | language      | table | postgres
 public | payment       | table | postgres
 public | rental        | table | postgres
 public | staff         | table | postgres
 public | store         | table | postgres
(15 rows)

The most difficult part is when USER-DEFINED data types are utilized, as in the film table:

rating public.mpaa_rating DEFAULT 'G'::public.mpaa_rating,

This is how you can find out what it means:

dvdrental=# select enum_range(null::mpaa_rating);
      enum_range      
----------------------
 {G,PG,PG-13,R,NC-17}
(1 row)

You can see that this is an ENUM, so the column is redefined as follows:

rating ENUM('G','PG','PG-13','R','NC-17') DEFAULT 'G',
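
For reference, on the PostgreSQL side a type like this would have been created with a statement along the following lines (a sketch; the actual dvdrental definition may differ slightly). MySQL has no CREATE TYPE, which is why the ENUM is declared inline on the column instead:

```sql
-- PostgreSQL: a reusable user-defined enum type
CREATE TYPE mpaa_rating AS ENUM ('G', 'PG', 'PG-13', 'R', 'NC-17');
```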

Step 4: Exporting Data to CSV

It’s now time to export the PostgreSQL data to CSV:

dvdrental=# copy actor to '/vagrant/output/actor.csv' delimiter ',' CSV HEADER;
COPY 200
dvdrental=# copy address to '/vagrant/output/address.csv' delimiter ',' CSV HEADER;
COPY 603
dvdrental=# copy category to '/vagrant/output/category.csv' delimiter ',' CSV HEADER;
dvdrental=# copy customer to '/vagrant/output/customer.csv' delimiter ',' CSV HEADER;
COPY 599
dvdrental=# copy film to '/vagrant/output/film.csv' delimiter ',' CSV HEADER;
COPY 1000
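
The server-side copy command above writes files on the database server and typically requires elevated privileges. If you only have a regular client connection, psql’s \copy meta-command performs the same export on the client side, for example:

```sql
\copy actor TO 'actor.csv' WITH (FORMAT csv, HEADER)
```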

Step 5: Importing Data to MySQL

After you’ve exported all of the tables you want to import, you can utilize MySQL Shell’s import table utility:


importTable() can run with multiple threads (not needed here, since the data set is tiny). You can repeat the process for each table you want to import; importTable() can also combine several files into a single table. You can confirm that the data has been imported by querying the tables afterwards.

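As a sketch of what this looks like in MySQL Shell (the schema name and file path are assumptions from the earlier steps; adjust them to your setup), the actor CSV exported above could be loaded with:

```js
// Run inside MySQL Shell (JS mode), connected to the target instance
util.importTable("/vagrant/output/actor.csv", {
    schema: "dvdrental",
    table: "actor",
    dialect: "csv-unix",   // comma-separated, "\n" line endings
    skipRows: 1            // skip the HEADER row written by copy ... CSV HEADER
})
```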

You have successfully migrated data from PostgreSQL to MySQL using CSV files.

Method 3: Manually Connect PostgreSQL to MySQL using pg2mysql php Script

Step 1: Downloading the Script and PHP

Downloading the script and unpacking the archive is the first step to connect PostgreSQL to MySQL using pg2mysql php script.

wget  (contains original script from the author)

wget (contains our modified script for php version 7.4)

tar -xvf pg2mysql-1.9.tar.bz2 (in case of the original script from the author)
unzip (in case of our modified script for php version 7.4)

Installing PHP

sudo apt install php7.4-cli (in case of our modified script)

Step 2: Creating Dump Database for PostgreSQL and running it

Make a .sql dump of your PostgreSQL database, being sure to use the "--format p --inserts" options.

sudo -u postgres pg_dump --format p --inserts DBNAME_HERE > /path/to/file.sql

Navigate to the pg2mysql-1.9 folder.

cd pg2mysql-1.9/

Run the script on the dump file you created.

php pg2mysql_cli.php /path/to/pg/dump/file.sql /path/where/to/save/mysql/file.sql

The output will end with some lines like this.

Completed!     30820 lines        5539 sql chunks
 - No its not perfect
 - Yes it discards ALL stored procedures
 - Yes it discards ALL queries except for CREATE TABLE and INSERT INTO 
 - Yes you can email us suggestions: info[AT]
    - In emails, please include the Postgres code, and the expected MySQL code
 - If you're having problems creating your postgres dump, make sure you use "--format p --inserts"
 - Default output engine if not specified is MyISAM
  • The corrected sql dump file will now be created. It’s recommended that you read it and replace MyISAM with InnoDB wherever possible.
  • At this point, you can restore the dump to a clean MySQL database.
  • Since the script does not preserve indexes (as noted by its creator), you must add them back to each table manually or with your own custom script. The data will work without indexes, but queries may suffer serious performance issues.
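
For example (a sketch based on the actor table from earlier; your tables and index names will differ), the primary key and secondary index could be restored with:

```sql
ALTER TABLE actor
    ADD PRIMARY KEY (actor_id),
    ADD INDEX idx_actor_last_name (last_name);
```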

Limitations of Manually Connecting PostgreSQL to MySQL 

  • The data transfer from PostgreSQL to MySQL is only one-way. Two-way sync is essential to keep both tools up to date.
  • Since the records must be updated on a regular basis, the manual approach takes time. This wastes time and resources that could be better spent on more important business tasks.
  • Updating current data and maintaining workflows on multiple platforms necessitates a significant amount of engineering bandwidth, which can be a source of frustration for many users.
  • During data transfer, there is no room for transformation. For firms who wish to update their data before exporting it from PostgreSQL to MySQL, this might be a considerable drawback.

Factors to Consider for Postgres to MySQL Migration

A few differences between Postgres and MySQL should be considered before moving from Postgres to MySQL. A few of these are mentioned below.

  • Complexity brought on by certain data models: Numerous data types are supported by both Postgres and MySQL. This covers a wide range of data formats, including JSON, XML, and TEXT, as well as conventional SQL types like String, Boolean, Integer, and Timestamp. It’s important to remember, though, that MySQL does not handle all Postgres data types.
  • Differences between SQL and database capabilities: There are several operations that are performed differently in Postgres and MySQL databases. Additionally, some features could be supported in one but not in the other. Understanding this might help you steer clear of certain typical traps.
  • Stored procedures: MySQL requires stored procedures to be written in standard SQL syntax. In comparison, procedures in Postgres are function-based and may be written in a variety of languages, including Ruby, Perl, SQL, Python, JavaScript, and others.
  • Extensions for Postgres: It can complicate a migration if you use Postgres extensions; thus, before you migrate, be sure to audit each one separately.
  • Case sensitivity and support for IF/IFNULL: If a Postgres table or column name is enclosed in double quotations, it is case-sensitive. The names of MySQL tables and columns, however, are not case-sensitive. For this reason, bear this in mind while doing a migration. Additionally, Postgres does not support IF and IFNULL statements for condition evaluation, but MySQL does.
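
To illustrate the last point, the same null-handling query is written differently on each side (a sketch against the actor table from earlier):

```sql
-- MySQL: IFNULL is available
SELECT IFNULL(last_name, 'unknown') FROM actor;

-- PostgreSQL: use COALESCE instead (also valid in MySQL)
SELECT COALESCE(last_name, 'unknown') FROM actor;
```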

Related: If you are looking to streamline your PostgreSQL workflow, do read our blog on PostgreSQL import CSV, saving you time and effort. And if you’re interested in optimizing performance and achieving high availability, don’t miss our guide on setting up PostgreSQL clusters.


In this article, you got a glimpse of how to connect PostgreSQL to MySQL after a brief introduction to their salient features and use cases. The manual methods discussed in this article use CSV files and the pg2mysql php script. The process can be a bit difficult for beginners. Moreover, you will have to repeat the transfer each and every time the data is updated, and this is where Hevo saves the day!

Visit our Website to Explore Hevo

Hevo Data provides its users with a simpler platform for integrating data from 150+ data sources for Analysis. It is a No-code Data Pipeline that can help you combine data from multiple sources like PostgreSQL. You can use it to transfer data from multiple data sources into your Data Warehouses, Database, or a destination of your choice such as MySQL.

It provides you with a consistent and reliable solution to managing data in real-time, ensuring that you always have Analysis-ready data in your desired destination.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You can also have a look at the unbeatable Hevo Pricing that will help you choose the right plan for your business needs.

Share your experience of learning about PostgreSQL to MySQL! Let us know in the comments section below!

Former Research Analyst, Hevo Data

Harsh is a former research analyst with a passion for data, software architecture, and writing technical content. He has written more than 100 articles on data integration and infrastructure.

Associate Customer Experience Engineer, Hevo Data

As a Customer Experience Engineer at Hevo, Anmol plays a crucial role in providing dedicated support to clients worldwide. His proficiency in SQL and SQLite, PostgreSQL coupled with a customer-centric approach, ensures that clients receive optimal solutions to their data integration challenges.
