PostgreSQL has many advantages, including support for multiple data types and a reliable environment with features such as Multi-Version Concurrency Control (MVCC). However, moving your data to MySQL can be a good option if your use case calls for its capabilities, which include long-term data retention and support for complex data analysis.

In some circumstances, organizations prefer MySQL to PostgreSQL, and transferring the data helps them examine and work with it more quickly. This article covers the different methods you can use to migrate PostgreSQL to MySQL seamlessly.

What is PostgreSQL?

PostgreSQL is an open-source, object-relational database system written in C. It has a monolithic architecture, which means that all of its components are connected and work in a systematic manner. It has a vast community that provides a great deal of assistance, with extra help available to paying customers of commercial distributions.

Healthcare, banking, and manufacturing are just a few of the industries that employ it. It also makes it possible to plug in new backup solutions.

What is MySQL?

MySQL is free, open-source database software that is supported by Oracle and released under the GNU General Public License (GPL).

Compared to Microsoft SQL Server and Oracle Database, MySQL is often considered faster, more scalable, and easier to use. It is based on the client-server model, which means the database is normally hosted on a server and the data is accessed over the network by clients and workstations.

The server responds to clients’ queries, often issued through a Graphical User Interface (GUI), with the desired results. MySQL works with a variety of operating systems and languages, including PHP, Perl, Java, C++, and C.

Methods to Connect PostgreSQL to MySQL

The four methods are given below:

Method 1: Manually Connect PostgreSQL to MySQL using CSV Files

Let’s understand this manual method of connecting PostgreSQL to MySQL with the help of an example:

Step 1: Extracting Table

The first step in migrating data from PostgreSQL to MySQL is to extract the table definition from PostgreSQL. The most common option is to use pg_dump; let’s see how it works with the first table (actor), using -s to dump the schema only and -t to select the table:

-bash-4.2$ pg_dump -st actor dvdrental
--
-- PostgreSQL database dump
--

-- Dumped from database version 12.8
-- Dumped by pg_dump version 12.8

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

SET default_tablespace = '';

SET default_table_access_method = heap;

--
-- Name: actor; Type: TABLE; Schema: public; Owner: postgres
--

CREATE TABLE public.actor (
    actor_id integer DEFAULT nextval('public.actor_actor_id_seq'::regclass) NOT NULL,
    first_name character varying(45) NOT NULL,
    last_name character varying(45) NOT NULL,
    last_update timestamp without time zone DEFAULT now() NOT NULL
);


ALTER TABLE public.actor OWNER TO postgres;

--
-- Name: actor actor_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres
--

ALTER TABLE ONLY public.actor
    ADD CONSTRAINT actor_pkey PRIMARY KEY (actor_id);


--
-- Name: idx_actor_last_name; Type: INDEX; Schema: public; Owner: postgres
--

CREATE INDEX idx_actor_last_name ON public.actor USING btree (last_name);


--
-- Name: actor last_updated; Type: TRIGGER; Schema: public; Owner: postgres
--

CREATE TRIGGER last_updated BEFORE UPDATE ON public.actor FOR EACH ROW EXECUTE FUNCTION public.last_updated();


--
-- PostgreSQL database dump complete
--

Step 2: Table Definition

From that output, only the CREATE TABLE statement, the primary-key constraint, and the index matter when importing our data (the indexes can also be created later).

If you’re familiar with MySQL, you’ll notice that the SQL syntax returned will not work there as-is.

Let’s take a look at the definition of the table:

CREATE TABLE public.actor (
    actor_id integer DEFAULT nextval('public.actor_actor_id_seq'::regclass) NOT NULL,
    first_name character varying(45) NOT NULL,
    last_name character varying(45) NOT NULL,
    last_update timestamp without time zone DEFAULT now() NOT NULL
)

As you can see:

  • The table name carries the schema prefix, public.
  • The primary key isn’t defined inline (it is added later as a constraint).
  • actor_id takes its default from a sequence, public.actor_actor_id_seq, which maps to AUTO_INCREMENT in MySQL.
  • character varying is PostgreSQL’s name for VARCHAR.
  • The last_update column uses PostgreSQL-specific syntax: timestamp without time zone.

This is a manual step: the statement must be rewritten to work in MySQL:

CREATE TABLE actor (
    actor_id integer auto_increment NOT NULL primary key,
    first_name varchar(45) NOT NULL,
    last_name varchar(45) NOT NULL,
    last_update timestamp DEFAULT now() NOT NULL
);
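The manual rewrite above can also be sketched as a small, hypothetical helper that applies the same substitutions with regular expressions (the function name and the mapping are illustrative, not part of any migration tool, and cover only the cases seen in this table):

```python
import re

def pg_ddl_to_mysql(ddl: str) -> str:
    """Apply the rewrites described above to a fragment of PostgreSQL DDL."""
    # Drop the schema prefix (public.)
    ddl = re.sub(r"\bpublic\.", "", ddl)
    # A sequence-backed default becomes AUTO_INCREMENT in MySQL
    ddl = re.sub(r"DEFAULT nextval\('[^']+'::regclass\)\s+NOT NULL",
                 "NOT NULL AUTO_INCREMENT", ddl)
    # character varying(n) is PostgreSQL's spelling of varchar(n)
    ddl = re.sub(r"character varying\((\d+)\)", r"varchar(\1)", ddl)
    # timestamp without time zone -> plain timestamp
    ddl = ddl.replace("timestamp without time zone", "timestamp")
    return ddl
```

The primary key still has to be added by hand, as in the rewritten statement above; a few regexes cannot replace a careful review of each table.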

Step 3: Adding Secondary Indexes

We can now add the secondary indexes to your MySQL Database Service instance using the newly rewritten statement:

(Image: Adding the secondary indexes)

We must carry out this operation for all tables in the dvdrental schema:

dvdrental=# \dt
             List of relations
 Schema |     Name      | Type  |  Owner   
--------+---------------+-------+----------
 public | actor         | table | postgres
 public | address       | table | postgres
 public | category      | table | postgres
 public | city          | table | postgres
 public | country       | table | postgres
 public | customer      | table | postgres
 public | film          | table | postgres
 public | film_actor    | table | postgres
 public | film_category | table | postgres
 public | inventory     | table | postgres
 public | language      | table | postgres
 public | payment       | table | postgres
 public | rental        | table | postgres
 public | staff         | table | postgres
 public | store         | table | postgres
(15 rows)

The most difficult part is when USER-DEFINED data types are used, as in the film table:

rating public.mpaa_rating DEFAULT 'G'::public.mpaa_rating,

This is how we can find out what it means:

dvdrental=# select enum_range(null::mpaa_rating);
      enum_range      
----------------------
 {G,PG,PG-13,R,NC-17}
(1 row)

We can see that this is an ENUM, so we’ll redefine the column as follows:

rating ENUM('G','PG','PG-13','R','NC-17') DEFAULT 'G',

Step 4: Exporting Data to CSV

It’s now time to export the PostgreSQL data to CSV:

dvdrental=# copy actor to '/vagrant/output/actor.csv' delimiter ',' CSV HEADER;
COPY 200
dvdrental=# copy address to '/vagrant/output/address.csv' delimiter ',' CSV HEADER;
COPY 603
dvdrental=# copy category to '/vagrant/output/category.csv' delimiter ',' CSV HEADER;
COPY 16
dvdrental=# copy customer to '/vagrant/output/customer.csv' delimiter ',' CSV HEADER;
COPY 599
dvdrental=# copy film to '/vagrant/output/film.csv' delimiter ',' CSV HEADER;
COPY 1000

Step 5: Importing Data to MySQL

After we’ve exported all of the tables we want to import, we can use MySQL Shell’s import table utility:

(Image: Importing data to MySQL with MySQL Shell)
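As a sketch, the call for the actor file above might look like this in MySQL Shell’s Python mode. This runs only inside mysqlsh after connecting to the target instance, not in a standalone Python interpreter, and the option values are illustrative:

```python
# Inside MySQL Shell (\py mode): the global `util` object is provided by mysqlsh.
util.import_table("/vagrant/output/actor.csv", {
    "schema": "dvdrental",
    "table": "actor",
    "dialect": "csv",    # comma-separated values with quoting
    "skipRows": 1,       # skip the header line written by COPY ... CSV HEADER
    "threads": 2,
})
```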

importTable() can use multiple threads (not needed here, since the data is small). We repeat the process for each table we want to import; importTable() can also combine several files into a single table. We can confirm that the data has been imported by looking at the following:

(Image: Verifying the imported data)

We have successfully migrated data from PostgreSQL to MySQL using CSV files.

Method 2: Using Automated Data Pipeline Platforms

The following steps can be implemented to connect PostgreSQL to MySQL using Hevo:

Step 1: Configure PostgreSQL as your Source

  • Click PIPELINES in the Navigation Bar.
  • Click + CREATE in the Pipelines List View.
  • In the Select Source Type page, select PostgreSQL as your source.
  • Connect Hevo Data with PostgreSQL by providing a unique name for your Pipeline, along with details such as Database Host, Database Port, Database User, Database Password, and Database Name. You can also choose the Data Ingestion method while configuring PostgreSQL as a Source.
(Image: Configure PostgreSQL as the Source)

Step 2: Configure MySQL as your Destination

  • Click DESTINATIONS in the Navigation Bar.
  • Click + CREATE in the Destinations List View.
  • In the Add Destination page, select MySQL.
  • Establish a connection to MySQL by providing details such as Destination Name, Database Host, Database Port, Database User, Database Password, Database Name, and Schema Name.
(Image: Configure MySQL as the Destination)

We have now successfully connected PostgreSQL to MySQL using Hevo.

Here are more reasons to try Hevo:

  • Data Transformation: It provides a simple interface to cleanse, modify, and enrich the data you want to transfer.
  • Schema Management: Hevo can automatically detect the schema of the incoming data and map it to the destination schema.
  • Incremental Data Load: Hevo allows the transfer of data that has been modified in real time. This ensures efficient utilization of bandwidth on both ends.
  • Live Monitoring: Advanced monitoring gives you a one-stop view to watch all the activities that occur within Data Pipeline.
Learn more about Hevo

Method 3: Manually Connect PostgreSQL to MySQL using pg2mysql php Script

Step 1: Downloading the Script and PHP

Downloading the script and unpacking the archive is the first step to connecting PostgreSQL to MySQL using the pg2mysql php script.

wget http://www.lightbox.ca/pg2mysql/pg2mysql-1.9.tar.bz2  (contains original script from the author)

wget https://www.easyredmine.com/ER/media/knowledge-base/pg2mysql-1.9.zip (contains our modified script for php version 7.4)

tar -xvf pg2mysql-1.9.tar.bz2 (in case of the original script from the author)
unzip pg2mysql-1.9.zip (in case of our modified script for php version 7.4)

Installing PHP

sudo apt install php7.4-cli (in case of our modified script)

Step 2: Creating Dump Database for PostgreSQL and running it

Make a .sql dump of your PostgreSQL database, being sure to use the "--format p --inserts" options.

sudo -u postgres pg_dump --format p --inserts DBNAME_HERE > /path/to/file.sql

Navigate to the pg2mysql-1.9 folder.

cd pg2mysql-1.9/

Run the script on the dump file we just created.

php pg2mysql_cli.php /path/to/pg/dump/file.sql /path/where/to/save/mysql/file.sql

The output will end with some lines like this:

Completed!     30820 lines        5539 sql chunks
Notes:
 - No its not perfect
 - Yes it discards ALL stored procedures
 - Yes it discards ALL queries except for CREATE TABLE and INSERT INTO 
 - Yes you can email us suggestsions: info[AT]lightbox.org
    - In emails, please include the Postgres code, and the expected MySQL code
 - If you're having problems creating your postgres dump, make sure you use "--format p --inserts"
 - Default output engine if not specified is MyISAM"
  • The converted SQL dump file will now be created. It’s recommended that you read through it and replace MyISAM with InnoDB wherever possible.
  • At this point, you can restore the dump to a clean MySQL database.
  • Since the script does not preserve indexes (as mentioned by the author), you must add them to each table yourself, either by hand or with your own custom script (the database will work without indexes, but it may cause serious performance issues).
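One hypothetical way to recover the dropped indexes is to pull the CREATE INDEX statements out of the original PostgreSQL dump and adapt them by hand. A rough sketch (the function name is illustrative, and the output still needs manual review before running it against MySQL):

```python
import re

def extract_index_statements(dump_sql: str) -> list[str]:
    """Pull CREATE INDEX statements out of a pg_dump text dump."""
    pattern = re.compile(r"CREATE\s+(?:UNIQUE\s+)?INDEX\s+.*?;",
                         re.IGNORECASE | re.DOTALL)
    # Strip the PostgreSQL-specific "USING btree" clause; BTREE is
    # already the default index type for InnoDB tables in MySQL.
    return [re.sub(r"\s+USING\s+btree\s*", " ", stmt)
            for stmt in pattern.findall(dump_sql)]
```

Schema prefixes such as public. would still need to be removed, as in the table-definition rewrite earlier in this article.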

Learn about 3 Best Methods to Export MySQL Database

Limitations of Manually Connecting PostgreSQL to MySQL 

  • The data transfer from PostgreSQL to MySQL is only one-way. Two-way sync is essential to keep both tools up to date.
  • Since the records must be updated on a regular basis, the manual approach takes time. This wastes time and resources that could be better spent on more important business tasks.
  • Updating current data and maintaining workflows on multiple platforms necessitates a significant amount of engineering bandwidth, which can be a source of frustration for many users.
  • During data transfer, there is no room for transformation. For firms who wish to update their data before exporting it from PostgreSQL to MySQL, this might be a considerable drawback.

Method 4: Migrate PostgreSQL to MySQL using the MySQL Workbench Migration Wizard

Step 1: Download and install the PostgreSQL ODBC driver

Once PostgreSQL is installed, we need to install the PostgreSQL ODBC driver on the machine where MySQL Workbench is installed.

Step 2: Open MySQL Workbench and start the migration wizard

On the MySQL Workbench home screen, click Database > Migrate to start the Migration Wizard.

(Image: Start the Migration Wizard)

We can see an overview page for the Migration wizard below.

(Image: Overview page of the Migration Wizard)

Step 3: Set up the parameters required for connecting to the source database

  • Click on the Start Migration button to proceed to the Source Selection page.
  • Specify all the necessary information and click the Test Connection button to check the connection with PostgreSQL. 
  • Click on the Next button to move to the destination selection page.
(Image: Destination selection page)

Step 4: Set up the parameters required for connecting to the destination database

  • Specify the required parameters on the destination setup page to connect to the MySQL Server instance.
  • Click on the Test Connection button to verify the connection and click on the Next button to move to the Schema setup page.
(Image: Schema setup page)

Step 5: Select the schema for migration

The Migration Wizard communicates with the PostgreSQL database to fetch the schema information from the source database. Verify the tasks and click on the Next button to finish the schema setup for migration.

(Image: Finish the schema setup for migration)

Step 6: Select the objects for migration

  • Now, the retrieved schema from the source will be reverse-engineered to determine its structure.
  • On the Source Objects page, we can see the list of objects retrieved for migration.
(Image: List of objects retrieved for migration)
  • Click the Show Selection button to see the table objects in the source database. Click on the Next button to review the proposed migration.

Step 7: Review the proposed migration

In this step, the Migration wizard converts the objects into equivalent objects in MySQL and creates a MySQL code for the destination server.

On the Manual Editing tab in the left-side navigation panel, we can double-click on the Pagila row and rename the resultant database. This view also shows the table columns, default values, and other attributes.

Step 8: Run the MySQL code to create the database objects

Click on the Target Creation Options tab on the left side navigation pane. We will be given the option to run the generated code in the target RDBMS. Click on the Next button and move to the transfer data setup page.

(Image: Transfer data setup page)

The progress of the migration can be seen on the Create Schemata tab on the left-side navigation pane. Once the migration is done, click on the Next button and move to the transfer setup page.

Step 9: Transfer the data to MySQL database

The data transfer setup page allows you to configure the transfer process, as given below.

(Image: Configure the transfer process)

Select the first option of live transfer of data and click on the Next button. Once it is done, we can see the summary of the transfer process on the report page and click on the Finish button to close the Migration wizard.

(Image: Close the Migration Wizard)

Factors to Consider for Postgres to MySQL Migration

A few differences between Postgres and MySQL should be considered before moving from Postgres to MySQL. A few of these are mentioned below.

  • Complexity brought on by certain data models: Numerous data types are supported by both Postgres and MySQL. This covers a wide range of data formats, including JSON, XML, and TEXT, as well as conventional SQL types like String, Boolean, Integer, and Timestamp. It’s important to remember, though, that MySQL does not handle all Postgres data types.
  • Differences between SQL and database capabilities: There are several operations that are performed differently in Postgres and MySQL databases. Additionally, some features could be supported in one but not in the other. Understanding this might help you steer clear of certain typical traps.
  • Stored procedures: MySQL requires stored procedures to be written in its standard SQL syntax. In comparison, procedures in Postgres are function-based and may be written in a variety of languages, including SQL, Python, Perl, Ruby, JavaScript, and others.
  • Extensions for Postgres: Postgres extensions can complicate a migration; before you migrate, be sure to audit each one separately.
  • Case sensitivity and support for IF/IFNULL: If a Postgres table or column name is enclosed in double quotations, it is case-sensitive. The names of MySQL tables and columns, however, are not case-sensitive. For this reason, bear this in mind while doing a migration. Additionally, Postgres does not support IF and IFNULL statements for condition evaluation, but MySQL does.

Related: If you are looking to streamline your PostgreSQL workflow, do read our blog on PostgreSQL import CSV, saving you time and effort. And if you’re interested in optimizing performance and achieving high availability, don’t miss our guide on setting up PostgreSQL clusters.

Postgres vs MySQL Schema: Key Differences

VARCHAR in Postgres vs MySQL

For the varchar type, MySQL requires you to specify a maximum length, while Postgres does not. You can either use MySQL’s TEXT type instead, or keep the varchar type and specify a maximum length. If you go for the varchar option, choose a maximum length large enough that no values are truncated while migrating your data.

SERIAL in Postgres vs MySQL

When you use the SERIAL type in MySQL, the underlying data type will by default be bigint unsigned auto_increment. Overall, this doesn’t change how you store your data, but it’s good to know, as the underlying data type will differ from Postgres depending on the records you have.

Spatial data in Postgres vs MySQL: POINT

Using spatial data is straightforward in PostgreSQL: when you define the location column with the Point type, you can insert spatial data by providing the required coordinates. In MySQL, spatial data types behave differently; depending on your MySQL version, you might need a different approach to store spatial data.

Use Cases for PostgreSQL to MySQL Integration

  • MySQL enables complex data analysis and insights beyond Postgres’ capabilities alone through powerful querying and processing.
  • Centralizing data from Postgres and other sources in MySQL provides a unified view across systems. This consolidated data can enable change data capture to prevent future discrepancies.
  • MySQL’s capabilities for long-term data retention facilitate historical trend analysis over time, without Postgres’ limits.
  • Syncing Postgres data to MySQL leverages MySQL’s robust security features for advanced data governance and compliance.
  • MySQL scales to handle large, growing data volumes from Postgres without performance impacts.
  • With Postgres data in MySQL, machine learning for predictive analytics, customer segmentation and more is enabled.
  • Visualization tools like Tableau, PowerBI and Looker that connect to MySQL enhance business intelligence, augmenting Postgres’ reporting.


Conclusion

In this article, we got a glimpse of how to connect PostgreSQL to MySQL, after a brief introduction to the salient features and use cases of both databases. The manual methods discussed here use CSV files, the pg2mysql PHP script, and MySQL Workbench. The process can be a bit difficult for beginners. Moreover, you will have to repeat it every time the data is updated, and this is where Hevo saves the day!

Visit our Website to Explore Hevo

Hevo Data provides its users with a simpler platform for integrating data from 150+ data sources for Analysis. It is a No-code Data Pipeline that can help you combine data from multiple sources like PostgreSQL. You can use it to transfer data from multiple data sources into your Data Warehouses, Database, or a destination of your choice, such as MySQL.

It provides you with a consistent and reliable solution to managing data in real-time, ensuring that you always have Analysis-ready data in your desired destination.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand. You can also have a look at the unbeatable Hevo Pricing that will help you choose the right plan for your business needs.

Share your experience of learning about PostgreSQL to MySQL! Let us know in the comments section below!

Associate Customer Experience Engineer, Hevo Data

As a Customer Experience Engineer at Hevo, Anmol plays a crucial role in providing dedicated support to clients worldwide. His proficiency in SQL and SQLite, PostgreSQL coupled with a customer-centric approach, ensures that clients receive optimal solutions to their data integration challenges.

Harsh Varshney
Research Analyst, Hevo Data

Harsh is a data enthusiast with over 2.5 years of experience in research analysis and software development. He is passionate about translating complex technical concepts into clear and engaging content. His expertise in data integration and infrastructure shines through his 100+ published articles, helping data practitioners solve challenges related to data engineering.
