Migrating from MySQL to PostgreSQL can seem daunting, but it doesn’t have to be. PostgreSQL offers significant advantages over MySQL in terms of features, performance, and stability. Transferring your database to PostgreSQL can position you for better scalability. 

In this blog, we will walk through how to migrate MySQL data to PostgreSQL. By the end, you will know the step-by-step process for each method and understand which one best fits your use case.

Why Do We Need to Migrate MySQL Data to PostgreSQL? 

  • Advanced Features and Capabilities: PostgreSQL offers advanced features such as support for complex queries, full-text search, JSONB data type, and better handling of concurrent transactions through Multi-Version Concurrency Control (MVCC).
  • Enhanced Performance and Scalability: PostgreSQL often performs better for complex queries and large datasets due to its sophisticated query planner and optimizer. It also scales well for read-heavy and write-heavy workloads.
  • Compliance and Extensibility: PostgreSQL is highly standards-compliant (e.g., ANSI SQL) and offers extensive support for custom data types, operators, and functions. This flexibility can be crucial for businesses needing custom solutions.

Pre-Migration Considerations

A successful migration requires careful planning. Key considerations include:

1. Data Type Compatibility:

  • Analyze: Assess and address differences in data types between MySQL and PostgreSQL.
  • Transform: Define and implement data type transformations where necessary.
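As a concrete sketch of this analysis, the mapping below shows a few common MySQL-to-PostgreSQL type conversions in Python. These are typical choices rather than an official list (tools like pgloader apply similar defaults automatically); verify each against your actual data.

```python
# Illustrative mapping of common MySQL column types to PostgreSQL
# equivalents. Typical choices, not an exhaustive or official list.
MYSQL_TO_POSTGRES_TYPES = {
    "TINYINT(1)": "BOOLEAN",      # MySQL often uses TINYINT(1) as a boolean
    "TINYINT": "SMALLINT",        # PostgreSQL has no 1-byte integer type
    "INT UNSIGNED": "BIGINT",     # PostgreSQL has no unsigned integers
    "DATETIME": "TIMESTAMP",
    "DOUBLE": "DOUBLE PRECISION",
    "BLOB": "BYTEA",
    "ENUM": "TEXT",               # or a PostgreSQL ENUM / CHECK constraint
}

def map_type(mysql_type: str) -> str:
    """Return a PostgreSQL type for a MySQL type, falling back to the
    input unchanged when no transformation is needed (e.g. VARCHAR, DATE)."""
    return MYSQL_TO_POSTGRES_TYPES.get(mysql_type.upper(), mysql_type)
```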

2. Migration Configuration:

  • Create a configuration file: This allows for granular control over the migration process, including data mapping, transformations, and schema modifications.

3. Application Compatibility:

  • Analyze: Evaluate the impact of the migration on existing applications.
  • Test: Thoroughly test applications after migration to ensure compatibility.

4. Thorough Testing: Conduct unit, integration, and performance tests.

5. Backup and Recovery:

  • Develop a disaster recovery plan.
  • Create full backups.
  • Test backup and restore procedures.
Simplify MySQL to PostgreSQL Migration with Hevo

Make MySQL to PostgreSQL migration a breeze with Hevo’s no-code platform. Hevo automates the entire process, from data transfer to schema conversion, ensuring a seamless, real-time migration without any coding.  Use Hevo for:

  1. Simple two-step method for replicating MySQL data to PostgreSQL.
  2. Performing pre/post load transformations using drag-and-drop features.
  3. Real-time data sync to get analysis-ready data. 

Try Hevo and join a growing community of 2000+ data professionals who rely on us for seamless and efficient migrations. 


What are the Methods to Perform This Migration?

Method 1: Using Pgloader

What is Pgloader? An Overview

Pgloader is an open-source database migration tool that enables you to seamlessly move your data from different RDBMSs, like MySQL and SQLite, to PostgreSQL. It can perform real-time transformations on the data it retrieves and execute raw SQL commands both before and after the loading process. It uses PostgreSQL’s COPY protocol to stream data into the server efficiently and handles errors by generating a pair of reject.dat and reject.log files.

Before you move forward with connecting Pgloader to PostgreSQL, there are a few prerequisites to take into consideration:

  • You must have access to both servers, each secured with a firewall.
  • Both servers must have a non-root user with sudo privileges.
  • Run Pgloader from your PostgreSQL server, which will act as the MySQL client machine during the migration.

Step 1: Create a Sample Database in MySQL

Begin by opening the MySQL prompt on the MySQL server:

mysql -u root -p

Enter your MySQL password when prompted to open the prompt.

Now, let’s create a new database by giving the following command:

CREATE DATABASE sample_db;

With the USE command, switch to this database.

USE sample_db;

Use the following command to create a table in the database.

CREATE TABLE source_table (
	employee_id INT PRIMARY KEY,
	first_name VARCHAR(50),
	last_name VARCHAR(50),
	start_date DATE,
	salary VARCHAR(50)
);

Then, insert the following entries into the table.

INSERT INTO source_table (employee_id, first_name, last_name, start_date, salary) 
VALUES (1, 'Elizabeth', 'Cotten', '2007-11-11', '$105433.18'),
(2, 'Yanka', 'Dyagileva', '2017-10-30', '$107540.67'),
(3, 'Lee', 'Dorsey', '2013-06-04', '$118024.04'),
(4, 'Kasey', 'Chambers', '2010-08-18', '$116456.98'),
(5, 'Bram', 'Tchaikovsky', '2018-09-16', '$61989.50');

Now, we are ready to migrate our data to PostgreSQL. 
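Notice that salary is stored as a VARCHAR containing a leading dollar sign, so PostgreSQL cannot treat it as a number. Below is a hedged Python sketch of the kind of post-load transformation you might apply before casting the column to NUMERIC (the function name is ours, not part of any tool):

```python
from decimal import Decimal

def parse_salary(raw: str) -> Decimal:
    """Strip currency formatting ('$' and thousands separators) so the
    value can live in a NUMERIC column instead of a VARCHAR."""
    return Decimal(raw.strip().lstrip("$").replace(",", ""))

print(parse_salary("$105433.18"))  # 105433.18
print(parse_salary("$61989.50"))   # 61989.50
```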

Step 2: Install Pgloader On Your System

Pgloader is available in the Ubuntu APT repositories and can be installed with the apt command. However, if you want to use the SSL option (available in version 3.5.1 and later), you will need to build Pgloader from the source code on GitHub.

Once the tool is installed, follow these steps to proceed with data migration.

  • Use the following command to update the package index of the PostgreSQL server:
postgres-server$ sudo apt update
  • To install the dependencies of Pgloader, use the following command and press ENTER.
postgres-server$ sudo apt install sbcl unzip libsqlite3-dev gawk curl make freetds-dev libzip-dev
  • Next, head to the GitHub project’s releases page to find the most recent version of the tool. Select Assets, copy the link to the source code bundle (tar.gz), and substitute it for the URL in the curl command below if a newer version is available.
postgres-server$ curl -fsSLO https://github.com/dimitri/pgloader/releases/download/v3.6.9/pgloader-bundle-3.6.9.tgz
  • Input the following command to extract the tarball.
postgres-server$ tar xvf pgloader-bundle-3.6.9.tgz
  • Use the following command to move into the new Pgloader directory.
postgres-server$ cd pgloader-bundle-3.6.9/
  • Once done, compile the Pgloader binary using the following command.
postgres-server$ make pgloader
  • Using the following command, move the Pgloader binary file to the /usr/local/bin directory.
postgres-server$ sudo mv ./build/bin/pgloader /usr/local/bin/
  • Finally, confirm the installation by checking the Pgloader version.
postgres-server$ pgloader --version

Step 3: Build a PostgreSQL Database and Role

The pgloader command copies source data from a file or a database and inserts it into a Postgres database. For that reason, you either need to run Pgloader as a Linux user with access to the Postgres database or designate a Postgres role with the required privileges in your load command.

  • Use the following command to create a new role and request a password for it.
postgres-server$ sudo -u postgres createuser --interactive -P
  • You’ll then be prompted to add a name for the new role.
Enter name of role to add:
  • In the next step, you’ll be required to create and confirm the password.
Enter password for new role:
Enter it again:
  • The script will then ask whether the new role should be a superuser. Type ‘y’ and press ENTER.

Output

. . .

Shall the new role be a superuser? (y/n) y
  • Finally, create a target database to load your data into. The commands later in this guide assume it is named new_db.
postgres-server$ sudo -u postgres createdb new_db

Step 4: Create a Dedicated User in MySQL and Manage Certificates

It is imperative to encrypt your network connection before you can even think of migrating data from one system to another. Encryption prevents your data from falling into the wrong hands and provides a safe environment for the data to be migrated. With a dedicated MySQL user, Pgloader will migrate your data over an SSL connection.

  • Open a MySQL prompt with the following command.
mysql -u root -p
  • Next, create a dedicated MySQL user, replacing your_postgres_server_ip with the public IP address of the Postgres server and password with a strong password or passphrase.
mysql> CREATE USER 'new_mysql_name'@'your_postgres_server_ip' IDENTIFIED BY 'password' REQUIRE SSL;
  • Grant the new MySQL user access to the source database.
mysql> GRANT ALL ON sample_db.* TO 'new_mysql_name'@'your_postgres_server_ip';
  • Run the following command to reload the grant tables and enable the privilege changes.
mysql> FLUSH PRIVILEGES;
  • Exit the MySQL prompt using the following command.
mysql> exit
  • The next step is to enable SSL for Pgloader. Use the following commands to add the ca.pem and client-cert.pem files to Ubuntu’s trusted certificate store, renaming the files with a .crt extension as you copy them.
postgres-server$ sudo cp ~/client-ssl/ca.pem /usr/local/share/ca-certificates/ca.pem.crt

postgres-server$ sudo cp ~/client-ssl/client-cert.pem /usr/local/share/ca-certificates/client-cert.pem.crt
  • Finally, update the certificate store using the following command.
postgres-server$ sudo update-ca-certificates

Step 5: Migration of Data

pgLoader enables users to migrate an entire database with just a single command. When migrating from a MySQL database to a PostgreSQL database on a separate server, the command takes the following form:

pgloader mysql://mysql_username:password@mysql_server_ip/source_database_name?option_1=value&option_n=value postgresql://postgresql_role_name:password@postgresql_server_ip/target_database_name?option_1=value&option_n=value

This encompasses the pgloader command and two connection strings—one for the source and the other for the target databases. In both connection strings, the declaration begins by specifying the type of DBMS to which the connection string refers, followed by the username and password with database access (separated by a colon), the server’s host address where the database is located, the name of the database that pgLoader should target, and various options influencing pgLoader’s behavior.
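To make that structure concrete, here is a small Python helper (purely illustrative, ours rather than part of pgloader) that assembles a connection string from its parts; the host IP below is a documentation placeholder:

```python
def build_conn_string(dbms, user, password, host, database, **options):
    """Compose a pgloader-style connection string:
    dbms://user:password@host/database?opt1=val1&opt2=val2"""
    url = f"{dbms}://{user}:{password}@{host}/{database}"
    if options:
        url += "?" + "&".join(f"{k}={v}" for k, v in options.items())
    return url

# Source and target strings for the example migration in this guide.
source = build_conn_string("mysql", "pgloader_my", "mysql_password",
                           "203.0.113.5", "sample_db", useSSL="true")
target = build_conn_string("postgresql", "pgloader_pg", "postgresql_password",
                           "localhost", "new_db")
print("pgloader", source, target)
```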

You can use the following command to migrate data to PostgreSQL. Make sure to enter your database details in the command.

pgloader mysql://pgloader_my:mysql_password@mysql_server_ip/sample_db?useSSL=true postgresql://pgloader_pg:postgresql_password@localhost/new_db

You will see the following output if the command is successful.

Output
. . .
             table name     errors       rows      bytes      total time
-----------------------  ---------  ---------  ---------  --------------
        fetch meta data          0          2                     0.111s
         Create Schemas          0          0                     0.001s
       Create SQL Types          0          0                     0.005s
          Create tables          0          2                     0.017s
         Set Table OIDs          0          1                     0.010s
-----------------------  ---------  ---------  ---------  --------------
 sample_db.source_table          0          5     0.2 kB          0.048s
-----------------------  ---------  ---------  ---------  --------------
COPY Threads Completion          0          4                     0.052s
 Index Build Completion          0          1                     0.011s
         Create Indexes          0          1                     0.006s
        Reset Sequences          0          0                     0.014s
           Primary Keys          0          1                     0.001s
    Create Foreign Keys          0          0                     0.000s
        Create Triggers          0          0                     0.000s
       Install Comments          0          0                     0.000s
-----------------------  ---------  ---------  ---------  --------------
      Total import time          ✓          5     0.2 kB          0.084s

Step 6: Verify That the Migration Was Successful

  • To verify the accuracy of the data migration, open the PostgreSQL prompt:
postgres-server$ sudo -i -u postgres psql
  • Next, establish a connection to the database where you loaded the data:
postgres=# \c new_db
  • Then run the following query to confirm that the migrated data is stored in your PostgreSQL database. Note that pgloader places the tables in a schema named after the source MySQL database:
SELECT * FROM sample_db.source_table;

If the data was loaded correctly, you will see an output table of the data you migrated.

Output
 employee_id | first_name |  last_name  | start_date |   salary   
-------------+------------+-------------+------------+------------
           1 | Elizabeth  | Cotten      | 2007-11-11 | $105433.18
           2 | Yanka      | Dyagileva   | 2017-10-30 | $107540.67
           3 | Lee        | Dorsey      | 2013-06-04 | $118024.04
           4 | Kasey      | Chambers    | 2010-08-18 | $116456.98
           5 | Bram       | Tchaikovsky | 2018-09-16 | $61989.50
(5 rows)

Close the Postgres prompt with the following command:

\q

Other Options for Data Migration using pgloader

pgloader is a flexible tool that supports multiple ways of connecting MySQL and PostgreSQL. Let us look at some of them:

1. Using pgloader Load File

Step 1- Create Load File: Create a file named pgload_test.load on your PostgreSQL server using nano pgload_test.load.

Step 2- Define Sources and Destinations: Specify the MySQL source and PostgreSQL destination within the LOAD DATABASE command:

LOAD DATABASE

    FROM mysql://<user>:<password>@<mysql_host>/<mysql_db>?useSSL=true

    INTO pgsql://<user>:<password>@<postgres_host>/<postgres_db>

Step 3- Configure Behavior (WITH clause):

  • Include drop: Instructs pgloader to drop existing tables in the PostgreSQL database before creating new ones. Use with caution, as it can lead to data loss.
  • create tables: Tells pgloader to create tables in the PostgreSQL database based on the MySQL schema.

Step 4- Add Custom SQL (Optional):

Include SQL commands after the WITH clause, such as:

ALTER SCHEMA '<source_schema>' RENAME TO 'public'; 

Step 5- Execute Migration: Run the command pgloader pgload_test.load

Step 6- Verify Migration: Check the PostgreSQL database for the successfully migrated data.
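Putting the steps above together, a complete pgload_test.load might look like the following (host names and credentials are placeholders to replace with your own):

```
LOAD DATABASE
     FROM mysql://pgloader_my:mysql_password@mysql_host/sample_db?useSSL=true
     INTO pgsql://pgloader_pg:postgresql_password@localhost/new_db

 WITH include drop, create tables

ALTER SCHEMA 'sample_db' RENAME TO 'public';
```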

2. Migration of MySQL database to PostgreSQL Locally

You can use pgLoader to migrate a MySQL database to a PostgreSQL database located on the same machine. Execute the migration command from a Linux user profile with access to the root MySQL user:

pgloader mysql://root@localhost/sample_db pgsql://sammy:postgresql_password@localhost/target_db

Conducting a local migration in this manner eliminates the need to modify MySQL’s default networking configuration or your system’s firewall rules.

3. Migration from CSV Files

It is possible to migrate MySQL data from a CSV file to PostgreSQL.

Suppose you have a CSV file load.csv, the command for migration will look like this:

pgloader load.csv pgsql://sammy:password@localhost/target_db

Also Read: How to Export a PostgreSQL Table to a CSV File

Method 2: Using Foreign Data Wrapper


To connect MySQL and PostgreSQL through a Foreign Data Wrapper, take note of the following commands, as they are what you will use to create the connection:

  • CREATE EXTENSION command to create a MySQL Foreign Data Wrapper extension to the PostgreSQL host.
  • CREATE SERVER command to define a connection to the MySQL Server.
  • CREATE USER MAPPING command to define a mapping that associates a Postgres role with the server.
  • CREATE FOREIGN TABLE command to define a single table in the Postgres database that corresponds to a table that resides on the MySQL Server or use the IMPORT FOREIGN SCHEMA command to import multiple remote tables in the local schema.

Step 1: Create an Extension

In the first step of using Foreign Data Wrapper, the CREATE EXTENSION command is used to load a new extension to a current database, and there must not be an extension of the same name already loaded.

It creates new SQL objects such as functions, data types, operators, and index support methods. The user who runs CREATE EXTENSION becomes the owner of the extension.

To create an extension to connect to MySQL, create the mysql_fdw extension on the Postgres database from which you will query the MySQL server, using mysql_fdw as the extension_name:

CREATE EXTENSION [IF NOT EXISTS] extension_name [WITH] [SCHEMA schema_name];

Here is the list of parameters used in this step:

  • IF NOT EXISTS: This is used to instruct the server to issue a notification to you instead of returning an error if an extension with the same name already exists.
  • extension_name: This is the name of the extension to be installed. As can be seen from the illustration above, mysql_fdw is used as the extension name.
  • schema_name: Use this to specify the name of the schema in which to install the extension’s objects; the named schema must already exist.

An example of CREATE EXTENSION showing how to connect to MySQL using mysql_fdw is as follows:

CREATE EXTENSION mysql_fdw;

Step 2: Create a Server

The command CREATE SERVER defines a new foreign server and its connections. It gives the connection information that a Foreign Data Wrapper uses to access an external data resource.

The server is owned by the user who defines it, and its name must be unique within the database. An example of this is given below:

CREATE SERVER [ IF NOT EXISTS ] server_name [ TYPE 'server_type' ]
FOREIGN DATA WRAPPER fdw_name
[ OPTIONS ( option 'value' [, ...] ) ]

Here is the list of parameters used in this step:

  • IF NOT EXISTS: This is used to instruct the server to issue a notification to you instead of returning an error if a server with the same name already exists.
  • server_name: This is used to identify the name of the foreign server to be created e.g., mysql_server.
  • server_type: This is an optional server type and may be helpful in Foreign Data Wrappers.
  • FOREIGN DATA WRAPPER fdw_name: This identifies the name of the Foreign Data Wrapper that manages the server. Use mysql_fdw when creating the server connection.
  • OPTIONS (option ‘value’ [, …] ): This is used to specify the options for the server as they clearly define the connection details of the server. They include the following:
    • host: Use this to input the address or hostname of the MySQL server; the default value is 127.0.0.1.
    • port: This is where you type the port number of the MySQL server. The default value is 3306.
    • secure_auth: This is to enable or disable secure authentication; the default value is true.
    • init_command: This is the SQL statement that executes when connecting to the MySQL Server.
    • ssl_key: This is the pathname of the client’s private key file.
    • ssl_cert: This is the name of the client public key certificate file.
    • ssl_ca: This is the name of the Certificate Authority (CA) certificate file. If this option is used, it must specify the same certificate used by the server.
    • ssl_capath: This is the directory’s name that contains the trusted SSL CA certificate files.
    • ssl_cipher: This is the list of permissible ciphers for SSL encryption.
    • use_remote_estimate: This is included to instruct the server to use EXPLAIN commands on the remote server when estimating processing cost; its default value is false.

An example of CREATE SERVER command for the MySQL server is shown below:

CREATE SERVER mysql_server FOREIGN DATA WRAPPER mysql_fdw OPTIONS (host '127.0.0.1', port '3306');

Step 3: Create User Mapping

This defines a mapping that joins a Postgres role to a foreign server like a MySQL server. It covers the connection information that a Foreign Data Wrapper uses together with the information from the foreign server to access an external data source.

CREATE USER MAPPING [ IF NOT EXISTS ] FOR { user_name | USER | CURRENT_USER | PUBLIC }
SERVER server_name
[ OPTIONS ( option 'value' [, ...] ) ]

Here is the list of parameters used in this step:

  • IF NOT EXISTS: This is used to instruct the server to issue a notification to you instead of returning an error if a mapping of the given user to the foreign server already exists.
  • user_name: This is the existing user’s name mapped to the foreign server. USER and CURRENT_USER must match the name of the current user, and PUBLIC is used when public mapping is created with no user-specific mapping.
  • server_name: This indicates the server name for which the user mapping will be created.
  • OPTIONS ( option ‘value’ [, …] ): This specifies the options of the user mapping; they include:
    • username: The name of the user on the MySQL server.
    • password: The password associated with the username.

An example of the CREATE USER MAPPING command for the MySQL server is shown below. It creates a public mapping for the server named mysql_server; when connecting, it authenticates with the username ‘test’ and the password ‘secret’:

CREATE USER MAPPING FOR public SERVER mysql_server OPTIONS (username 'test', password 'secret');

Step 4: Create a Foreign Table

A foreign table acts as a pointer to a table that resides on the MySQL host. This means that you will create a foreign table definition on the Postgres Server that would connect to a table created on MySQL Server, where columns in the table will map to columns in the Postgres Server.

CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
  { column_name data_type [ OPTIONS ( option 'value' [, ... ] ) ] [ COLLATE collation ] [ column_constraint [ ... ] ]
    | table_constraint }
    [, ... ]
] )
[ INHERITS ( parent_table [, ... ] ) ]
  SERVER server_name
[ OPTIONS ( option 'value' [, ... ] ) ]

CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name
  PARTITION OF parent_table [ (
  { column_name [ WITH OPTIONS ] [ column_constraint [ ... ] ]
    | table_constraint }
    [, ... ]
) ] partition_bound_spec
  SERVER server_name
[ OPTIONS ( option 'value' [, ... ] ) ]

where column_constraint is:

[ CONSTRAINT constraint_name ]
{ NOT NULL |
  NULL |
  CHECK ( expression ) [ NO INHERIT ] |
  DEFAULT default_expr |
  GENERATED ALWAYS AS ( generation_expr ) STORED }

and table_constraint is:

[ CONSTRAINT constraint_name ]
CHECK ( expression ) [ NO INHERIT ]

Here is the list of parameters used in this step:

  • IF NOT EXISTS: This is used to instruct the server to issue a notification to you instead of returning an error if a relation with the same name already exists.
  • table_name: This specifies the name of the foreign table and includes a schema name to specify the schema in which the foreign table would reside.
  • column_name: This is the column’s name to be created in the new table; each column should correspond to a column on the MySQL server.
  • data_type: This identifies the data type of the column. To have a smooth connection, specify the same data type for each column on the Postgres Server and the MySQL Server.
  • COLLATE collation: This is used to assign a collation to the column and must be of a collatable data type. If this is not specified, the column data type’s default collation is used.
  • INHERITS ( parent_table[, ... ] ): The INHERITS clause is used to specify a list of tables from which the new foreign table automatically inherits all columns. Parent tables can be plain tables or foreign tables.
  • PARTITION OF parent_table FOR VALUES partition_bound_spec: This is a form used to create the foreign table as a partition of the parent table with allocated bound values.
  • CONSTRAINT constraint_name: This is used to specify an optional name for a column or table constraint, and if not specified, the server generates a constraint name automatically.
  • NOT NULL: This simply means that the column is not permitted to have null values.
  • NULL: This simply means that the column is allowed to have null values, which is the default value.
  • CHECK (expression) [NO INHERIT]: The CHECK clause is used to specify an expression that produces a Boolean result that each row in a table must obey. A CHECK constraint specified as a column constraint references the column values only, whereas an expression of a table constraint references multiple tables.
  • DEFAULT default_expr: The DEFAULT clause is used to specify a default data value for the column whose column definition appears within. The data type of the default expression must match that of the data type in the column.
  • GENERATED ALWAYS AS ( generation_expr ) STORED: This clause creates the column as a generated column; it cannot be written to, and when read, it returns the result of the specified expression.
  • server_name: This is the name of an existing foreign server that will use the foreign table. If connecting to MySQL to query a table that resides in the MySQL file system, ensure that the SERVER and the server_name of the foreign server that uses the MySQL Data Adapter are included.
  • OPTIONS (option ‘value’ [, …] ): These options would be associated with the new foreign table or one of its columns and are specific to each Foreign Data Wrapper. For connection to MySQL, specify the following options:
    • dbname: The name of the database on the MySQL server.
    • table_name: This is the table’s name on the MySQL server; the default value would be the foreign table’s name.
    • max_blob_size: This is used to specify the maximum blob size to read without truncation.

Below is an example showing how a foreign table created on the Postgres host maps its columns to the columns of a table stored on a MySQL server. First, the MySQL table has the following definition:

CREATE TABLE warehouse (
  warehouse_id      INT PRIMARY KEY,
  warehouse_name    TEXT,
  warehouse_created TIMESTAMP
);

For the Postgres Server, execute the following command to create a comparable table on it:

CREATE FOREIGN TABLE warehouse
(
  warehouse_id      INT,
  warehouse_name    TEXT,
  warehouse_created TIMESTAMP
)
SERVER mysql_server
OPTIONS (dbname 'db', table_name 'warehouse');

Step 5: Import Foreign Schema

The IMPORT FOREIGN SCHEMA command is used to import table definitions onto the Postgres server from a foreign server, in this case the MySQL server. It creates foreign tables that represent tables already existing on the foreign server.

By default, it imports all tables and views existing in a particular schema on the foreign server. However, you can limit the import to a subset of tables or exclude specific tables.

IMPORT FOREIGN SCHEMA remote_schema
    [ { LIMIT TO | EXCEPT } ( table_name [, ...] ) ]
    FROM SERVER server_name
    INTO local_schema
    [ OPTIONS ( option 'value' [, ... ] ) ]

Here is the list of parameters used in this step: 

  • remote_schema: This is used to specify the remote schema to import from, which will be the MySQL database in this article.
  • LIMIT TO ( table_name [, …] ): This option allows you to limit the import to a specified subset of tables. Without it, all views and tables on the foreign server (the MySQL host) are imported.
  • EXCEPT ( table_name [, …] ): This option excludes specified foreign tables from the import; all existing tables will be imported except the ones listed.
  • SERVER server_name: This is used to indicate the server from which the foreign tables will be imported.
  • local_schema: This is used to specify the name of the local schema where the imported foreign tables will be created.
  • OPTIONS ( option ‘value’ [, …] ): This is used to set specific options during the import. You can select the following:
    • import_default: This controls whether column default expressions will be included in the definitions of importing the foreign tables from the foreign server. Its default value is false.
    • import_not_null: It controls if column NOT NULL constraints should be included in the definitions of importing the foreign tables from the foreign server. The default value is true.

An example of IMPORT FOREIGN SCHEMA is shown below. First, create two tables on the MySQL server with the following definitions:

CREATE TABLE color(cid INT PRIMARY KEY, cname TEXT);
INSERT INTO color VALUES (1, 'Red');
INSERT INTO color VALUES (2, 'Green');
INSERT INTO color VALUES (3, 'Orange');
CREATE TABLE fruit(fid INT PRIMARY KEY, fname TEXT);
INSERT INTO fruit VALUES (1, 'Orange');
INSERT INTO fruit VALUES (2, 'Mango');

On the Postgres server, execute the following command:

IMPORT FOREIGN SCHEMA edb FROM SERVER mysql_server INTO public;
SELECT * FROM color;
 cid | cname
 -----+--------
   1 | Red
   2 | Green
   3 | Orange
 (3 rows)
SELECT * FROM fruit;
 fid | fname
 -----+--------
   1 | Orange
   2 | Mango
 (2 rows)

Method 3: Using Hevo Data

Step 1: Configure your MySQL Source

Connect Hevo Data with your chosen data source, MySQL. To configure this, you need to specify a few settings: the pipeline name, database host, port, username, password, and database name.


Step 2: Configure PostgreSQL as your destination

Provide your PostgreSQL database credentials from an authorized PostgreSQL account, along with a name for the destination and the database.


You have now successfully established a connection between MySQL and PostgreSQL.

Post-Migration Testing & Validation

  • Data Integrity:
    • Compare source (MySQL) and target (PostgreSQL) data meticulously.
    • Verify constraints, data types, and business rules.
  • Performance:
    • Benchmark key workloads.
    • Monitor resource usage.
    • Compare performance with MySQL.
  • Application Testing:
    • Integrate and test applications thoroughly.
    • Conduct user acceptance testing (UAT).
  • Disaster Recovery:
    • Test backup/restore procedures.
    • Validate disaster recovery plan.
  • Continuous Monitoring:
    • Monitor performance, identify anomalies.
    • Conduct regular integrity checks.
  • Error Handling:
    • Configure robust error logging.
    • Establish error handling procedures.
  • Backup Before Migration:
    • Back up MySQL before migration to prevent data loss.
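A minimal sketch of the "compare source and target" idea in Python, assuming you have already fetched per-table row counts from each database (the fetching itself would go through your MySQL and PostgreSQL drivers; the counts below are hypothetical):

```python
def compare_row_counts(mysql_counts, postgres_counts):
    """Return (table, mysql_rows, postgres_rows) tuples for every table
    whose row count differs or that is missing on either side."""
    mismatches = []
    for table in sorted(set(mysql_counts) | set(postgres_counts)):
        src = mysql_counts.get(table)
        dst = postgres_counts.get(table)
        if src != dst:
            mismatches.append((table, src, dst))
    return mismatches

# Hypothetical counts gathered after the migration:
mysql_side = {"source_table": 5, "warehouse": 120}
postgres_side = {"source_table": 5, "warehouse": 118}  # two rows short
print(compare_row_counts(mysql_side, postgres_side))   # [('warehouse', 120, 118)]
```

Row-count comparison is only a first pass; checksumming column contents catches subtler discrepancies.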

Additional Resources

Performance Optimization Techniques

  • Indexing

In PostgreSQL, indexes enhance query performance by allowing rapid data retrieval. They work by creating a sorted data structure that lets the database engine quickly locate the rows that satisfy a WHERE clause. Common index types include B-tree for equality and range queries, GIN for full-text search, and GiST for complex data types.

  • Query Optimization

Query optimization in PostgreSQL involves simplifying queries, using appropriate joins, and ensuring efficient use of indexes. Tools like ‘EXPLAIN’ help understand query execution plans and identify areas for improvement.

For example, SELECT * FROM EMP WHERE ID=8; retrieves the entire row where ID is 8. However, if you only need certain columns, such as name and address, it is better to select just those:

SELECT NAME, ADDRESS FROM EMP WHERE ID=8;

  • Normalization and Denormalization

Normalization in PostgreSQL involves organizing data into tables and reducing redundancy by splitting large tables into smaller ones to maintain data integrity and minimize storage space. It aims to eliminate data anomalies and ensure each table serves a single purpose. 

On the other hand, denormalization involves combining tables to optimize query performance by reducing joins and improving data retrieval speed. It trades off some redundancy for faster read operations and is often used in read-heavy applications or for analytical purposes.

  • Partitioning

Partitioning in PostgreSQL involves dividing large tables into smaller, more manageable chunks called partitions based on specific criteria such as ranges or lists of values. It enables operations to target only relevant partitions, which enhances query performance and maintenance.

  • Caching

PostgreSQL caches frequently accessed data in shared memory buffers and supports query-level optimizations such as prepared statements. Function result caching can be achieved through custom implementations or third-party extensions to speed up repetitive queries.
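As a toy illustration of custom function-result caching, the sketch below memoizes a simulated lookup with Python's functools.lru_cache; in practice the function body would issue a real query through your PostgreSQL driver:

```python
from functools import lru_cache

CALLS = {"count": 0}  # track how many "queries" actually run

@lru_cache(maxsize=128)
def get_employee_name(employee_id: int) -> str:
    """Simulated expensive lookup; a real version would query PostgreSQL."""
    CALLS["count"] += 1
    fake_table = {1: "Elizabeth Cotten", 2: "Yanka Dyagileva"}
    return fake_table[employee_id]

get_employee_name(1)
get_employee_name(1)   # second call is served from the cache
print(CALLS["count"])  # 1
```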

Conclusion

Migrating MySQL data to PostgreSQL can unlock significant benefits, including enhanced performance, scalability, and access to advanced features like JSON support and full-text search. While tools like pgloader offer a command-line approach to simplify data migration, Hevo Data provides a more streamlined solution.

By leveraging Hevo, businesses can automate the entire data replication process, eliminating the need for manual scripts and reducing the risk of errors. Hevo enables real-time or near real-time data transfer, ensuring that the PostgreSQL database always has the latest information. Sign up for Hevo’s 14-day free trial and experience its robust infrastructure and user-friendly interface.


FAQs

Can MySQL connect to Postgres?

Yes, MySQL and PostgreSQL can interact with each other, but they do not connect directly in the way an application connects to a database. You typically bridge them with tools such as a Foreign Data Wrapper or a migration utility.

Can I use PostgreSQL if I know MySQL?

Yes, if you are familiar with MySQL, you can relatively easily transition to using PostgreSQL.

Can I use PostgreSQL instead of MySQL?

Yes, you can use PostgreSQL as a replacement for MySQL, depending on your specific needs and preferences.

Is PostgreSQL more secure than MySQL?

PostgreSQL has a strong reputation for security, with robust access controls, role-based privileges, and native SSL support; that said, the security of any database ultimately depends on how it is configured and maintained.

Ofem Eteng
Technical Content Writer, Hevo Data

Ofem Eteng is a seasoned technical content writer with over 12 years of experience. He has held pivotal roles such as System Analyst (DevOps) at Dagbs Nigeria Limited and Full-Stack Developer at Pedoquasphere International Limited. He specializes in data science, data analytics and cutting-edge technologies, making him an expert in the data industry.