How to Migrate from MySQL to PostgreSQL Without Data Loss
Published April 23, 2026 · 12 min read
Moving from MySQL to PostgreSQL is one of the most common database migrations in modern software engineering. Teams switch for richer data types, stricter ACID compliance, better JSON support, or simply because their ORM defaults to PostgreSQL. But migrations are risky. A single mis-mapped type or skipped constraint can corrupt data silently. This guide walks you through a safe, reversible migration from MySQL to PostgreSQL: schema first, data second, verification always.
Phase 1: Export your MySQL schema
Start with a clean schema dump. Avoid dumping data at this stage; you only need the structure:
mysqldump --no-data --routines --events \
-u root -p mydatabase > mysql_schema.sql
The --no-data flag ensures you get only CREATE TABLE, CREATE INDEX, and routine definitions. The --routines flag includes stored procedures and functions, and --events includes scheduled events; triggers are dumped by default. Evaluate all of these for PostgreSQL compatibility.
Phase 2: Convert the schema to PostgreSQL
This is where most migrations fail. MySQL and PostgreSQL speak different dialects. A naive copy-paste of MySQL DDL into PostgreSQL will throw syntax errors and create broken tables. Here is the type mapping you need:
| MySQL | PostgreSQL | Notes |
|---|---|---|
| INT | INTEGER | Direct equivalent |
| BIGINT | BIGINT | Direct equivalent |
| VARCHAR(n) | VARCHAR(n) | Direct equivalent |
| TEXT | TEXT | Direct equivalent |
| DATETIME | TIMESTAMP | Use TIMESTAMPTZ if you need timezone awareness |
| TINYINT(1) | BOOLEAN | MySQL uses 0/1; PostgreSQL uses true/false |
| FLOAT | REAL | Both are 32-bit IEEE floats |
| DOUBLE | DOUBLE PRECISION | Both are 64-bit IEEE floats |
| DECIMAL(p,s) | NUMERIC(p,s) | Direct equivalent |
| JSON | JSONB | JSONB is decomposed and indexable; preferred in PostgreSQL |
| ENUM(...) | CREATE TYPE ... AS ENUM | PostgreSQL enums are proper types |
| AUTO_INCREMENT | SERIAL or GENERATED ALWAYS AS IDENTITY | Identity columns are the modern standard |
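Applied to a small table, the mapping looks like this. The users table here is a hypothetical example; the left side is typical MySQL DDL, the right side is one reasonable PostgreSQL rendering of it:

```sql
-- MySQL original
CREATE TABLE users (
  id INT AUTO_INCREMENT PRIMARY KEY,
  email VARCHAR(255) NOT NULL,
  is_active TINYINT(1) DEFAULT 1,
  profile JSON,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB;

-- PostgreSQL equivalent after applying the type map
CREATE TABLE users (
  id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  email VARCHAR(255) NOT NULL,
  is_active BOOLEAN DEFAULT true,
  profile JSONB,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```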
Beyond types, you need to rewrite these MySQL-isms:
- Backtick quoting → Double quotes. MySQL uses `table_name`; PostgreSQL uses "table_name".
- CHARACTER SET and COLLATE → Remove them. PostgreSQL uses database-level encoding. Collations are specified differently.
- ENGINE=InnoDB → Remove it. PostgreSQL has no table-level engine clause.
- UNSIGNED → Remove it. PostgreSQL does not support UNSIGNED. Use CHECK constraints if you need non-negative values: CHECK (column >= 0).
- ON UPDATE CURRENT_TIMESTAMP → Replace with a trigger. PostgreSQL does not support this column attribute.
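For ON UPDATE CURRENT_TIMESTAMP, the standard replacement is a BEFORE UPDATE trigger. A sketch, with hypothetical table and column names:

```sql
-- PL/pgSQL replacement for MySQL's ON UPDATE CURRENT_TIMESTAMP
CREATE OR REPLACE FUNCTION set_updated_at()
RETURNS trigger AS $$
BEGIN
  NEW.updated_at := CURRENT_TIMESTAMP;  -- overwrite on every UPDATE
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_set_updated_at
BEFORE UPDATE ON users
FOR EACH ROW
EXECUTE FUNCTION set_updated_at();
```

One function can serve every table that has an updated_at column; you only need one trigger per table.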
Phase 3: Compare schemas before creating tables
Before you run a single CREATE TABLE in PostgreSQL, compare your converted schema against the original MySQL schema. This catches mapping errors early, when they are cheap to fix.
Paste your MySQL schema into SchemaLens with MySQL selected, paste your converted PostgreSQL schema with PostgreSQL selected, and run the comparison. SchemaLens will show you:
- Columns that changed type (e.g., TINYINT(1) → BOOLEAN)
- Constraints that were lost in translation
- Indexes that are missing or renamed
- Default values that differ
Fix every discrepancy before moving to data migration. A schema diff now saves hours of data recovery later.
Phase 4: Export data from MySQL
Use mysqldump with PostgreSQL-compatible options, or export to CSV for bulk loading:
# Option A: SQL dump (needs manual editing for syntax)
mysqldump --no-create-info --complete-insert --extended-insert=FALSE \
-u root -p mydatabase > mysql_data.sql
# Option B: CSV export per table (recommended for large datasets)
SELECT * INTO OUTFILE '/tmp/users.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM users;
For CSV export, use PostgreSQL's COPY command for fast bulk loading. Note that SELECT ... INTO OUTFILE does not write a header row, so do not set the HEADER option (or COPY will silently skip your first row of data):
COPY users(id, email, created_at)
FROM '/tmp/users.csv'
WITH (FORMAT csv);
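One more detail worth checking: INTO OUTFILE writes NULL values as \N by default. If your export uses that default, pass the marker to COPY, or NULLs arrive as literal text. A variant of the COPY above (same hypothetical table):

```sql
-- Map MySQL's default \N null marker to SQL NULL during import
COPY users(id, email, created_at)
FROM '/tmp/users.csv'
WITH (FORMAT csv, NULL '\N');
```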
Phase 5: Handle boolean and binary data carefully
Two data types are notorious for silent corruption during MySQL → PostgreSQL migrations:
Booleans: MySQL stores booleans as TINYINT(1) with values 0 and 1. PostgreSQL expects true/false or 't'/'f'. If you import 0/1 into a BOOLEAN column, PostgreSQL will accept it, but some drivers and ORMs behave differently. Explicitly convert during import:
-- If the column was loaded into an integer staging column, convert it after import
ALTER TABLE users ALTER COLUMN is_active TYPE BOOLEAN
USING CASE WHEN is_active = 1 THEN true ELSE false END;
Binary data (BLOB): MySQL BLOB maps to PostgreSQL BYTEA. Encoding differences between hex and escape formats can corrupt images and files. Use BYTEA with the hex format (default in modern PostgreSQL) and verify a few files after import.
Phase 6: Migrate stored routines and triggers
MySQL and PostgreSQL stored procedure languages are incompatible. MySQL uses SQL/PSM; PostgreSQL uses PL/pgSQL. You cannot automatically translate them; they must be rewritten.
My recommendation: do not migrate stored procedures in the same deployment as the schema migration. Rewrite them as application code first, or schedule a separate migration sprint for complex routines. This keeps the critical path (schema + data) clean and reversible.
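To see the dialect gap concretely, here is the same trivial routine in both languages. The routine itself is a hypothetical example, and the PostgreSQL version is one of several reasonable rewrites:

```sql
-- MySQL (SQL/PSM)
DELIMITER //
CREATE PROCEDURE count_users(OUT total INT)
BEGIN
  SELECT COUNT(*) INTO total FROM users;
END //
DELIMITER ;

-- PostgreSQL (PL/pgSQL): body syntax, delimiters, and call style all differ
CREATE OR REPLACE FUNCTION count_users()
RETURNS bigint AS $$
BEGIN
  RETURN (SELECT COUNT(*) FROM users);
END;
$$ LANGUAGE plpgsql;
```

Even this two-line routine needs a different declaration style, a different parameter model, and a different way of returning the result, which is why hand rewriting is unavoidable.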
Phase 7: Verify everything
Verification is non-negotiable. Run these checks before cutting over traffic:
- Row counts match: SELECT COUNT(*) FROM table on both sides for every table.
- Checksum sample data: Hash a few columns on random rows to confirm values are identical.
- Test constraints: Try inserting invalid data (duplicate PK, NULL in NOT NULL column) and confirm PostgreSQL rejects it.
- Run application tests: Point your test suite at the PostgreSQL instance. Every query should pass.
- Performance baseline: Run your slowest queries and compare execution times. Missing indexes are the most common post-migration performance regression.
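The first two checks can be scripted directly in SQL. A sketch, using a hypothetical users table; run the matching statement on each side and compare the output:

```sql
-- Row count: must match exactly on both sides
SELECT COUNT(*) FROM users;

-- Content checksum on a sample column, same ordering on both sides.
-- Bound the row range: GROUP_CONCAT truncates at group_concat_max_len.
-- MySQL version (GROUP_CONCAT's default separator is ','):
SELECT MD5(GROUP_CONCAT(email ORDER BY id))
FROM users WHERE id <= 1000;

-- PostgreSQL version, built to produce the identical string:
SELECT md5(string_agg(email, ',' ORDER BY id))
FROM users WHERE id <= 1000;
```

If the two hashes match, the sampled column is byte-identical on both sides; if they differ, bisect the id range to find the corrupted rows.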
Phase 8: The cutover
Never do an in-place cutover on a Friday. Use one of these patterns:
Blue-green cutover (recommended):
- Keep MySQL as the primary database.
- Replicate writes to PostgreSQL in real time using a tool like pg_chameleon or Debezium.
- Switch read traffic to PostgreSQL first.
- Switch write traffic after 24-48 hours of clean reads.
- Keep MySQL on standby for 1 week as a rollback option.
Maintenance window cutover:
- Stop application writes.
- Export final MySQL data delta.
- Import delta into PostgreSQL.
- Update application connection strings.
- Resume traffic.
Common pitfalls to avoid
- Assuming NULL behavior is identical: In MySQL, NULL = NULL evaluates to NULL. In PostgreSQL, it also evaluates to NULL, but some edge cases in GROUP BY and DISTINCT differ. Test carefully.
- Forgetting to re-create sequences: If you used SERIAL or IDENTITY, ensure sequence start values are set above the maximum imported ID. Otherwise inserts will fail with duplicate key errors.
- Case sensitivity: MySQL's VARCHAR comparison is case-insensitive by default on many collations. PostgreSQL is case-sensitive unless you use ILIKE or a case-insensitive collation. Update queries that rely on case folding.
- String length limits: MySQL VARCHAR(255) counts characters in some collations and bytes in others. PostgreSQL VARCHAR(255) counts characters. Multi-byte strings that fit in MySQL may exceed the limit in PostgreSQL.
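For the sequence pitfall, fixing the start value after import is one statement per table. A sketch with a hypothetical users table and id column:

```sql
-- Advance the identity sequence past the largest imported id,
-- so the next generated value does not collide with existing rows
SELECT setval(
  pg_get_serial_sequence('users', 'id'),
  COALESCE((SELECT MAX(id) FROM users), 1)
);
```

pg_get_serial_sequence resolves the sequence backing either a SERIAL or an identity column, so the same statement works for both styles.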
Making migration safe with SchemaLens
SchemaLens reduces migration risk by giving you a semantic diff of your schemas before you move data. Paste your original MySQL schema and your converted PostgreSQL schema, and SchemaLens highlights every difference that matters: types, defaults, constraints, and indexes. No more guessing whether your migration script is correct.
Compare MySQL and PostgreSQL schemas →