โ† Back to Blog

SQL CREATE TABLE Best Practices for Production Databases

April 23, 2026 · 10 min read

A poorly designed table is a ticking time bomb. It might work fine in development with 100 rows, but in production with 10 million rows, the same schema causes query timeouts, migration failures, and data corruption. The good news: most production schema problems are preventable with a short checklist of CREATE TABLE best practices.

This post covers 11 rules that every production CREATE TABLE statement should follow. They are dialect-agnostic where possible, with PostgreSQL, MySQL, SQL Server, and SQLite examples where they differ.

1. Every table needs a primary key

This sounds obvious, but ORMs and quick prototypes often skip it. A table without a primary key cannot be reliably updated, referenced by foreign keys, or efficiently joined. Some databases (like MySQL's InnoDB) will silently create a hidden 6-byte key for you, which wastes space and makes troubleshooting harder.

-- PostgreSQL
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  ...
);

-- MySQL
CREATE TABLE users (
  id INT AUTO_INCREMENT PRIMARY KEY,
  ...
);

-- SQL Server
CREATE TABLE users (
  id INT IDENTITY(1,1) PRIMARY KEY,
  ...
);
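If you are on PostgreSQL 10 or newer (or another database that supports standard identity columns), GENERATED ALWAYS AS IDENTITY is generally preferred over SERIAL: it follows the SQL standard and avoids SERIAL's detached-sequence ownership quirks. A minimal sketch:

-- PostgreSQL 10+ / standard SQL identity column
CREATE TABLE users (
  id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  email VARCHAR(255) NOT NULL
);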

Why it matters: Primary keys give the database a guaranteed unique row identifier. They are the foundation for relationships, replication, and ORM identity mapping.

2. Use the right integer size

Do not default to BIGINT or INT out of habit. Choose the smallest integer type that fits your expected row count:

- SMALLINT: 2 bytes, up to 32,767
- INT: 4 bytes, up to about 2.1 billion
- BIGINT: 8 bytes, up to about 9.2 quintillion

Smaller integers mean smaller indexes, faster joins, and less memory usage. In PostgreSQL, SERIAL maps to INT; use BIGSERIAL (which maps to BIGINT) only when you genuinely expect more than roughly 2 billion rows.
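As a concrete illustration (table and column names are hypothetical), match each column's type to its realistic range:

CREATE TABLE page_views (
  id BIGSERIAL PRIMARY KEY,      -- high-volume event table: billions of rows expected
  user_id INT NOT NULL,          -- user count will stay well under 2 billion
  http_status SMALLINT NOT NULL  -- values 100-599 fit easily in 2 bytes
);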

3. Always specify NOT NULL explicitly

Columns are nullable by default in every major database, so a forgotten NOT NULL constraint creates a landmine: your application assumes a value exists, but the database quietly allows NULL.

CREATE TABLE orders (
  id SERIAL PRIMARY KEY,
  user_id INT NOT NULL,
  total_cents INT NOT NULL,
  created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

Be explicit about nullability on every column. It documents intent and catches bugs at the database layer.

4. Add created_at and updated_at timestamps

Every production table should track when rows were created and modified. These timestamps are invaluable for debugging, auditing, analytics, and soft-delete strategies.

-- PostgreSQL
CREATE TABLE products (
  id SERIAL PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Trigger for updated_at (PostgreSQL)
CREATE OR REPLACE FUNCTION update_updated_at()
RETURNS TRIGGER AS $$
BEGIN
  NEW.updated_at = CURRENT_TIMESTAMP;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER products_updated_at
BEFORE UPDATE ON products
FOR EACH ROW
EXECUTE FUNCTION update_updated_at();
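If you are on MySQL, you do not need a trigger at all: TIMESTAMP columns support an ON UPDATE clause that accomplishes the same thing.

-- MySQL
CREATE TABLE products (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);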

5. Index every foreign key column

Foreign keys enforce referential integrity, but in most databases (PostgreSQL, SQL Server, SQLite) they do not automatically create an index on the referencing column; MySQL's InnoDB is the notable exception. Without that index, every JOIN on the child table, and every UPDATE or DELETE of a referenced parent row, forces a full scan of the child table.

CREATE TABLE orders (
  id SERIAL PRIMARY KEY,
  user_id INT NOT NULL REFERENCES users(id)
);

-- Don't forget this!
CREATE INDEX idx_orders_user_id ON orders (user_id);

Pro tip: Use the SQL Index Analyzer to automatically detect unindexed foreign keys in your schema.

6. Use CHECK constraints for data integrity

CHECK constraints are the cheapest data validation you can add. They run once on INSERT/UPDATE and prevent garbage data from entering your tables.

CREATE TABLE products (
  id SERIAL PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  price_cents INT NOT NULL CHECK (price_cents >= 0),
  stock INT NOT NULL DEFAULT 0 CHECK (stock >= 0),
  status VARCHAR(20) NOT NULL CHECK (status IN ('active', 'discontinued', 'draft'))
);

CHECK constraints are supported by PostgreSQL, MySQL 8.0.16+, SQL Server, and SQLite. They are faster than application-level validation and survive regardless of which client connects.
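One refinement worth adopting: name your CHECK constraints. A named constraint produces clearer error messages and is far easier to drop or replace in a later migration. A sketch of the same table with explicit names (the constraint names are illustrative):

CREATE TABLE products (
  id SERIAL PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  price_cents INT NOT NULL,
  status VARCHAR(20) NOT NULL,
  CONSTRAINT chk_products_price_nonnegative CHECK (price_cents >= 0),
  CONSTRAINT chk_products_status_valid CHECK (status IN ('active', 'discontinued', 'draft'))
);

-- Later, the rule can be swapped out by name:
-- ALTER TABLE products DROP CONSTRAINT chk_products_status_valid;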

7. Choose VARCHAR with a length limit

Unbounded text fields are convenient but dangerous. A VARCHAR without a length (or TEXT used for short strings) allows unexpectedly large values that bloat indexes, slow queries, and break UI layouts.

CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  email VARCHAR(255) NOT NULL,
  display_name VARCHAR(100) NOT NULL,
  bio TEXT  -- OK for genuinely long content
);

Set limits based on real requirements: 255 for emails, 100 for display names, 2 for ISO country codes. If a field truly needs unlimited length, TEXT is the right choice.

8. Use DECIMAL for money, not FLOAT

Floating-point types (FLOAT, REAL, DOUBLE PRECISION) use binary fractions and cannot represent most decimal values exactly. This causes rounding errors that accumulate over time.

-- Wrong: 0.1 + 0.2 != 0.3 with FLOAT
CREATE TABLE invoices (
  id SERIAL PRIMARY KEY,
  total FLOAT  -- Dangerous
);

-- Right: exact decimal arithmetic
CREATE TABLE invoices (
  id SERIAL PRIMARY KEY,
  total DECIMAL(12, 2) NOT NULL  -- 999,999,999.99 max
);

Always use DECIMAL (or NUMERIC) for money, prices, and any value where precision matters.
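You can observe the drift directly. In PostgreSQL, for example, the floating-point sum comes out slightly off from 0.3 (the exact digits displayed depend on server settings), while the DECIMAL sum is exact:

-- PostgreSQL
SELECT CAST(0.1 AS DOUBLE PRECISION) + CAST(0.2 AS DOUBLE PRECISION);
-- something like 0.30000000000000004, not 0.3

SELECT CAST(0.1 AS DECIMAL(12, 2)) + CAST(0.2 AS DECIMAL(12, 2));
-- exactly 0.30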

9. Normalize to third normal form, then denormalize selectively

Start with properly normalized tables: store each fact exactly once, eliminate repeating groups, and make every non-key column depend on the key, the whole key, and nothing but the key (third normal form).

Denormalize only when you have measured a performance problem and the JOIN cost is proven to be the bottleneck. Premature denormalization creates update anomalies and data inconsistency.
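For instance, instead of repeating customer data on every order row, split it out. A minimal sketch with hypothetical tables:

-- Normalized: customer data lives in exactly one place
CREATE TABLE customers (
  id SERIAL PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  email VARCHAR(255) NOT NULL
);

CREATE TABLE orders (
  id SERIAL PRIMARY KEY,
  customer_id INT NOT NULL REFERENCES customers(id),
  total_cents INT NOT NULL
);

-- Denormalize selectively, and only after measuring:
-- ALTER TABLE orders ADD COLUMN customer_email VARCHAR(255);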

10. Plan for soft deletes

Hard-deleting rows destroys audit trails and makes data recovery impossible. A soft-delete pattern adds a deleted_at timestamp and filters it in queries.

CREATE TABLE customers (
  id SERIAL PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  deleted_at TIMESTAMP,
  CHECK (deleted_at IS NULL OR deleted_at >= created_at)
);

-- Query active rows only
SELECT * FROM customers WHERE deleted_at IS NULL;

Adding deleted_at later requires a migration that touches every query in your application. It is far easier to include it from the start.
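One complication soft deletes introduce: a plain UNIQUE constraint will block re-using a value (such as an email) that still sits on a deleted row. In PostgreSQL, a partial unique index solves this; the email column here is illustrative, not part of the table above:

-- PostgreSQL: uniqueness applies only to live rows
CREATE UNIQUE INDEX uniq_customers_email_live
ON customers (email)
WHERE deleted_at IS NULL;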

11. Document your schema in code

Use COMMENT ON (PostgreSQL) or inline comments to explain non-obvious design decisions. Future maintainers (including yourself in six months) will thank you.

-- PostgreSQL
COMMENT ON TABLE users IS 'Core user accounts. Email is unique and verified before insertion.';
COMMENT ON COLUMN users.role IS 'One of: admin, editor, viewer. Controlled by invite flow.';

Even better, generate living documentation from your CREATE TABLE statements with the Schema Documentation Generator.

Bonus: Validate before you deploy

Run your schema through a linter before every deployment. The Schema Health Check catches missing primary keys, unindexed foreign keys, missing timestamps, and other common mistakes in seconds.

Ready to audit your schema?

Paste your CREATE TABLE statements into the Schema Health Check or SQL Index Analyzer and get instant feedback on production readiness.

Related reading

© 2026 SchemaLens · SchemaLens is a browser-based SQL schema diff tool built in the $100 AI Startup Race.