Production database schemas need more than just tables and foreign keys — they need audit trails to answer “who changed what and when,” soft delete to recover accidentally removed data, optimistic locking to handle concurrent updates without data corruption, and sometimes multi-tenancy to serve multiple organisations from a single database. These patterns recur across virtually every serious web application, and PostgreSQL supports all of them elegantly. Understanding these patterns before building the FastAPI application in Part 3 means you will implement them correctly from the start rather than retrofitting them later.
Audit Logging
-- ── Option 1: timestamp columns on every table ───────────────────────────────
-- Simple but limited — only tracks when, not who or what changed
ALTER TABLE posts ADD COLUMN created_by BIGINT REFERENCES users(id);
ALTER TABLE posts ADD COLUMN updated_by BIGINT REFERENCES users(id);
ALTER TABLE posts ADD COLUMN created_at TIMESTAMPTZ NOT NULL DEFAULT NOW();
ALTER TABLE posts ADD COLUMN updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW();
-- Note: DEFAULT only fires on INSERT; keep updated_at current with a
-- BEFORE UPDATE trigger, or set it explicitly in every UPDATE
-- ── Option 2: dedicated audit_log table ──────────────────────────────────────
-- Tracks every change to every audited table
CREATE TABLE audit_log (
    id BIGSERIAL PRIMARY KEY,
    table_name TEXT NOT NULL,
    record_id BIGINT NOT NULL,
    action TEXT NOT NULL CHECK (action IN ('INSERT', 'UPDATE', 'DELETE')),
    changed_by BIGINT REFERENCES users(id),
    changed_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    old_data JSONB, -- previous row data (NULL for INSERT)
    new_data JSONB  -- new row data (NULL for DELETE)
);
CREATE INDEX idx_audit_table_record ON audit_log(table_name, record_id);
CREATE INDEX idx_audit_changed_at ON audit_log(changed_at DESC);
CREATE INDEX idx_audit_changed_by ON audit_log(changed_by);
-- Trigger function to populate audit_log automatically
CREATE OR REPLACE FUNCTION audit_log_trigger()
RETURNS TRIGGER AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO audit_log (table_name, record_id, action, new_data)
        VALUES (TG_TABLE_NAME, NEW.id, TG_OP, to_jsonb(NEW));
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO audit_log (table_name, record_id, action, old_data, new_data)
        VALUES (TG_TABLE_NAME, NEW.id, TG_OP, to_jsonb(OLD), to_jsonb(NEW));
    ELSE -- DELETE: NEW is not assigned here, so only OLD is usable
        INSERT INTO audit_log (table_name, record_id, action, old_data)
        VALUES (TG_TABLE_NAME, OLD.id, TG_OP, to_jsonb(OLD));
    END IF;
    RETURN NULL; -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;
-- Apply to posts table
CREATE TRIGGER posts_audit
AFTER INSERT OR UPDATE OR DELETE ON posts
FOR EACH ROW EXECUTE FUNCTION audit_log_trigger();
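Rendering an audit history usually means showing which fields changed between old_data and new_data. A minimal helper sketch in plain Python (the function name is ours; it assumes the JSONB columns have already been deserialised to dicts):

```python
def audit_diff(old_data, new_data):
    """Return {field: (old, new)} for fields that differ between two
    audit_log snapshots. Either side may be None: INSERT rows have no
    old_data, DELETE rows have no new_data."""
    old = old_data or {}
    new = new_data or {}
    changed = {}
    for field in old.keys() | new.keys():  # union of columns on both sides
        before, after = old.get(field), new.get(field)
        if before != after:
            changed[field] = (before, after)
    return changed
```

For example, `audit_diff({"title": "Hello", "status": "draft"}, {"title": "Hello", "status": "published"})` returns `{"status": ("draft", "published")}`.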
As written, the trigger leaves changed_by NULL: the database has no idea which application user made the change unless your application sets a session variable that the trigger reads. For that level of attribution, run SET LOCAL app.current_user_id = :user_id at the start of each transaction and read it in the trigger with current_setting('app.current_user_id', TRUE) (the TRUE makes it return NULL instead of raising an error when the setting is absent).
Soft Delete
Soft delete keeps removed rows in the table, marked with a deleted_at timestamp, so they can be restored. Rather than repeating the WHERE deleted_at IS NULL filter in every query, you can hide it behind a view. Name the view the same as the original table and rename the original: ALTER TABLE users RENAME TO users_all; CREATE VIEW users AS SELECT * FROM users_all WHERE deleted_at IS NULL;. Application code querying users automatically gets only non-deleted rows. This pattern eliminates a class of bugs where a developer forgets the soft-delete filter.
-- Add soft delete column
ALTER TABLE posts ADD COLUMN deleted_at TIMESTAMPTZ;
ALTER TABLE posts ADD COLUMN deleted_by BIGINT REFERENCES users(id);
-- Soft delete: mark as deleted
UPDATE posts
SET deleted_at = NOW(), deleted_by = :user_id
WHERE id = :post_id
RETURNING id, deleted_at;
-- Restore (undo soft delete)
UPDATE posts
SET deleted_at = NULL, deleted_by = NULL
WHERE id = :post_id;
-- Query active posts (most common query)
SELECT id, title, status FROM posts
WHERE deleted_at IS NULL
ORDER BY created_at DESC;
-- Partial unique index: unique slug among non-deleted posts
CREATE UNIQUE INDEX idx_posts_slug_nondeleted
ON posts(slug)
WHERE deleted_at IS NULL;
-- Hard delete old soft-deleted records (data retention policy)
DELETE FROM posts
WHERE deleted_at < NOW() - INTERVAL '90 days';
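The whole lifecycle is easy to exercise end to end. A self-contained sketch using Python's sqlite3 as a stand-in for PostgreSQL (TIMESTAMPTZ becomes TEXT and NOW() becomes datetime('now'); the pattern itself carries over unchanged):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE posts (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    deleted_at TEXT,          -- TIMESTAMPTZ in PostgreSQL
    deleted_by INTEGER)""")
conn.execute("INSERT INTO posts (id, title) VALUES (1, 'Hello'), (2, 'World')")

# Soft delete post 1: mark it, don't remove it
conn.execute(
    "UPDATE posts SET deleted_at = datetime('now'), deleted_by = ? WHERE id = ?",
    (99, 1))

# Active posts only: the filter every read query needs
active = conn.execute(
    "SELECT title FROM posts WHERE deleted_at IS NULL").fetchall()
print(active)  # [('World',)]

# Restore: the data was never gone
conn.execute("UPDATE posts SET deleted_at = NULL, deleted_by = NULL WHERE id = 1")
```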
Optimistic Locking
-- Optimistic locking: detect concurrent modifications without holding DB locks
-- Use a version counter that increments on every update
ALTER TABLE posts ADD COLUMN version INTEGER NOT NULL DEFAULT 1;
-- Read: client receives the current version with the data
SELECT id, title, body, version FROM posts WHERE id = 42;
-- Client gets: {id: 42, title: "Hello", body: "...", version: 3}
-- Update: only proceed if version matches what client read
UPDATE posts
SET title = 'Updated Title',
    body = 'New body',
    version = version + 1, -- increment version
    updated_at = NOW()
WHERE id = 42
  AND version = 3 -- fails if another request changed it first
RETURNING id, version;
-- Returns row: success (version is now 4)
-- Returns empty: version was not 3 (another update happened since we read)
-- FastAPI handler logic:
-- result = db.execute(update_query)
-- if result.rowcount == 0:
--     raise HTTPException(409, "Post was modified by another user. Reload and try again.")
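That handler logic can be demonstrated without a running PostgreSQL. A sketch using Python's sqlite3 as a stand-in (cursor.rowcount plays the role of the empty RETURNING set; the helper name is ours):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT, "
    "version INTEGER NOT NULL DEFAULT 1)")
conn.execute("INSERT INTO posts (id, title) VALUES (42, 'Hello')")

def update_title(conn, post_id, new_title, client_version):
    """Apply the update only if nobody changed the row since the client read it."""
    cur = conn.execute(
        "UPDATE posts SET title = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_title, post_id, client_version))
    return cur.rowcount == 1  # False means concurrent modification: answer HTTP 409

ok = update_title(conn, 42, "Updated Title", 1)   # version matched: succeeds
stale = update_title(conn, 42, "Stale Title", 1)  # version is now 2: rejected
print(ok, stale)  # True False
```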
Multi-Tenancy Patterns
-- ── Row-Level Security (RLS) for multi-tenancy ────────────────────────────────
-- Each row has an organisation_id; users only see their org's data
-- Add organisation context to tables
ALTER TABLE posts ADD COLUMN org_id BIGINT NOT NULL REFERENCES organisations(id);
ALTER TABLE users ADD COLUMN org_id BIGINT NOT NULL REFERENCES organisations(id);
-- Enable Row Level Security on posts
ALTER TABLE posts ENABLE ROW LEVEL SECURITY;
-- Policy: users can only see posts from their organisation
CREATE POLICY posts_org_isolation ON posts
    USING (org_id = current_setting('app.current_org_id')::BIGINT);
-- current_setting() raises if the variable is unset; pass TRUE as a second
-- argument to get NULL instead, which makes the policy match no rows
-- Set org context at session start (in FastAPI dependency)
-- SET LOCAL app.current_org_id = :org_id
-- Now all queries automatically filter by org:
SELECT * FROM posts; -- returns only posts where org_id matches session setting
-- No WHERE clause needed — RLS adds it automatically!
-- Bypassing RLS for admin operations: superusers and roles with BYPASSRLS
-- are never subject to policies, and by default neither is the table owner
-- (use ALTER TABLE posts FORCE ROW LEVEL SECURITY to police the owner too).
-- Note that row_security = off is not an off switch for ordinary roles:
SET LOCAL row_security = off;
-- it makes queries that a policy would filter fail with an error instead,
-- which keeps tools like pg_dump from silently exporting a subset of rows
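One practical wrinkle when setting the org context from application code: many drivers will not bind parameters inside SET LOCAL, so it is safer to go through set_config(), an ordinary function call that does accept bind parameters (true as the third argument gives the same transaction-local scope as SET LOCAL; the same trick works for app.current_user_id in the audit trigger). A hedged sketch with a helper name of our own:

```python
def org_context_query(org_id):
    """Return a (sql, params) pair that sets app.current_org_id for the
    current transaction. set_config(..., true) behaves like SET LOCAL,
    but can be parameterised, so no string interpolation is needed."""
    # Reject anything that is not a plain int (bool is an int subclass)
    if not isinstance(org_id, int) or isinstance(org_id, bool):
        raise TypeError("org_id must be an integer")
    return ("SELECT set_config('app.current_org_id', %s, true)", (str(org_id),))
```

In a FastAPI dependency you would execute this pair at the start of each request's transaction, before any tenant-scoped query runs.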
Common Mistakes
Mistake 1 — Not implementing data retention on audit tables
❌ Wrong — audit table grows forever:
-- After 2 years: audit_log has 50M rows consuming 20GB of disk
✅ Correct — schedule regular archival or use table partitioning with monthly partitions.
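A retention job can be as small as one parameterised DELETE run on a schedule (cron, Celery beat, and so on). A sketch with sqlite3 standing in for PostgreSQL; in production you would batch the deletes to limit lock time and WAL churn:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE audit_log (
    id INTEGER PRIMARY KEY,
    table_name TEXT NOT NULL,
    changed_at TEXT NOT NULL)""")
conn.executemany(
    "INSERT INTO audit_log (table_name, changed_at) VALUES (?, ?)",
    [("posts", "2023-01-01 00:00:00"),   # outside the window: purged
     ("posts", "2099-01-01 00:00:00")])  # inside the window: survives

def purge_audit_log(conn, retention_days=90):
    """Hard-delete audit rows older than the retention window."""
    cur = conn.execute(
        "DELETE FROM audit_log WHERE changed_at < datetime('now', ?)",
        (f"-{retention_days} days",))
    return cur.rowcount  # number of rows purged

purged = purge_audit_log(conn)
print(purged)  # 1
```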
Mistake 2 — Forgetting WHERE deleted_at IS NULL on every query
❌ Wrong — shows deleted records to users:
SELECT * FROM posts WHERE author_id = 1; -- includes soft-deleted posts!
✅ Correct — always filter:
SELECT * FROM posts WHERE author_id = 1 AND deleted_at IS NULL;
-- Or use a view that adds the filter automatically ✓
Mistake 3 — Optimistic lock version not checked, allowing overwrite
❌ Wrong — last writer wins (loses concurrent changes):
db.execute("UPDATE posts SET title = %s WHERE id = %s", (new_title, post_id))
# No version check — overwrites any concurrent changes silently!
✅ Correct — check version, return 409 on mismatch:
result = db.execute(
    "UPDATE posts SET title = %s, version = version + 1 "
    "WHERE id = %s AND version = %s",
    (new_title, post_id, client_version))
if result.rowcount == 0:
    raise HTTPException(409, "Concurrent modification detected")  # ✓
Quick Reference
| Pattern | Key Columns | Main Use |
|---|---|---|
| Audit timestamps | created_at, updated_at, created_by, updated_by | Basic change tracking |
| Full audit log | Separate table with old/new JSONB | Who changed what, full history |
| Soft delete | deleted_at TIMESTAMPTZ, deleted_by BIGINT | Recoverable deletion |
| Optimistic locking | version INTEGER NOT NULL DEFAULT 1 | Concurrent update detection |
| Row-Level Security | org_id + RLS POLICY | Multi-tenant data isolation |