My Shocking 2025 Experiment: Making Postgres 42,000x Slower
Ever wondered how to make Postgres 42,000x slower? We ran a deliberately destructive experiment to reveal common performance mistakes you might be making right now.
Alexei Petrov
Principal Database Engineer specializing in PostgreSQL performance tuning and large-scale data architecture.
The Premise: Why Intentionally Break a Great Database?
You read that title correctly. I took a perfectly standard, out-of-the-box PostgreSQL instance and, through a series of deliberate missteps, made a simple query run 42,000 times slower. This wasn't an act of madness, but a calculated experiment for 2025: to dramatically illustrate the performance cliffs that exist in database management. It’s one thing to read about best practices; it’s another to witness the catastrophic impact of ignoring them.
Postgres is renowned for its speed, reliability, and robust feature set. It doesn't just become slow. Slowness is introduced, often unintentionally, through a combination of poor schema design, inefficient queries, and misconfiguration. By exploring the worst possible anti-patterns, we can gain a profound appreciation for the right way to do things and learn to spot these silent performance killers in our own applications.
The Setup: Our Baseline for Speed
To begin our descent into performance hell, we first needed a heaven-like baseline. Our test environment was modest, reflecting a typical development setup:
- Instance: A standard cloud VM with 2 vCPUs and 8GB of RAM.
- PostgreSQL Version: 16
- Dataset: Two tables: `products` (1 million rows) and `inventory_logs` (20 million rows, tracking stock changes for each product); a minimal schema sketch follows below.
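For concreteness, here is a minimal sketch of the two tables, trimmed to the columns that matter later on; the exact types and the extra columns are illustrative rather than a faithful copy of the test schema:

```sql
-- Minimal sketch of the test schema. Only sku, product_id, and log_date are
-- referenced by the query; store_id and change_qty are illustrative extras.
CREATE TABLE products (
    id  bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    sku text NOT NULL
);

CREATE TABLE inventory_logs (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    product_id bigint NOT NULL REFERENCES products (id),
    store_id   bigint,
    log_date   timestamptz NOT NULL,
    change_qty integer NOT NULL
);
```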
The goal was to perform a common, real-world query: "Find the most recent inventory log for a specific product SKU."
The optimized query looks like this:
```sql
SELECT *
FROM inventory_logs
WHERE product_id = (SELECT id FROM products WHERE sku = 'SHOE-Z1-RED-11')
ORDER BY log_date DESC
LIMIT 1;
```
With the correct indexes in place (a B-tree index on `products(sku)` and a composite B-tree index on `inventory_logs(product_id, log_date DESC)`), Postgres handles this with breathtaking efficiency. The query planner uses the indexes to pinpoint the exact data without scanning or sorting millions of rows.
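For reference, those two indexes can be created like this; the composite index name matches the one dropped in Step 1, while the name of the SKU index is illustrative:

```sql
-- Equality lookups by SKU (index name is illustrative).
CREATE INDEX idx_products_sku ON products (sku);

-- Equality on product_id, then newest-first on log_date, so the
-- ORDER BY ... LIMIT 1 can be answered straight from the index.
CREATE INDEX idx_inventory_logs_product_id_log_date
    ON inventory_logs (product_id, log_date DESC);
```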
Baseline Execution Time: 0.5 milliseconds. That's our starting point. Now, let's see how we can turn this into a 21-second nightmare (0.5ms * 42,000 = 21,000ms).
The Sabotage: A 5-Step Guide to Database Disaster
Here’s how we systematically dismantled our query's performance, layer by painful layer.
Step 1: The Index Annihilator
The fastest way to slow down a specific query is to remove the tool designed to make it fast. We started by dropping the composite index on our 20-million-row `inventory_logs` table.
```sql
DROP INDEX idx_inventory_logs_product_id_log_date;
```
Without this index, Postgres can no longer jump directly to the logs for the specific product. Instead, it must perform a full table scan, reading all 20 million rows from disk into memory. After finding all matching logs for our product, it then has to sort them to find the most recent one. This single change is devastating.
New Query Time: ~9,000ms (9 seconds).
Slowdown Factor: 18,000x
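If you want to watch this happen, `EXPLAIN (ANALYZE, BUFFERS)` shows the plan change directly; the annotated plan shape below is illustrative, not captured output:

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM inventory_logs
WHERE product_id = (SELECT id FROM products WHERE sku = 'SHOE-Z1-RED-11')
ORDER BY log_date DESC
LIMIT 1;

-- With the composite index gone, the plan looks roughly like:
--   Limit
--     -> Sort
--          Sort Key: log_date DESC
--          -> Seq Scan on inventory_logs      (reads all ~20M rows)
--               Filter: (product_id = $0)     (result of the subquery)
```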
Step 2: The Function Façade
Next, we introduced a subtle but common developer mistake: using a function on an indexed column in the `WHERE` clause. We modified our subquery to be case-insensitive, but in the worst way possible.
```sql
... WHERE upper(sku) = 'SHOE-Z1-RED-11' ...
```
Even though our `products.sku` column is indexed, applying the `upper()` function to it prevents Postgres from using the standard B-tree index. The database has no choice but to perform a full table scan on the `products` table, applying the function to every single row before making the comparison. While this only adds a little time on its own, it's a classic anti-pattern that compounds other issues.
New Query Time: ~9,200ms (9.2 seconds).
Slowdown Factor: 18,400x
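For the record, the right way to get case-insensitive lookups without losing the index is an expression index, which we return to in the checklist at the end. A minimal sketch (the index name is illustrative):

```sql
-- An expression index on upper(sku) lets the planner use an index scan
-- for WHERE upper(sku) = '...' instead of scanning the whole table.
CREATE INDEX idx_products_upper_sku ON products (upper(sku));

-- This predicate can now be served by the index:
SELECT id FROM products WHERE upper(sku) = 'SHOE-Z1-RED-11';
```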
Step 3: The Row-Level Slowdown
Row-Level Security (RLS) is a powerful feature for fine-grained access control. However, if the policy's logic is inefficient, it can become a performance bottleneck. We created a policy on `inventory_logs` that, for every row being considered, called a custom function to check permissions.
This function performed a few unnecessary joins to other tables (e.g., `stores`, `regions`) that were themselves not properly indexed. The RLS policy forces this expensive check on every single one of the 20 million rows being read during the full table scan we introduced in Step 1.
New Query Time: ~15,000ms (15 seconds).
Slowdown Factor: 30,000x
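The exact policy isn't the point, but its shape is, so here is a rough sketch; the function name and the `store_id` column are illustrative stand-ins:

```sql
-- A deliberately expensive permission check: it joins tables with no
-- supporting indexes and runs once for every row the scan touches.
CREATE OR REPLACE FUNCTION can_view_inventory_log(p_store_id bigint)
RETURNS boolean
LANGUAGE sql
AS $$
    SELECT EXISTS (
        SELECT 1
        FROM stores s
        JOIN regions r ON r.id = s.region_id
        WHERE s.id = p_store_id
    );
$$;

ALTER TABLE inventory_logs ENABLE ROW LEVEL SECURITY;
-- (The table owner would also need FORCE ROW LEVEL SECURITY to be affected.)

CREATE POLICY inventory_logs_read ON inventory_logs
    FOR SELECT
    USING (can_view_inventory_log(store_id));
```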
Step 4: The Memory Starvation Tactic
Postgres uses a configuration parameter called `work_mem` to determine how much memory can be used for in-memory sort operations and hash tables. The default is a sane 4MB. We decided to be less than sane.
We set the session's work memory to the absolute minimum: `SET work_mem = '64kB';`
Now, the massive sort operation required from Step 1 (sorting 20 million rows without an index) can no longer fit in memory. Postgres is forced to spill the sort to disk, using a much slower external merge sort algorithm. This involves writing and reading multiple temporary files from the disk, a horrendously slow process compared to an in-memory operation.
New Query Time: ~20,000ms (20 seconds).
Slowdown Factor: 40,000x
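Two details worth knowing if you try this at home: the setting is session-scoped, and any spill shows up directly in `EXPLAIN (ANALYZE)` output. A quick sketch:

```sql
-- Session-scoped only; postgresql.conf is untouched.
SET work_mem = '64kB';
SHOW work_mem;   -- confirm the new value

-- Re-run the query under EXPLAIN (ANALYZE) and inspect the Sort node:
--   in memory:  "Sort Method: quicksort  Memory: ...kB"
--   spilled:    "Sort Method: external merge  Disk: ...kB"

RESET work_mem;  -- restore the server default when you're done experimenting
```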
Step 5: The Volatile Vortex
For our final act of sabotage, we wrapped the entire slow query inside a PL/pgSQL function. Normally, this is fine. However, we marked the function as `VOLATILE`.
A `VOLATILE` function tells Postgres that it can have side effects and that its results can change even within a single query scan. This forces the planner to be extremely conservative, preventing it from caching results or making certain optimizations, like moving a subquery. By doing this, we ensure the planner executes our horribly inefficient query in the most literal, step-by-step, painful way possible on every call. This added just enough overhead and planning pessimism to push us over the edge.
Final Query Time: ~21,000ms (21 seconds).
Final Slowdown Factor: 42,000x
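The wrapper itself was unremarkable; a minimal sketch of its shape, with an illustrative function name and signature:

```sql
-- The wrapper that glues every previous mistake together.
CREATE OR REPLACE FUNCTION latest_log_for_sku(p_sku text)
RETURNS inventory_logs
LANGUAGE plpgsql
VOLATILE   -- the deliberate mistake: this read-only function should be STABLE
AS $$
DECLARE
    result inventory_logs;
BEGIN
    SELECT *
    INTO result
    FROM inventory_logs
    WHERE product_id = (SELECT id FROM products WHERE upper(sku) = upper(p_sku))
    ORDER BY log_date DESC
    LIMIT 1;

    RETURN result;
END;
$$;

-- Called like any other function:
SELECT * FROM latest_log_for_sku('SHOE-Z1-RED-11');
```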
The Shocking Results: A Side-by-Side Comparison
Visualizing the degradation makes the impact of each step painfully clear.
| Configuration | Query Time | Slowdown Factor (vs. Baseline) |
|---|---|---|
| Baseline (Fully Optimized) | 0.5 ms | 1x |
| Step 1: No Index | 9,000 ms | 18,000x |
| Step 2: Added Function on Column | 9,200 ms | 18,400x |
| Step 3: Added Inefficient RLS | 15,000 ms | 30,000x |
| Step 4: Starved `work_mem` | 20,000 ms | 40,000x |
| Step 5: Wrapped in `VOLATILE` Function | 21,000 ms | 42,000x |
The Antidote: A Performance Sanity Checklist
How do you avoid this nightmare? By treating performance as a primary concern and following a simple checklist.
- Embrace `EXPLAIN ANALYZE`: This is your single most important tool. Run it on all critical queries. It will show you the query planner's execution plan, revealing full table scans, slow joins, and other problems.
- Index Intelligently: Index the columns used in your `WHERE` clauses, `JOIN` conditions, and `ORDER BY` clauses. Use composite indexes for queries that filter on multiple columns.
- Keep Your `WHERE` Clause Clean: Avoid applying functions to indexed columns. If you need a function-based lookup (like case-insensitivity), create a functional index: `CREATE INDEX ON products (upper(sku));`.
- Audit Row-Level Security: If you use RLS, ensure the functions within your policies are hyper-efficient. An RLS policy is run for every row accessed, so its performance is critical.
- Sensible Configuration: Don't blindly copy-paste configurations. Understand what parameters like `work_mem` and `shared_buffers` do. Use tools like PGTune to get a reasonable starting point for your hardware.
- Understand Function Volatility: Use the correct volatility for your functions. Mark functions as `STABLE` or `IMMUTABLE` if they are; a short sketch follows this list. This gives the query planner much more freedom to optimize.
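To make the volatility point concrete, here is a minimal sketch of a read-only helper marked correctly; the function name and return type are illustrative:

```sql
-- A pure lookup: it only reads data, so STABLE is the honest marking.
-- That lets the planner assume repeated calls within one statement return
-- the same result, and keeps a simple SQL function like this inlinable.
CREATE OR REPLACE FUNCTION product_id_for_sku(p_sku text)
RETURNS bigint
LANGUAGE sql
STABLE
AS $$
    SELECT id FROM products WHERE sku = p_sku;
$$;
```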
Conclusion: Performance is a Feature, Not an Accident
Our journey to a 42,000x slowdown demonstrates a crucial truth: PostgreSQL is incredibly powerful, but it's not magic. It relies on developers and DBAs to provide the right signals—indexes, clean queries, and sane configuration—to do its job effectively.
The anti-patterns we explored are not obscure, academic problems. They are variations of real-world mistakes that can creep into any application. By understanding how performance can be destroyed, we are better equipped to build and maintain systems that are not just functional, but truly fast and scalable.