#postgresql
#performance
#backend
#database
How We Fixed PostgreSQL Performance at 75,000+ Transactions
Real fixes we applied when our PostgreSQL database started slowing down under heavy transactional load
• 5 min read
The Problem
Our internal platform had grown past 75,000 transactions in the database, and queries against the transactions table were taking 10-15 seconds. The app was becoming unusable.
What We Found
Running EXPLAIN ANALYZE on slow queries revealed three main issues:
- Sequential scans on large tables with no indexes
- Read and write queries competing for connections in the same pool
- Knex transactions left open because of missing trx.commit() / trx.rollback() calls
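For reference, a query plan can also be pulled from Node instead of psql. This is a minimal sketch, not our exact production code: the knex setup, the userId value, and the query are placeholders.
const knex = require('knex')({
  client: 'pg',
  connection: process.env.DATABASE_URL
});

const userId = 42; // placeholder value for illustration

// Ask PostgreSQL how it actually executes the slow query
const { rows } = await knex.raw(
  'EXPLAIN ANALYZE SELECT * FROM transactions WHERE user_id = ? AND status = ?',
  [userId, 'pending']
);

// With the pg client, each line of the plan comes back in a "QUERY PLAN" column
rows.forEach((row) => console.log(row['QUERY PLAN']));
A "Seq Scan" node on a large table in that output is the giveaway behind the first finding above.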
Fix 1: Indexing
-- Before: sequential scan on 75k rows
SELECT * FROM transactions WHERE user_id = $1 AND status = $2;
-- Add composite index
CREATE INDEX idx_transactions_user_status
ON transactions(user_id, status);
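We applied the index with raw SQL, but if your schema lives in Knex migrations, the equivalent change looks roughly like this (the migration file name is hypothetical):
// migrations/20240101000000_add_transactions_user_status_index.js
exports.up = function (knex) {
  return knex.schema.alterTable('transactions', (table) => {
    // Composite index matching the WHERE clause: user_id first, then status
    table.index(['user_id', 'status'], 'idx_transactions_user_status');
  });
};

exports.down = function (knex) {
  return knex.schema.alterTable('transactions', (table) => {
    table.dropIndex(['user_id', 'status'], 'idx_transactions_user_status');
  });
};
Column order matters: because user_id comes first, the index also serves queries that filter on user_id alone, as well as the user_id + status lookup above.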
Fix 2: Separate Read and Write Connections
const knex = require('knex');

// Write connection pool
const writePool = knex({
  client: 'pg',
  connection: process.env.DATABASE_WRITE_URL,
  pool: { min: 2, max: 10 }
});

// Read connection pool
const readPool = knex({
  client: 'pg',
  connection: process.env.DATABASE_READ_URL,
  pool: { min: 2, max: 20 }
});
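With two pools in place, queries have to be pointed at the right one. This sketch assumes DATABASE_READ_URL points at a read replica (or a second pool on the same primary); the helper names and table shape are illustrative, not our actual data layer:
// Reads go to the read pool
async function getUserTransactions(userId, status) {
  return readPool('transactions')
    .where({ user_id: userId, status })
    .select('*');
}

// Writes go to the write pool (the primary)
async function markTransactionPaid(id) {
  return writePool('transactions')
    .where({ id })
    .update({ status: 'paid' });
}
Anything that must see its own writes immediately, such as a read inside a transaction, stays on writePool.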
Fix 3: Always Close Knex Transactions
// Wrong — transaction left open on error
const trx = await knex.transaction();
const result = await someQuery(trx);
await trx.commit();
// Correct — always handle errors
const trx = await knex.transaction();
try {
  const result = await someQuery(trx);
  await trx.commit();
} catch (err) {
  await trx.rollback();
  throw err;
}
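Knex also has a callback form of transaction() that commits when the handler resolves and rolls back automatically when it throws, so there is no commit/rollback call to forget. A short sketch, reusing the someQuery placeholder from above:
// Commits when the callback resolves, rolls back if it throws
const result = await knex.transaction(async (trx) => {
  return someQuery(trx);
});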
Results
Query times dropped from 10-15 seconds to under 200ms.