Optimizing what you already have

Or why you should think about cost as a performance metric

PostgreSQL deployments that once ran efficiently often begin to drift over time — not because something broke, but because usage patterns changed. A slow slide toward inefficiency is one of the most common drivers of cost increases.

The symptoms are subtle: queries slow down, backups take longer, CPU usage inches upward. The response is often to scale infrastructure: more vCPUs, higher IOPS, faster disks. But this approach treats symptoms, not root causes.

Observability tools such as Percona Monitoring and Management (PMM) surface patterns you might miss: underused CPU and memory, bloated indexes, and inefficient query plans that increase resource consumption over time. A server that sits at 25–30% CPU utilization for weeks is paying for capacity it never uses, quietly wasting money every hour.
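
Much of this signal comes straight from PostgreSQL's own statistics views, and you can query them directly. As a minimal sketch, assuming the pg_stat_statements extension is enabled, the queries below list indexes that have never been scanned and the statements consuming the most execution time (column names differ slightly before PostgreSQL 13):

```sql
-- Indexes that have never been scanned since statistics were last reset
-- (unique and primary-key indexes are excluded: they enforce constraints
-- and must stay even if they are never used for reads)
SELECT s.schemaname,
       s.relname       AS table_name,
       s.indexrelname  AS index_name,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique
ORDER BY pg_relation_size(s.indexrelid) DESC;

-- Top statements by total execution time (requires pg_stat_statements;
-- on PostgreSQL 12 and older the columns are total_time and mean_time)
SELECT calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       left(query, 80)                    AS query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

Note that idx_scan only counts activity since the last statistics reset, so make sure the observation window covers a representative period before dropping anything.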

Optimization isn’t just about right-sizing — it’s about behavior. One customer had a large analytics workload backed by hundreds of indexes — many of them unused or duplicative. After a targeted review, we eliminated over a third of those indexes, reduced bloat, and saw measurable improvements in query performance and disk usage. In another case, we helped an engineering team rewrite application logic to batch database access. This reduced CPU load by over 40%, with no infrastructure upgrade required.
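
The batching change can be sketched at the SQL level. The table and column names below are hypothetical, not taken from the customer's schema; the point is that one round trip carrying a set of rows replaces many round trips carrying one row each, which is where the CPU and network savings come from:

```sql
-- Before (sketch): the application issues one round trip per id,
-- paying parse, plan, and network overhead N times
SELECT id, status FROM orders WHERE id = 42;
SELECT id, status FROM orders WHERE id = 43;
-- ... repeated for every id the application holds

-- After: one batched round trip for the whole set
SELECT id, status
FROM orders
WHERE id = ANY(ARRAY[42, 43, 44, 45]);

-- The same idea applies to writes: one multi-row INSERT instead of
-- many single-row statements
INSERT INTO order_events (order_id, event)
VALUES (42, 'shipped'),
       (43, 'shipped'),
       (44, 'cancelled');
```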

By thinking about cost as a performance metric — just like latency or uptime — organizations start to build smarter, more efficient systems that avoid the need to scale prematurely.

The real cost of availability and operational risk
