Tales from the Trenches – Postgres Support Calls

July 22, 2014

Postgres users consistently experience the same kinds of challenges at different stages of their database deployment cycle. There are relatively few issues to address during proof-of-concept and development, for example. The vast majority of questions and complications arise during the deployment and maintenance phases. And there is a specific reason for this: Postgres is an easy database to get ready for production, and to write code for.

However, in deployment and maintenance, Postgres behaves a bit differently from SQL Server, Oracle, DB2 and other databases. Development may have been really easy, but deployment and maintenance are where the real DBA skills come into play. A study of our support calls – what our users need, and at what point they raise their hands – tells us when users most encounter challenges.

Tuning, for example, is as much art as it is skill. Doing it early in the lifecycle can save a lot of time, hassle and user friction after the database is deployed. Tuning questions comprise over 20 percent of the support calls that come in from users in the maintenance phase of their Postgres implementation. It also stands to reason that data corruption makes its appearance during the maintenance phase.

And once a problem reaches the support-call stage, it can be a thorny issue to smooth out. Preventative maintenance for Postgres is comparable to getting scheduled service for your car – doing it avoids a visit to the breakdown lane.

Checking Up on Database Health
How to avoid having to make a support call in the first place is revealed by EDB’s Architectural Health Checks – relatively short engagements that examine database performance, scalability, availability, maintenance and tuning, and upgrades. The diagnostic results of nearly two dozen Architectural Health Checks performed in 2013 produced some predictors of later problems. Consider the following:

Backup and Standby
The only thing worse than a poorly performing database is a database that has lost its data. That’s why it’s crucial to understand your Point-In-Time Recovery (PITR) capabilities, and how long it will take you to get back to any point in recent time. A whopping 43% of customers had no PITR capabilities configured, and some had no backup plans whatsoever. But setting up proper protections can be complicated and time-consuming, so it becomes one of those things DBAs plan to set up “tomorrow” – and that day keeps slipping away.
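A quick way to see whether the basics for PITR are even in place is to look at the WAL archiving settings. The following is a minimal sketch, assuming a 9.x-era server; the archive destination shown in the comment is purely illustrative:

  -- Check whether continuous WAL archiving (the foundation of PITR) is configured.
  SELECT name, setting
  FROM pg_settings
  WHERE name IN ('wal_level', 'archive_mode', 'archive_command');

  -- A PITR-ready server would typically show something like:
  --   wal_level       = archive (or hot_standby)
  --   archive_mode    = on
  --   archive_command = 'cp %p /path/to/wal_archive/%f'   -- path is illustrative

If archive_mode is off or archive_command is empty, the server cannot be recovered to an arbitrary point in time, no matter how good the nightly dumps are.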

Storage
All too often, there is a disconnect between DBAs and storage administrators. DBAs are often not well-versed in the underlying storage architecture, whether it’s Direct Attached Storage, a Storage Area Network, a Network File System, etc. I/O behavior is dependent on the storage, so knowing only half of the equation often spells trouble later on.

Bloat
Indexes often become bloated, taking up excessive space. Multiple updates and deletes on a table can leave its indexes fragmented, with many empty or near-empty pages. This can cause unnecessary swapping and degrade performance. Re-indexing rewrites the index without the dead pages. Table bloat occurs when dead rows are not removed frequently enough. Vacuum keeps that bloat from accumulating, but it needs to be configured correctly.
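As a rough sketch of the kind of checks and maintenance involved – the table and index names below are hypothetical, and the autovacuum threshold is an example rather than a recommendation:

  -- How many dead rows has each table accumulated? (statistics collector view)
  SELECT relname, n_live_tup, n_dead_tup
  FROM pg_stat_user_tables
  ORDER BY n_dead_tup DESC
  LIMIT 10;

  -- Rebuild a bloated index so it no longer carries its dead pages.
  REINDEX INDEX orders_customer_id_idx;

  -- Ask autovacuum to visit a heavily updated table more often than the default.
  ALTER TABLE orders SET (autovacuum_vacuum_scale_factor = 0.05);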

Parameter Adjustments
There are some key parameter adjustments you need to know about, including shared_buffers, effective_cache_size, kernel memory settings, the dirty-page background ratio, and more. These parameters control various aspects of the database server’s behavior and environment, and must be calibrated correctly to ensure optimal performance.
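One simple way to review where those knobs currently sit is to query pg_settings; the parameter list below is illustrative rather than exhaustive, and sensible values depend entirely on the workload and the hardware:

  -- Show the current values of a few memory- and checkpoint-related parameters
  -- that frequently need tuning, along with where each value came from.
  SELECT name, setting, unit, source
  FROM pg_settings
  WHERE name IN ('shared_buffers', 'effective_cache_size',
                 'work_mem', 'maintenance_work_mem', 'checkpoint_segments')
  ORDER BY name;

Kernel-level settings such as vm.dirty_background_ratio live outside the database and are inspected with sysctl on the operating system rather than from SQL.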

In working with our end users, we have not only identified the points at which users need to proactively hone their skills or gain new ones, but also developed best practices and a series of support bundles with training and professional consulting to help. Find more information on our Postgres Deployment Services here or contact us.

Debra Brucato is Director, Services Product Management at EnterpriseDB.

 
