Configuration Expert Recommendations v8

Rule: Check shared_buffers
Recommendation: Consider adjusting shared_buffers
Trigger: shared_buffers < (OS == Windows ? 64MB : MIN(0.20 * (system_memory - 256MB), 6GB)) or shared_buffers > (OS == Windows ? 512MB : MAX(0.35 * system_memory, 8GB))
Recommended Value: system_memory < 1GB ? MAX((system_memory - 256MB) / (OS == Windows ? 6 : 3), 64MB) : (OS == Windows ? MIN(system_memory / 8, 256MB) : MIN(system_memory / 4, 8GB))
Severity: Medium

Description: The configuration variable shared_buffers controls the amount of memory reserved by PostgreSQL for its internal buffer cache. Setting this value too low may result in "thrashing" the buffer cache, causing excessive disk activity and degraded performance. However, setting it too high may also cause performance problems. PostgreSQL relies on operating system caching to a significant degree, and setting this value too high may result in excessive "double buffering" that can degrade performance. It also increases the internal costs of managing the buffer pool. On UNIX-like systems, a good starting value is approximately 25% of system memory, but not more than 8GB. On Windows systems, values between 64MB and 512MB typically perform best. The optimal value is workload-dependent, so it may be worthwhile to try several different values and benchmark your system to determine which one delivers the best performance.
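
To make the 25% guideline concrete, here is a minimal postgresql.conf sketch; the 16GB Linux server is a hypothetical assumption, not part of the rule itself:

    # Hypothetical 16GB Linux server: 25% of RAM = 4GB, below the 8GB ceiling.
    # shared_buffers can only be changed by restarting the server.
    shared_buffers = 4GB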

Note: PostgreSQL will fail to start if the necessary amount of shared memory cannot be allocated. This is usually due to an operating system limitation which can be raised by changing a system configuration setting, often called shmall. See the documentation for more details. You must set this limit to a value somewhat higher than the amount of memory required for shared_buffers, because PostgreSQL's shared memory allocation also includes amounts required for other purposes.
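
On Linux, for example, the limits can be raised in /etc/sysctl.conf; the figures below are a sketch sized for the hypothetical 4GB shared_buffers above, assuming a 4kB page size:

    # Allow up to ~4.5GB of shared memory: shmmax is in bytes, shmall in pages.
    kernel.shmmax = 4831838208
    kernel.shmall = 1179648

Running sysctl -p applies the new values without a reboot.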

Rule: Check work_mem
Recommendation: Consider adjusting work_mem
Trigger: given spare_mem = system_memory - (OS == Windows ? 256MB : MAX(0.25 * system_memory, 8GB)), then work_mem < MAX(1MB, spare_mem / 512) or work_mem > (spare_mem / 128)
Recommended Value: given spare_mem as defined on the previous line, then MAX(1MB, spare_mem / 256)
Severity: Medium

Description: The configuration variable work_mem controls the amount of memory PostgreSQL will use for each individual hash or sort operation. When a sort would use more than this amount of memory, the planner will arrange to perform an external sort using disk files. While this algorithm is memory efficient, it is much slower than an in-memory quick sort. Similarly, when a hash join would use more than this amount of memory, the planner will arrange to perform it in multiple batches, which saves memory but is likewise much slower. In either case, the planner may instead choose some other plan that does not require the sort or hash operation, but this too is often less efficient. Therefore, for good performance it is important to set this parameter high enough to allow the planner to choose good plans. However, each concurrently executing query can potentially involve several sorts or hashes, and the number of queries on the system can vary greatly. Therefore, a value for this setting that works well when the system is lightly loaded may result in swapping when the system becomes more heavily loaded. Swapping has very negative effects on database performance and should be avoided, so it is usually wise to set this value somewhat conservatively.
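
Applying the formula above to a hypothetical 16GB non-Windows server (an assumption for illustration only): spare_mem = 16GB - MAX(0.25 * 16GB, 8GB) = 8GB, so the recommended setting is MAX(1MB, 8GB / 256) = 32MB:

    # Hypothetical 16GB server: spare_mem = 8GB, and 8GB / 256 = 32MB.
    work_mem = 32MB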

Note: work_mem can be adjusted for particular databases, users, or user-and-database combinations by using the commands ALTER ROLE and ALTER DATABASE. It can also be changed for a single session using the SET command. This can be helpful when particular queries can be shown to run much faster with a value of work_mem that is too high to be applied to the system as a whole.
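
For instance, the overrides described above look like this (the role, database, and values are hypothetical):

    -- Give a reporting role a larger per-operation memory budget.
    ALTER ROLE reporting SET work_mem = '256MB';
    -- Or override the setting for one database only.
    ALTER DATABASE analytics SET work_mem = '128MB';
    -- Or raise it for the current session just before a known-heavy query.
    SET work_mem = '512MB';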

Rule: Check max_connections
Recommendation: Consider using a connection pooler
Trigger: max_connections > 100
Severity: Medium

Description: The configuration variable max_connections is set to a value greater than 100. PostgreSQL performs best when the number of simultaneous connections is low. Peak throughput is typically achieved when the connection count is limited to approximately twice the number of system CPU cores plus the number of spindles available for disk I/O (in the case of an SSD or other non-rotating media, some experimentation may be needed to determine the "effective spindle count"). Installing a connection pooler, such as pgpool-II or pgbouncer, can allow many clients to be multiplexed onto a smaller number of server connections, sometimes resulting in dramatic performance gains.
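
As a worked example of the sizing rule, a hypothetical server with 8 CPU cores and 4 spindles would target roughly 2 * 8 + 4 = 20 server connections. A minimal pgbouncer sketch along those lines might look like the following; the database name, paths, and port are assumptions:

    ; pgbouncer.ini -- hypothetical minimal setup
    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 500
    default_pool_size = 20

Clients connect to port 6432, and pgbouncer multiplexes them onto at most 20 server connections per database/user pair.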

Rule: Check maintenance_work_mem
Recommendation: Consider adjusting maintenance_work_mem
Trigger: given spare_mem = system_memory - (OS == Windows ? 256MB : MAX(0.25 * system_memory, 8GB)), then maintenance_work_mem < MAX(16MB, spare_mem / 32) or maintenance_work_mem > MIN(spare_mem / 8, 256MB)
Recommended Value: given spare_mem as defined on the previous line, then MIN(spare_mem / 16, 256MB)
Severity: Low

Description: The configuration variable maintenance_work_mem controls the amount of memory PostgreSQL will use for maintenance operations such as CREATE INDEX and VACUUM. Increasing this setting from the default of 16MB to 256MB can make these operations run much faster. Higher settings typically do not produce a significant further improvement. On PostgreSQL 8.3 and higher, multiple autovacuum processes may be running at one time (up to autovacuum_max_workers, which defaults to 3), and each such process will use the amount of dedicated memory dictated by this parameter. This should be kept in mind when setting this parameter, especially on systems with relatively modest amounts of physical memory, so as to avoid swapping. Swapping has very negative effects on database performance and should be avoided. If the value recommended above is less than 256MB, it is chosen with this consideration in mind. However, the optimal value is workload-dependent, so it may be worthwhile to experiment with higher or lower settings.
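
Because maintenance operations are typically run one at a time, the setting can also be raised for just the session performing them; the table and index names below are hypothetical:

    -- Raise the limit for this session only, then build the index.
    SET maintenance_work_mem = '256MB';
    CREATE INDEX orders_customer_idx ON orders (customer_id);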

Rule: Check effective_io_concurrency
Recommendation: Consider adjusting effective_io_concurrency
Trigger: effective_io_concurrency < 2
Severity: Low

Description: If the PostgreSQL data files are located on a RAID array or SSD, effective_io_concurrency should be set to the approximate number of I/O requests that the system can service simultaneously. For RAID arrays, this is typically equal to the number of drives in the array. For SSDs, some experimentation may be needed to determine the most effective value. Setting this parameter to an appropriate value improves the performance of bitmap index scans. The default value of 1 is appropriate for cases where all PostgreSQL data files are located on a single spinning medium.
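
For example (the array size is a hypothetical assumption):

    # Data directory on a hypothetical 8-drive RAID array.
    effective_io_concurrency = 8

Note that this parameter depends on posix_fadvise() support in the operating system and therefore has no effect on some platforms, such as Windows.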

Rule: Check fsync is enabled
Recommendation: Consider configuring fsync = on.
Trigger: fsync = off
Severity: High

Description: When fsync is set to off, a system crash can result in unrecoverable data loss or non-obvious corruption. fsync = off is an appropriate setting only if you are prepared to erase and recreate all of your databases in the event of a system crash or unexpected power outage.

Note: Much of the performance benefit obtained by configuring fsync = off can also be obtained by configuring synchronous_commit = off. However, the latter setting is far safer: in the event of a crash, the last few transactions committed might be lost if they have not yet made it to disk, but the database will not be corrupted.
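
A minimal sketch of the safer configuration described in the note:

    # Keep fsync on for crash safety; trade a little durability for speed instead.
    fsync = on
    synchronous_commit = off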

Rule: Check wal_sync_method
Recommendation: On Windows, consider configuring wal_sync_method = fsync or wal_sync_method = fsync_writethrough.
Trigger: OS == Windows and wal_sync_method not in ('fsync', 'fsync_writethrough')
Severity: High

Description: In order to guarantee reliable crash recovery, PostgreSQL must ensure that the operating system flushes the write-ahead log to disk when asked to do so. On Windows, this can be achieved by setting wal_sync_method to fsync or fsync_writethrough, or by disabling the disk cache on the drive where the write-ahead log is written. (It is safe to leave the disk cache enabled if a battery-backed disk cache is in use.)
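
In postgresql.conf on a Windows server, that might look like:

    # Force WAL writes through the Windows disk cache.
    wal_sync_method = fsync_writethrough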

Note: In cases where the loss of a very recently committed transaction is acceptable, the performance impact of flushing the write-ahead log to disk can be mitigated by setting synchronous_commit = off. In other situations, the use of a battery-backed RAID controller is recommended.

Rule: Check wal_sync_method
Recommendation: On Mac OS X, consider configuring wal_sync_method = fsync_writethrough.
Trigger: OS == Mac OS X and wal_sync_method != fsync_writethrough
Severity: High

Description: In order to guarantee reliable crash recovery, PostgreSQL must ensure that the operating system flushes the write-ahead log to disk when asked to do so. On Mac OS X, this can be achieved by setting wal_sync_method to fsync_writethrough or by disabling the disk cache on the drive where the write-ahead log is written. (It is safe to leave the disk cache enabled if a battery-backed disk cache is in use.)

Note: In cases where the loss of a very recently committed transaction is acceptable, the performance impact of flushing the write-ahead log to disk can be mitigated by setting synchronous_commit = off. In other situations, the use of a battery-backed RAID controller is recommended.

Rule: Check wal_buffers
Recommendation: Consider adjusting wal_buffers
Trigger: wal_buffers < 1MB or wal_buffers > 16MB
Severity: Medium

Description: Increasing the configuration parameter wal_buffers from the default value of 64kB to 1MB or more can reduce the number of times the database must flush the write-ahead log, leading to improved performance under some workloads. There is no benefit to setting this parameter to a value greater than the size of a WAL segment (16MB).
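
For example:

    # One full WAL segment; values above 16MB provide no additional benefit.
    wal_buffers = 16MB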

Rule: Check commit_delay
Recommendation: Consider setting commit_delay = 0.
Trigger: commit_delay != 0
Severity: Low

Description: Setting the commit_delay configuration parameter to a non-zero value causes the system to wait for the specified number of microseconds before flushing the write-ahead log to disk at commit time, potentially allowing several concurrent transactions to commit with a single log flush. In most cases, this does not produce a performance benefit, and in some cases, it can produce a performance regression. Unless you have confirmed through benchmarking that a non-default value for this parameter produces a performance benefit, the default value of 0 is recommended.

Rule: Check checkpoint_segments
Recommendation: Consider adjusting checkpoint_segments.
Trigger: checkpoint_segments < 10 or checkpoint_segments > 300
Severity: Medium

Description: In order to ensure reliable and efficient crash recovery, PostgreSQL periodically writes all dirty buffers to disk. This process is called a checkpoint. Checkpoints occur when (1) the number of write-ahead log segments written since the last checkpoint exceeds checkpoint_segments, (2) the amount of time since the last checkpoint exceeds checkpoint_timeout, (3) the SQL command CHECKPOINT is issued, or (4) the system completes either shutdown or crash recovery. Increasing the value of checkpoint_segments will reduce the frequency of checkpoints and will therefore improve performance, especially during bulk loading. The main downside of increasing checkpoint_segments is that, in the event of a crash, recovery will require a longer period of time to return the database to a consistent state. In addition, increasing checkpoint_segments will increase disk space consumption during periods of heavy system activity. However, because the theoretical limit on the amount of additional disk space that will be consumed for this reason is less than 32MB per additional checkpoint segment, this is often a small price to pay for improved performance.

Values between 30 and 100 are often suitable for modern systems. However, on smaller systems, a value as low as 10 may be appropriate, and on larger systems, a value as high as 300 may be useful. Values outside this range are generally not worthwhile.
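
As a rough worked example, using the usual estimate that the write-ahead log peaks at about (2 + checkpoint_completion_target) * checkpoint_segments + 1 segments of 16MB each (the system size here is a hypothetical assumption):

    # Hypothetical mid-sized system, with checkpoint_completion_target = 0.9:
    # peak WAL disk use ~ (2 + 0.9) * 30 + 1 = 88 segments ~ 1.4GB.
    checkpoint_segments = 30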

Rule: Check checkpoint_completion_target
Recommendation: Consider adjusting checkpoint_completion_target.
Trigger: checkpoint_completion_target != 0.9
Severity: Medium

Description: In order to ensure reliable and efficient crash recovery, PostgreSQL periodically writes all dirty buffers to disk. This process is called a checkpoint. Beginning in PostgreSQL 8.3, checkpoints take place over an extended period of time in order to avoid swamping the I/O system. checkpoint_completion_target controls the rate at which the checkpoint is performed, as a function of the time remaining before the next checkpoint is due to start. A value of 0 indicates that the checkpoint should be performed as quickly as possible, whereas a value of 1 indicates that the checkpoint should complete just as the next checkpoint is scheduled to start. It is usually beneficial to spread the checkpoint out as much as possible; however, if checkpoint_completion_target is set to a value greater than 0.9, unexpected delays near the end of the checkpoint process can cause the checkpoint to fail to complete before the next one needs to start. Because of this, the recommended setting is 0.9.
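
For example, with the default checkpoint_timeout of 5 minutes, this setting spreads each timed checkpoint's writes over roughly 4.5 minutes:

    checkpoint_completion_target = 0.9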

Rule: Check effective_cache_size
Recommendation: Consider adjusting effective_cache_size.
Trigger: effective_cache_size < 0.5 * system_memory or effective_cache_size > MAX(0.9 * system_memory, system_memory - 1GB)
Recommended Value: 0.75 * system_memory
Severity: Medium

Description: When estimating the cost of a nested loop with an inner index-scan, PostgreSQL uses this parameter to estimate the chances that rows from the inner relation which are fetched multiple times will still be in cache when the second fetch occurs. Changing this parameter does not allocate any memory, but an excessively small value may discourage the planner from using indexes that would in fact speed up the query. The recommended value is 75% of system memory.
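
Continuing the hypothetical 16GB server used in earlier examples, 75% of system memory gives:

    # 75% of a hypothetical 16GB of RAM; no memory is actually allocated.
    effective_cache_size = 12GB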

Rule: Check default_statistics_target
Recommendation: Consider adjusting default_statistics_target.
Trigger: default_statistics_target < 25 or default_statistics_target > 400
Recommended Value: 100
Severity: Medium

Description: PostgreSQL uses statistics to generate good query plans. These statistics are gathered either by a manual ANALYZE command or by an automatic analyze launched by the autovacuum daemon, and they include the most common values in each column of each database table, the approximate distribution of the remaining values, the fraction of rows which are NULL, and several other pieces of statistical information.

default_statistics_target indicates the level of detail that should be used in gathering and recording these statistics. A value of 100, which is the default beginning in PostgreSQL 8.4, is reasonable for most workloads. For very simple queries, a smaller value may be useful, while for complex queries, especially against large tables, a higher value may work better. In some cases, it can be helpful to override the default statistics target for specific table columns using ALTER TABLE ... ALTER COLUMN ... SET STATISTICS.
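
For example (the table and column names are hypothetical):

    -- Collect more detailed statistics for one heavily queried column.
    ALTER TABLE orders ALTER COLUMN status SET STATISTICS 500;
    -- The new target takes effect the next time the table is analyzed.
    ANALYZE orders;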

Rule: Check planner methods are enabled
Recommendation: Avoid disabling planner methods.
Trigger: any enable_* GUC is off
Severity: High

Description: The enable_bitmapscan, enable_hashagg, enable_hashjoin, enable_indexscan, enable_material, enable_mergejoin, enable_nestloop, enable_seqscan, enable_sort, and enable_tidscan parameters are intended primarily for debugging and should not be turned off. It can sometimes be helpful to disable one or more of these parameters for a particular query, when there is no other way to obtain the desired plan. However, none of these parameters should ever be turned off on a system-wide basis.
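
When a planner method must be disabled to obtain a particular plan, it can be done for a single query rather than system-wide; a sketch with a hypothetical query:

    BEGIN;
    -- SET LOCAL reverts automatically when the transaction ends.
    SET LOCAL enable_seqscan = off;
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
    ROLLBACK;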

Rule: Check track_counts is enabled
Recommendation: Consider configuring track_counts = on.
Trigger: track_counts = off
Severity: High

Description: Autovacuum will not function properly if track_counts is disabled. Regular vacuuming is crucial to system stability and performance.

Rule: Check autovacuum is enabled
Recommendation: Consider configuring autovacuum = on.
Trigger: autovacuum = off
Severity: High

Description: Enabling autovacuum is an important part of maintaining system stability and performance. Although disabling autovacuum may be useful during bulk loading, it should always be promptly reenabled when bulk loading is completed. Leaving autovacuum disabled for extended periods of time will result in table and index "bloat", where available free space is not reused, resulting in uncontrolled table and index growth. Reversing such bloat requires invasive maintenance using CLUSTER, REINDEX, and/or VACUUM FULL. Allowing autovacuum to work normally is usually sufficient to avoid the need for such maintenance.
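
On PostgreSQL 8.4 and later, a narrower alternative to disabling autovacuum globally during bulk loads is to disable it per table (the table name is hypothetical):

    -- Disable autovacuum for one table for the duration of the load...
    ALTER TABLE staging_events SET (autovacuum_enabled = false);
    -- ...then re-enable it and vacuum manually once the load completes.
    ALTER TABLE staging_events SET (autovacuum_enabled = true);
    VACUUM ANALYZE staging_events;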

Rule: Check configuring seq_page_cost
Recommendation: Consider configuring seq_page_cost <= random_page_cost.
Trigger: seq_page_cost > random_page_cost
Severity: Medium

Description: seq_page_cost and random_page_cost are parameters used by the query planner to determine the optimal plan for each query. seq_page_cost represents the cost of a sequential page read, while random_page_cost represents the cost of a random page read. While these costs might be equal if, for example, the database is fully cached in RAM, the sequential cost can never be higher than the random cost. The PostgreSQL query planner will produce poor plans if seq_page_cost is set higher than random_page_cost.

Rule: Check reducing random_page_cost
Recommendation: Consider reducing random_page_cost to no more than twice seq_page_cost.
Trigger: random_page_cost > 2 * seq_page_cost
Severity: Low

Description: seq_page_cost and random_page_cost are parameters used by the query planner to determine the optimal plan for each query. seq_page_cost represents the cost of a sequential page read, while random_page_cost represents the cost of a random page read. random_page_cost should always be greater than or equal to seq_page_cost, but it is rarely beneficial to set random_page_cost to a value more than twice seq_page_cost. However, the correct values for these variables are workload-dependent. If the database's working set is much larger than physical memory and the blocks needed to execute a query will rarely be in cache, setting random_page_cost to a value greater than twice seq_page_cost may maximize performance.
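
For example (the caching assumption is hypothetical):

    # Hypothetical server whose working set is mostly cached in RAM:
    seq_page_cost = 1.0
    random_page_cost = 1.5   # no more than twice seq_page_cost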

Rule: Check increasing seq_page_cost
Recommendation: Consider increasing seq_page_cost.
Trigger: seq_page_cost < cpu_tuple_cost, seq_page_cost < cpu_index_tuple_cost, or seq_page_cost < cpu_operator_cost
Severity: Medium

Description: The cost of reading a page into the buffer cache, even if it is already resident in the operating system buffer cache, is rarely less than the cost of a CPU operation. Thus, the value of the configuration parameter seq_page_cost should usually be greater than the values of the configuration parameters cpu_tuple_cost, cpu_index_tuple_cost, and cpu_operator_cost.
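
For reference, PostgreSQL's stock defaults already respect this ordering, and custom settings should generally preserve it:

    # Default cost settings:
    seq_page_cost = 1.0
    cpu_tuple_cost = 0.01
    cpu_index_tuple_cost = 0.005
    cpu_operator_cost = 0.0025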