Comprehensive guide on how to tune database parameters and configuration in PostgreSQL 

January 19, 2023

SUMMARY: This article discusses parameters that can be used to configure performance in PostgreSQL. After introducing ways of measuring database speed, it reviews the parameters and ways to optimize performance with them.

1. shared_buffers

2. wal_buffers

3. effective_cache_size

4. work_mem

5. maintenance_work_mem

6. synchronous_commit

7. max_connections

8. max_prepared_transactions

9. max_worker_processes

10. max_parallel_workers_per_gather

11. max_parallel_workers

12. effective_io_concurrency

13. random_page_cost

14. min_ and max_wal_size

15. max_fsm_pages


PostgreSQL is a very versatile database system, capable of running efficiently in low-resource environments and in environments shared with a variety of other applications. In order to ensure it will run properly for many different environments, the default configuration is very conservative and not terribly appropriate for a high-performance production database. Add the fact that geospatial databases have different usage patterns, and the data tend to consist of fewer, much larger records than non-geospatial databases, and you can see that the default configuration will not be totally appropriate for our purposes.

I think we can agree that everyone wants a fast database. The question is: fast in what respect?

In terms of databases, there are at least two different directions of “fast”:

1) The number of transactions per second

2) Throughput or amount of data processing

These are interrelated but definitely not the same, and they have quite different requirements in terms of I/O. In general, you want to avoid I/O wherever possible, because I/O is always slow compared with access to data in memory, in the CPU caches of different levels, or in CPU registers. As a rough rule of thumb, each step down that hierarchy costs orders of magnitude in access latency.

For a system that must sustain a large number of transactions per second, you need as many concurrent I/O operations as you can get. For a system with high throughput, you need an I/O subsystem that can deliver as many bytes per second as possible.

That leads to the requirement to have as much data as possible near to the CPU—e.g., in RAM. At least the working set, which is the set of data that is needed to give answers in an acceptable amount of time, should fit.

Each database engine has a specific memory layout and handles different memory areas for different purposes.

To recap: we need to avoid I/O, and we need to size the memory layout so that the database is able to work efficiently (assuming, of course, that everything else, such as proper schema design, is already taken care of).

All of these configuration parameters can be edited in the postgresql.conf database configuration file. This is a regular text file and can be edited with any text editor. Some changes take effect after a configuration reload, while others (such as shared_buffers or max_connections) require a full server restart.
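As an illustration, here is how a few of the settings discussed below might look in postgresql.conf on a dedicated server with 8GB of RAM. The figures are assumptions chosen for the example, not recommendations:

shared_buffers = 2GB            # about 25% of RAM; requires a restart
effective_cache_size = 6GB      # planner estimate only; no memory is allocated
work_mem = 32MB                 # allocated per sort/hash operation
maintenance_work_mem = 256MB    # used by VACUUM, CREATE INDEX, etc.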

Note: These values are recommendations only. Each environment will differ, and testing is required to determine the optimal configuration. But this section should get you off to a good start.


Database parameters

Here are some parameters that can be tuned to have optimal performance depending on your system and workload. 



shared_buffers

The PostgreSQL buffer cache is controlled by shared_buffers, the most effective tunable parameter for most operating systems. This parameter sets how much dedicated memory PostgreSQL will use for its cache.

The default value of shared_buffers is set very low, and you will not get much benefit from it. It is set low because some machines and operating systems do not support higher values, but on most modern hardware you need to increase this value for optimal performance.

If you have a dedicated database server with 1GB or more of RAM, a reasonable starting value for shared_buffers is 25% of the memory in your system. You should try some lower and higher values, because in some cases to achieve optimal performance you need a setting over 25%. The useful range for shared_buffers on Windows systems is generally from 64MB to 512MB. The configuration depends on your machine, the working data set, and the workload on your machine.

In production environments, a larger shared_buffers value often gives very good performance, though you should always benchmark to find the right balance.

Check the current shared_buffers value:

edb=# SHOW shared_buffers;
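To change the value without editing postgresql.conf by hand, you can also use ALTER SYSTEM, which writes the setting to postgresql.auto.conf. The 2GB figure below is an illustrative assumption for a dedicated 8GB server, and a restart is still required for shared_buffers to change:

edb=# ALTER SYSTEM SET shared_buffers = '2GB';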






wal_buffers

PostgreSQL first writes its WAL (write-ahead log) records into in-memory buffers, and then these buffers are flushed to disk. The size of these buffers is set by wal_buffers; by default it is sized automatically at 1/32 of shared_buffers, capped at 16MB. If you have a number of concurrent connections writing at once, a higher value can give better performance.
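For example, to check the current value and raise it explicitly (the 64MB figure is an assumption for a busy, write-heavy server; a restart is required for this setting):

edb=# SHOW wal_buffers;
edb=# ALTER SYSTEM SET wal_buffers = '64MB';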



effective_cache_size

The effective_cache_size parameter provides an estimate of the memory available for disk caching. It is just a guideline, not an exact allocation or cache size; it does not allocate actual memory but tells the optimizer how much cache is likely to be available in the kernel. This is factored into estimates of the cost of using an index: a higher value makes it more likely index scans will be used, while a lower value makes it more likely sequential scans will be used. If the value is set too low, the query planner may decide not to use some indexes, even if they would be helpful. You should also account for the expected number of concurrent queries on different tables, since they will have to share the available space. Because no memory is actually allocated, a generous value is generally safe on a dedicated database server.

Let us take a look at some practical implications of effective_cache_size.


Set effective_cache_size = 1MB

edb=# SET effective_cache_size TO '1 MB';

edb=# explain SELECT * FROM bar ORDER BY id LIMIT 10;



 Limit  (cost=0.00..39.97 rows=10 width=4)

   ->  Index Only Scan using idx_in on bar  (cost=0.00..9992553.14 rows=2500000 width=4)

(2 rows)


As you can see, the costs of this query are estimated at 39.97 penalty points.

What happens if we change effective_cache_size to an insanely high value?

Set effective_cache_size = 10000MB

edb=# SET effective_cache_size TO '10000 MB';

edb=# explain SELECT * FROM bar ORDER BY id LIMIT 10;



 Limit  (cost=0.00..0.44 rows=10 width=4)

   ->  Index Only Scan using idx_in on bar  (cost=0.00..109180.31 rows=2500000 width=4)

(2 rows)


As you can see, the estimated costs drop dramatically. This makes sense: with effective_cache_size at only 1MB, the planner assumes the kernel will cache essentially no data, so it charges heavily for random I/O; with a large value, it expects the cache hit rate on the kernel side to go up dramatically and prices index access much more cheaply. Random I/O is the most expensive thing, so changing this cost parameter has a serious impact on what the planner believes. Just imagine a more complex query, where different cost estimates can lead to totally different plans.



work_mem

This setting is used for complex sorting. If you have complex sorting to do, increase the value of work_mem for good results: in-memory sorts are much faster than sorts that spill to disk. However, work_mem is allocated per sort operation, not per server, so setting a very high value can cause a memory bottleneck in your deployment environment. If many users execute sort operations concurrently, the system may allocate roughly work_mem multiplied by the total number of concurrent sort operations. Setting this parameter globally can therefore cause very high memory usage, so it is highly recommended to raise it at the session level instead.


Set work_mem = 2MB

edb=# SET work_mem TO '2MB';


edb=# EXPLAIN SELECT * FROM bar ORDER BY b;

                                    QUERY PLAN                                     


Gather Merge  (cost=509181.84..1706542.14 rows=10000116 width=24)

   Workers Planned: 4

   ->  Sort  (cost=508181.79..514431.86 rows=2500029 width=24)

         Sort Key: b

         ->  Parallel Seq Scan on bar  (cost=0.00..88695.29 rows=2500029 width=24)

(5 rows)

The initial query’s sort node has an estimated cost of 514431.86. A cost is an arbitrary unit of computation. For the above query, we had a work_mem of only 2MB. For testing purposes, let’s increase this to 256MB and see if there is any impact on cost.

Set work_mem = 256MB

edb=# SET work_mem TO '256MB';


edb=# EXPLAIN SELECT * FROM bar ORDER BY b;

                                    QUERY PLAN                                    


Gather Merge  (cost=355367.34..1552727.64 rows=10000116 width=24)

   Workers Planned: 4

   ->  Sort  (cost=354367.29..360617.36 rows=2500029 width=24)

         Sort Key: b

         ->  Parallel Seq Scan on bar  (cost=0.00..88695.29 rows=2500029 width=24)


The query cost is reduced from 514431.86 to 360617.36 — a 30% reduction.



maintenance_work_mem

The maintenance_work_mem parameter sets the memory used for maintenance tasks. The default value is 64MB. A larger value helps operations like VACUUM, RESTORE, CREATE INDEX, ADD FOREIGN KEY, and ALTER TABLE.


Set maintenance_work_mem = 10MB


edb=# SET maintenance_work_mem to '10MB';

edb=# CREATE INDEX foo_idx ON foo (c);


Time: 170091.371 ms (02:50.091)

Set maintenance_work_mem = 256MB


edb=# set maintenance_work_mem to '256MB';

edb=# CREATE INDEX foo_idx ON foo (c);


Time: 111274.903 ms (01:51.275)


Index creation takes 170091.371 ms when maintenance_work_mem is set to only 10MB, but drops to 111274.903 ms when the setting is increased to 256MB.



synchronous_commit

This parameter enforces that a commit waits for its WAL to be written to disk before returning a success status to the client. It is a trade-off between performance and reliability. If your application is designed so that performance is more important than reliability, turn synchronous_commit off. With synchronous_commit off, there is a time gap between the success status and the guaranteed write to disk, so in the case of a server crash, data might be lost even though the client received a success message on commit. Transactions commit very quickly because they do not wait for the WAL to be flushed, but durability is compromised.
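Because synchronous_commit can be changed per session, an application that can tolerate losing its last few transactions on a crash, such as a bulk data load, might disable it for that session only, leaving the server default untouched:

edb=# SET synchronous_commit TO off;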



max_connections

This parameter does just what you think: it sets the maximum number of concurrent connections. If you reach the limit, you will not be able to open new connections to the server. Every connection occupies resources, so the number should not be set too high. If you have long-running sessions, you probably need a higher value than if the sessions are mostly short-lived. Keep this setting aligned with your connection-pooler configuration.
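Before tuning this, it helps to compare the limit against actual usage; for example:

edb=# SHOW max_connections;
edb=# SELECT count(*) FROM pg_stat_activity;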



max_prepared_transactions

If you use prepared (two-phase) transactions, set this parameter at least equal to max_connections, so that every connection can have at least one prepared transaction pending. Consult the documentation for your preferred ORM (Object-Relational Mapper) to see whether it has special requirements here.
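A sketch of this in postgresql.conf, following the advice above (the figure assumes max_connections = 200; a restart is required, and the parameter should stay at its default of 0 if you do not use two-phase commit):

max_prepared_transactions = 200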



max_worker_processes

Set this to the number of CPUs you want to dedicate to PostgreSQL. It is the maximum number of background processes the database engine can use. Changing this parameter requires a server restart. The default is 8.

When running a standby server, you must set this parameter to the same value as on the primary server or higher. Otherwise, queries will not be allowed on the standby server.



max_parallel_workers_per_gather

This sets the maximum number of workers a single Gather or Gather Merge node can use; it cannot usefully exceed max_parallel_workers. When a Gather node is reached during query execution, the process implementing the user's session requests a number of background worker processes equal to the number of workers chosen by the planner, which will consider using at most max_parallel_workers_per_gather workers. The total number of background workers that can exist at any one time is limited by both max_worker_processes and max_parallel_workers, so it is possible for a parallel query to run with fewer workers than planned, or even with no workers at all. Since the optimal plan may depend on the number of workers that are available, this can result in poor query performance.

If this happens frequently, consider increasing max_worker_processes and max_parallel_workers so that more workers can run simultaneously, or alternatively reducing max_parallel_workers_per_gather so that the planner requests fewer workers.



max_parallel_workers

This sets the maximum number of worker processes available for parallel queries. As with max_worker_processes, the default value is 8.

Note that a setting for this value that is higher than max_worker_processes will have no effect, since parallel workers are taken from the pool of worker processes established by that setting.
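Putting the three worker settings together, a sketch for a dedicated 16-core server might look like this in postgresql.conf. The figures are illustrative assumptions, not recommendations:

max_worker_processes = 16            # background processes; requires a restart
max_parallel_workers = 16            # taken from the worker-process pool above
max_parallel_workers_per_gather = 4  # per Gather/Gather Merge node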



effective_io_concurrency

This sets the number of concurrent I/O operations your storage subsystem can actually sustain. As a starting point: with plain HDDs try 2, with SSDs go for 200, and if you have a potent SAN you can start with 300.



random_page_cost

This factor tells the PostgreSQL query planner how much more (or less) expensive it is to access a random page than to do a sequential access. With SSDs or potent SANs this matters less than it did in the era of traditional hard disk drives. For SSDs or SANs, start with 1.1; for plain old spinning disks, set it to 4.
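For a server on SSD storage, the two I/O-related settings above might be set together like this in postgresql.conf, using the starting points from the text:

effective_io_concurrency = 200
random_page_cost = 1.1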


min_ and max_wal_size

These settings put size boundaries on PostgreSQL's transaction log (WAL). Essentially, max_wal_size is the amount of WAL data that can be written before a checkpoint is triggered, which in turn syncs the in-memory data with the on-disk data.
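A sketch for a write-heavy server in postgresql.conf (illustrative values; a larger max_wal_size means less frequent checkpoints at the price of more disk space and longer crash recovery):

min_wal_size = 1GB
max_wal_size = 4GB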



max_fsm_pages

This option helps control the free space map. When something is deleted from a table, it is not removed from the disk immediately; it is simply marked as "free" in the free space map, and this space can then be reused by new INSERTs on the table. If your setup has a high rate of DELETEs and INSERTs, it may be necessary to increase this value to avoid table bloat. Note that this parameter only exists in PostgreSQL 8.3 and earlier; since PostgreSQL 8.4 the free space map is managed automatically and no longer needs tuning.

