Configure and Tune EPAS and PostgreSQL Database Servers on Linux

August 01, 2024
Optimize your database servers with these configuration and tuning recommendations for EDB Postgres Advanced Server (EPAS) and PostgreSQL on Linux.

This tuning guide for EDB Postgres Advanced Server (EPAS) and PostgreSQL is based on the original version (2020) by Dave Page, Devrim Gündüz, Shaun Thomas, Stacy Scoggins, Vibhor Kumar, and Vik Fearing.

The updated version (2024) is by Laetitia Avrot and Vibhor Kumar.

These recommendations are a starting point; benchmarks and other measurements are also required. This is not an exhaustive list of configuration settings but a list of the most critical parameter settings that are not already the default.

Engage with EDB’s Professional Services team or a qualified EDB partner for optimal tuning.

Designing a Robust PostgreSQL Server

The following focuses on bare metal and virtualized machines (VMs). Future versions may include cloud and containerized designs.

Bare metal

Several factors must be considered when designing a bare metal server for PostgreSQL. These include CPU, RAM, disk, and in a few cases, the network card.

CPU

Choosing the right CPU is crucial for PostgreSQL performance. CPU speed matters when dealing with larger data sets, and CPUs with larger L3 caches boost performance. For OLTP workloads, more and faster cores help the operating system and PostgreSQL work more efficiently.

CPUs have at least two caches: L1 (primary cache) and L2 (secondary cache). L1 is the smallest and fastest, embedded into the CPU core. L2 is slower than L1 but also larger. L2 is used to feed L1.

Unlike L1 and L2, which are unique to each core, L3 is shared across all cores. L3 is slower than L1 and L2 but still faster than RAM. A larger L3 cache boosts CPU performance when dealing with larger data sets, which also benefits PostgreSQL parallel queries.

RAM

RAM is the cheapest hardware upgrade and one of the most effective for PostgreSQL performance. Operating systems use available memory to cache as much data as possible. More caching results in less disk I/O and faster query times.

When buying new hardware, add as much RAM as possible. Adding more RAM later is more expensive both financially and technically (it requires downtime unless you have hot-swappable RAM). Multiple PostgreSQL parameters, mentioned below, are set based on available memory.

Disk

If the application is I/O-bound (read- and/or write-intensive), choosing a faster drive set will improve performance significantly. There are multiple solutions available, including NVMe and SSD drives.

First, separate the WAL disk from the data disk. WAL can become a bottleneck in write-intensive databases, so keep it on a separate, fast drive. Always use at least RAID 1, and consider RAID 10 for write-heavy databases.

Using separate tablespaces and drives for indexes and data will also increase performance, especially if PostgreSQL runs on SATA drives. This is usually optional for SSD and NVMe drives. Use RAID 10 for data.

Network card

Although network cards seem irrelevant to PostgreSQL performance, when the data grows significantly, faster or bonded network cards will speed up base backups.

Virtual machines

VMs have a slight performance deficit compared to bare metal servers due to their virtualization layer. The available CPU and disk I/O will also decrease due to shared resources.

To get better PostgreSQL performance in VMs, pin the VM to specific CPUs and disks. That will eliminate or limit the performance bottleneck from other VMs running on the host machine.

Pre-allocate disks before installation to prevent the host from allocating disk space during database operations. If you can’t, you can change these two parameters in postgresql.conf:

Disable the wal_recycle parameter in postgresql.conf. By default, PostgreSQL recycles WAL files by renaming them. On copy-on-write (COW) filesystems, however, creating new WAL files may be faster, so disabling this parameter helps.

Disable the wal_init_zero parameter in postgresql.conf. By default, WAL space is allocated before WAL records are inserted, which slows down WAL operations on COW filesystems. When this parameter is set to "off", only the final byte is written at file creation so the file still has the expected size, which aids VM performance. This is only effective if disk space is pre-allocated.
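
In postgresql.conf, the two settings above look like this (both default to on; turn them off only on COW filesystems such as ZFS or Btrfs):

wal_recycle = off      # create new WAL files rather than renaming old ones
wal_init_zero = off    # write only the final byte at WAL file creation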

Operating System Tips for EPAS and PostgreSQL Server

Mount points

Have separate mount points with dedicated IOPS for the EPAS/PG cluster. For better performance, use SSDs.

1. /pgdata/16: Mount point for data directory of EPAS/PG

PGDATA directory: /pgdata/16/data

2. /pgwaldata/16: Mount point for WAL (Write Ahead Log).

WAL directory for EPAS/PG: /pgwaldata/16/wal

Depending on the usage and number of indexes needed for user-defined tables, consider a separate mount point for indexes per workload. Keep the PostgreSQL log directory, which is PGDATA/log by default, on a separate mount point with correct permissions so it is available to other processes or users.
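
As a sketch, the layout above can be created as follows (enterprisedb is the EPAS service user; substitute postgres for community PostgreSQL):

mkdir -p /pgdata/16/data /pgwaldata/16/wal
chown -R enterprisedb:enterprisedb /pgdata /pgwaldata
chmod 700 /pgdata/16/data /pgwaldata/16/wal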

Filesystem

Use the XFS filesystem for PG/EPAS data directory, WAL, and other mount points hosting PG/EPAS data.

atime

PostgreSQL does not rely on atime (the timestamp of when a file was last accessed) for data files, so disabling atime updates saves CPU cycles.

Open /etc/fstab and add noatime next to the defaults option for the drive where PostgreSQL data and WAL files are kept.

/dev/mapper/pgdata-01-data /pgdata xfs defaults,noatime 1 1

To activate it immediately, run:

mount -o remount,noatime,nodiratime /pgdata

These suggestions are suitable for a start. Monitor the operating system and PostgreSQL to gather more data for finer tuning.

Read-ahead

Increasing disk read-ahead improves I/O throughput by reducing the number of requests to disk. It typically defaults to 128 kB; set it to 4,096 kB instead on the disk hosting the database or tablespaces.
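
To check or change the value by hand, the blockdev utility works in 512-byte sectors, so 4,096 kB corresponds to 8,192 sectors (/dev/sda below is only an example device):

sudo blockdev --getra /dev/sda          # current read-ahead, in 512-byte sectors
sudo blockdev --setra 8192 /dev/sda     # 8192 sectors x 512 bytes = 4,096 kB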

As shown below, the read-ahead parameter can also be set with the tuned daemon.

I/O scheduler

Use the deadline I/O scheduler on CentOS/RHEL 7 and mq-deadline on Rocky/RHEL 8 for spinning disks. For systems with solid-state storage or a dedicated I/O controller with its own reordering mechanism, use noop on RHEL 7 and none on RHEL 8.
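
To inspect and switch the scheduler at runtime (the active scheduler appears in brackets; /dev/sda is an example, and the change does not survive a reboot without a udev rule or the tuned profile below):

cat /sys/block/sda/queue/scheduler                    # e.g. [mq-deadline] kyber bfq none
echo none | sudo tee /sys/block/sda/queue/scheduler   # SSD/NVMe on RHEL 8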

Dirty memory

Current Linux systems define dirty memory limits as a ratio of total RAM. Once more memory than this has been modified without being committed to disk, the system enters synchronous write mode and blocks all I/O requests unrelated to flushing the dirty memory to disk. Such events are highly disruptive, and due to database write access patterns, they become more likely as volume increases.

In modern hardware and some VMs, even 1% of system RAM may be more data than a backing storage system can absorb without significant I/O waits. This is why modern Linux kernels can set dirty memory in bytes instead of a ratio. The recommended setting should be around the cache size of the underlying storage device or controller. If those values are unknown, 1GB is suitable.

Current Linux systems also tend to write dirty memory to disk in the background only after it has crossed a threshold dictated by vm.dirty_background_ratio. The default for this setting varies but is commonly 5-10% of RAM and no lower than 1%.

Modern kernels can also set the background write threshold in bytes with vm.dirty_background_bytes. Here it's common to use a value 1/4 the size of the dirty bytes allocation. This allows the kernel to trickle data to the disk subsystem long before dirty memory reaches the value specified in vm.dirty_bytes, and it prevents the excessive background writing that too small a value would cause.

Most storage subsystem caches aren’t exceedingly large, at 1 to 4GB, so encouraging background writes while avoiding storage activity overhead is a delicate balancing act. Setting dirty_background_bytes to 1/4 of dirty_bytes is a starting point that can be adjusted later depending on the environment. When larger caches are available, dirty_background_bytes may be a smaller proportion of dirty_bytes.
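
For a quick test before committing to the tuned profile below, the same values can be applied at runtime with sysctl (1GB for vm.dirty_bytes and 1/4 of that for the background threshold, per the guidance above):

sudo sysctl -w vm.dirty_bytes=$((1024*1024*1024))            # 1 GB
sudo sysctl -w vm.dirty_background_bytes=$((256*1024*1024))  # 256 MB, 1/4 of dirty_bytes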

Other operating system parameters

Below are EDB’s suggestions for tuning a Linux OS for EPAS/PostgreSQL using the above guidelines:

mkdir -p /etc/tuned/edb

export DIRTY_BYTES=$((1024*1024*1024))
export DIRTY_BG=$((${DIRTY_BYTES}/4))
export MEM_TOTAL=$(grep MemTotal /proc/meminfo | awk '{print $2}')

cat <<EOF > /etc/tuned/edb/tuned.conf
[main]
summary=Tuned profiles for EnterpriseDB Postgres Advanced Server
[cpu]
governor=performance
energy_perf_bias=performance
min_perf_pct=100
[disk]
readahead=4096
[sysctl]
vm.overcommit_memory=2
vm.overcommit_kbytes=${MEM_TOTAL}
vm.swappiness=1
vm.dirty_bytes=${DIRTY_BYTES}
vm.dirty_background_bytes=${DIRTY_BG}
[vm]
transparent_hugepages=never
EOF

systemctl enable --now tuned
tuned-adm profile edb

Initializing the EPAS/PG cluster

Begin with this initdb command to bootstrap the EPAS PGDATA directory:

export PGSETUP_INITDB_OPTIONS="-X /pgwaldata/16/wal --pgdata=/pgdata/16/data"
/usr/edb/as16/bin/edb-as16-setup initdb

Use systemctl to extend the service file:

systemctl edit edb-as-16

Then add the following file contents in the edit window:

[Service]
Environment=PGDATA=/pgdata/16/data

Save and exit, then enable and start the edb-as service:

systemctl enable --now edb-as-16

An alternative best practice is to create a symbolic link for PGDATA at the default data directory location of the PostgreSQL installation, without modifying the systemd service file:

sudo -u enterprisedb rmdir /var/lib/edb/as16/data
sudo -u enterprisedb ln -s /pgdata/16/data /var/lib/edb/as16/data
systemctl enable --now edb-as-16

Configuring and Authenticating Database Servers

max_connections

The optimal number for max_connections is roughly four times the number of CPU cores. Since this is often a tiny number that isn't practical in most real-world cases, use the formula GREATEST(4 x CPU cores, 100) instead. Beyond this number, use a connection pooler such as PgBouncer.

If you don't need that many connections, reduce the value of this parameter.
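
As a sketch, the GREATEST(4 x CPU cores, 100) formula above can be computed directly on the target host (nproc reports the available cores):

CORES=$(nproc)
MAX_CONN=$(( 4 * CORES > 100 ? 4 * CORES : 100 ))
echo "max_connections = ${MAX_CONN}"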

password_encryption

Use scram-sha-256 for this parameter.

Resource Usage for PostgreSQL on Linux

shared_buffers

This parameter has the most variance. Some workloads work best with tiny values like 1 or 2GB, even with huge database volumes, while other workloads require large values. A reasonable starting point is Total RAM / 4, with a more nuanced version that accounts for RAM variance in the following pseudo-code:

base = RAM / 4

if RAM < 3 GB:
    base = base * 0.5
else if RAM < 8 GB:
    base = base * 0.75
else if RAM > 64 GB:
    base = greatest(16 GB, RAM / 6)

shared_buffers = least(base, 64 GB)

This accounts for lower-memory systems having less tolerance for large amounts of memory dedicated to Postgres shared buffers while still handling user sessions. Servers with much more RAM can absorb proportionally larger shared buffer settings. Beyond 64GB, however, there are diminishing returns due to the overhead of maintaining a large contiguous memory allocation.
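
As a minimal bash sketch of the pseudo-code above, reading total RAM from /proc/meminfo and printing a shared_buffers value in megabytes:

RAM_MB=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
BASE=$(( RAM_MB / 4 ))
if   [ "$RAM_MB" -lt 3072 ];  then BASE=$(( BASE / 2 ))       # RAM < 3 GB
elif [ "$RAM_MB" -lt 8192 ];  then BASE=$(( BASE * 3 / 4 ))   # RAM < 8 GB
elif [ "$RAM_MB" -gt 65536 ]; then                            # RAM > 64 GB
  SIXTH=$(( RAM_MB / 6 ))
  BASE=$(( SIXTH > 16384 ? SIXTH : 16384 ))                   # greatest(16 GB, RAM / 6)
fi
SB_MB=$(( BASE < 65536 ? BASE : 65536 ))                      # least(base, 64 GB)
echo "shared_buffers = ${SB_MB}MB"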

work_mem

The recommended starting point for work_mem is ((Total RAM - shared_buffers) / (16 x CPU cores)).
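
For example, a server with 64GB of RAM, 16GB of shared_buffers, and 16 CPU cores yields (64GB - 16GB) / (16 x 16) = 192MB.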

maintenance_work_mem

This determines the maximum memory used for maintenance operations like VACUUM, CREATE INDEX, ALTER TABLE ADD FOREIGN KEY, and data-loading operations. These may increase the I/O load on database servers performing such activities, so allocating more memory may let these operations finish faster. A calculated value of 15% x (Total RAM - shared_buffers) / autovacuum_max_workers, up to 1GB, is a good start.
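
For example, with 64GB of RAM, 16GB of shared_buffers, and five autovacuum workers, 15% x 48GB / 5 ≈ 1.4GB, which the 1GB ceiling then caps at 1GB.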

effective_io_concurrency

This parameter is used for read-ahead on specific operations and should be set to the number of disks storing the data. However, improvements have been seen when using a multiple of that number. For solid-state disks, set this value to 200.

Parameters for Postgres Server Write-Ahead Logs

wal_compression

When this parameter is on, the PostgreSQL server compresses full-page images written to WAL when full_page_writes is on or during a base backup. Set this parameter to "on".

wal_log_hints

This parameter is required to use pg_rewind. Set it to "on". If the Postgres instance has data checksums enabled, this setting is implicitly always active.

wal_buffers

This controls the amount of memory available for backends to place WAL data in before it is synced to disk. WAL segments are 16MB each by default, so buffering a segment is inexpensive memory-wise, and larger buffer sizes can have a noticeably positive effect on performance. Set this parameter to 64MB.

checkpoint_timeout

Longer timeouts reduce WAL volume but make crash recovery slower. The recommended value is a minimum of 15 minutes, but ultimately the RPO of business requirements dictates what this should be.

checkpoint_completion_target

This determines the amount of time in which PostgreSQL aims to complete a checkpoint, so a checkpoint doesn't need to cause an I/O spike; the writes are instead spread over a period of time. The recommended value is 0.9, which is also the default in PostgreSQL 14 and later.

max_wal_size

This parameter can be difficult to set as its optimal value is a function of checkpoint_timeout and the maximum WAL throughput of the system. However, if set too small, it can dramatically reduce performance, so tune it. The risk of setting it too high is that you run out of disk space, but this otherwise won’t hurt performance. Note that max_wal_size is a soft limit.

If you have lots of disk space available for WAL (hundreds of gigabytes or more), set it high; on a very high-performance system, 200GB or higher is reasonable. If your disk space is constrained, set max_wal_size as high as you can while leaving enough headroom to avoid running out of space. If your WAL is on a dedicated disk partition (always recommended), this may be 50-75% of the partition size.

To more precisely tune max_wal_size, monitor the checkpoints_timed and checkpoints_req values in the pg_stat_bgwriter view. If max_wal_size is too small, the ratio of requested checkpoints to timed checkpoints will rise, indicating that checkpoints are occurring because there is insufficient space to hold the WAL segments required to reach the next timed checkpoint.

Having some requested checkpoints is fine due to occasional activity spikes, but this should not be the norm. Increase max_wal_size as needed to minimize the number of requested checkpoints without exhausting available disk space.
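
On PostgreSQL 16 and earlier, both counters live in the pg_stat_bgwriter view and can be sampled with psql:

psql -c "SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;"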

archive_mode

Because changing this requires a restart, it should be set to "on".

archive_command

A valid archive_command is required if archive_mode is on. Until archiving is ready to be configured, use a dummy command such as ": to be configured" on a POSIX system; the : utility succeeds while ignoring its arguments.
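
Gathered together, the WAL recommendations in this section form the following postgresql.conf sketch (the max_wal_size value assumes a dedicated, generously sized WAL partition; size it to your own storage):

wal_compression = on
wal_log_hints = on
wal_buffers = 64MB
checkpoint_timeout = 15min
checkpoint_completion_target = 0.9
max_wal_size = 200GB                      # very high-performance system; 50-75% of the WAL partition
archive_mode = on
archive_command = ': to be configured'    # succeeds without archiving anything yet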

Tuning PostgreSQL Server Queries

random_page_cost

If using SSD disks, the recommended value is 1.1.

effective_cache_size

This should be the sum of shared_buffers and the Linux buffer cache (as seen in the buff/cache column of the free command).
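
As a rough sketch, the value can be derived on a running system (the 16GB shared_buffers figure below is only an assumed example):

CACHE_GB=$(free -g | awk '/^Mem:/ {print $6}')   # the buff/cache column
echo "effective_cache_size = $(( 16 + CACHE_GB ))GB"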

cpu_tuple_cost

This specifies the relative cost of processing each row during a query. The default of 0.01 is likely lower than optimal; increase it to 0.03 for more realistic costing.

Reporting and Logging Postgres Server Issues

logging_collector

This parameter should be "on" if log_destination includes stderr or csvlog.

log_directory

If logging_collector is "on", set this to a location outside the data directory so the logs are not part of base backups.

log_checkpoints

This should be set to "on".

log_min_duration_statement

To identify poorly performing queries early, set this parameter to 1s to log any query that runs for one second or longer. This is only a starting point: if it logs too many queries, set a higher value, fix the slowest queries, and then lower it again. Only try to fix queries that developers or end users actually perceive as slow.

For a transactional workload, a query running for more than 250ms is slow, while even a 5s query might not be considered slow for other workloads, such as business analytics.

log_line_prefix

The prefix should at least contain the time, the process ID, the line number, the user and database, and the application name.

Suggested value: '%m [%p]: u=[%u] db=[%d] app=[%a] c=[%h] s=[%c:%l] tx=[%v:%x] '

Note: Don’t forget the trailing space at the end of the value.

log_lock_waits

Set to "on". This parameter is essential for diagnosing slow queries.

log_statement

Set to "ddl". Besides leaving a basic audit trail, this helps determine when a catastrophic human error occurred, such as dropping the wrong table.

log_temp_files

Set to "0". This logs every temporary file created, which can indicate that work_mem is set too low.

log_connections

Set to "on". This logs all connections. It's useful for security purposes and for determining whether a connection pooler is needed or correctly configured.

log_disconnections

Set to "on". This, in turn, logs all disconnections.
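
Collected in one place, the logging recommendations above form this postgresql.conf sketch (the log_directory path is an example; any location outside PGDATA works):

logging_collector = on
log_directory = '/var/log/postgres'       # example path outside the data directory
log_checkpoints = on
log_min_duration_statement = 1s
log_line_prefix = '%m [%p]: u=[%u] db=[%d] app=[%a] c=[%h] s=[%c:%l] tx=[%v:%x] '
log_lock_waits = on
log_statement = 'ddl'
log_temp_files = 0
log_connections = on
log_disconnections = on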

timed_statistics (EPAS)

This controls timing data collection for the Dynamic Runtime Instrumentation Tools Architecture (DRITA) feature. Set this parameter to "on" to begin collecting.

Monitoring Autovacuum Processes for PostgreSQL on Linux

log_autovacuum_min_duration

Monitoring autovacuum activity will help you tune it. The suggested value is 0, which logs every autovacuum run.

autovacuum_max_workers

This is the maximum number of autovacuum worker processes. The default value is 3, and changing it requires a restart. Each table is vacuumed by only one worker at a time, so additional workers help by vacuuming more tables in parallel and more frequently. The default value is low, so increase this to 5.

autovacuum_vacuum_cost_limit

To prevent excessive load on the DB due to autovacuum, Postgres imposes an I/O quota: every read/write depletes the quota, and once it is exhausted, autovacuum sleeps for a fixed time. This parameter raises that quota, increasing the amount of I/O vacuum can perform. The default value is low, so increase it to 5,000.
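
In postgresql.conf form, the autovacuum suggestions above are:

log_autovacuum_min_duration = 0
autovacuum_max_workers = 5            # changing this requires a restart
autovacuum_vacuum_cost_limit = 5000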

Client Connection Defaults for EPAS and PostgreSQL Server

idle_in_transaction_session_timeout

Sessions that remain idle in a transaction can hold locks and prevent vacuum from doing its work. The suggested value is 10 minutes ('10min').

lc_messages

Log analyzers only understand untranslated messages. Set this to "C".

shared_preload_libraries

Adding pg_stat_statements has low overhead and high value. It is recommended but optional.
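
And the client-connection suggestions, again as a postgresql.conf sketch (shared_preload_libraries only takes effect after a restart):

idle_in_transaction_session_timeout = '10min'
lc_messages = 'C'
shared_preload_libraries = 'pg_stat_statements'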

Is PostgreSQL available for Linux?

Yes. Linux is one of the leading platforms on which PostgreSQL is developed and tested. Various Linux distributions can use PostgreSQL, including Ubuntu, CentOS, Fedora, and Debian.

Why use Linux for PostgreSQL?

Many organizations and developers prefer Linux as the operating system for PostgreSQL because:

  • Linux performs particularly well in server environments. PostgreSQL can harness Linux to deliver high throughput and low latency.
  • Linux is known to be stable and reliable particularly in server deployments. This is key since avoiding interruptions or system crashes is critical when managing databases.
  • Linux has excellent security features: user permissions, access controls, security patches, and file system encryption.
  • Linux and PostgreSQL share an open source philosophy, championing community contributions and transparent processes.
  • Linux is free to use and share, cutting down on costs.
  • Linux is highly flexible, open to configuring network settings, optimizing system resources, and other customizations for peak PostgreSQL performance.
  • Linux is cloud-compatible, making it easy to use PostgreSQL without needing physical infrastructure.
Which Linux type is best for PostgreSQL?

It depends on your needs and preferences.

  • Ubuntu: Ubuntu is widely used, with a large community and thorough documentation.
  • Red Hat Enterprise Linux (RHEL) and CentOS: RHEL and its community-driven equivalent CentOS are known for stability and reliability, perfect if you need long-term support and security updates.
  • Debian: If stability and predictability are what you need, Debian’s robust package management system is a good choice.
  • Fedora: If you are geared towards the latest features and updates, Fedora is the more innovative choice.
  • openSUSE: openSUSE is known for its flexibility and usability and can work in server and desktop environments.
  • Arch Linux: Arch Linux is also for those who need the latest features and updates and has a rolling release model with the latest software versions.
Where is Postgres installed on Linux?

On Debian-based systems, you can find PostgreSQL configuration files in the /etc/postgresql/<version>/main directory. For instance, if you install PostgreSQL 14, its configuration files can be found in /etc/postgresql/14/main.

How do I access Postgres on Linux?
  1. Open a terminal.
  2. Use the psql command followed by the necessary connection parameters. The basic syntax is: psql -U username -d database_name -h hostname
  3. Enter your password if required.
  4. Once you see a prompt like mydatabase=#, you may start executing SQL commands.
What is the difference between MySQL and PostgreSQL on Linux?

MySQL focuses more on simplicity and high performance while PostgreSQL is better for complex queries, transactional integrity, and SQL compliance.

MySQL supports slightly simpler data types such as integer, varchar, and text, while PostgreSQL supports a broader and more advanced range, including arrays, JSON, XML, and geometric types. MySQL is only partially SQL-compliant, whereas PostgreSQL holds fast to SQL standards.

Where is Postgres data stored in Linux?

The Postgres data directory, where all the files for databases, tables, and indexes are stored, can usually be found in /var/lib/postgresql/{version}/main/. However, the location may vary based on how Postgres was installed and configured.

How do I see users in Postgres Linux?
  1. Open a terminal.
  2. Connect to PostgreSQL with the psql command. Specifying the PostgreSQL user and database may be necessary.
  3. See the user list with the \du meta-command, or query the pg_roles catalog directly using SELECT rolname FROM pg_roles.
How do I check the Postgres version on Linux?

There are three methods you can use:

  • The first uses the psql command-line tool. Once connected to psql, you can find the Postgres version by querying SELECT version();.
  • The second uses the Postgres executable directly, so you don’t need to enter the psql interactive shell: open a terminal and enter sudo -u postgres postgres --version.
  • The third uses the package management system. If Postgres was installed through a package manager like yum on CentOS or apt on Ubuntu, you can find the version using package management commands. For Ubuntu or Debian, use dpkg -l | grep postgresql. For CentOS or Red Hat, use yum list installed | grep postgresql.
How do I check if Postgres server is running on Linux?

There are five methods you can use:

  • If your Linux uses systemd, you can use the command systemctl status postgresql.
  • You can use pg_isready if you specifically want to check the server’s status.
  • To filter for PostgreSQL processes, you can use ps aux | grep postgres.
  • You can also check the server’s status via pg_ctl status -D /var/lib/postgresql/{version}/main. Replace {version} with your PostgreSQL version number.
  • Find out if PostgreSQL listens on its default port using either sudo netstat -plnt | grep postgres or sudo ss -ltn | grep postgres.
How do I shut down a Postgres database in Linux?

There are four methods you can use:

  • If PostgreSQL was installed through a package manager like apt or yum, you can use sudo systemctl stop postgresql. This stops all the PostgreSQL clusters running on your system.
  • If multiple clusters are running, use sudo pg_ctlcluster {version} {cluster_name} stop. Replace {version} with your PostgreSQL version and {cluster_name} with your cluster name.
  • If you’re using older SysV-init systems, use sudo service postgresql stop.
  • You can also use sudo -u postgres pg_ctl -D /var/lib/postgresql/{version}/main stop. Replace {version} with your PostgreSQL version (e.g., 13) and adjust the data directory path (/var/lib/postgresql/{version}/main) if it’s different on your system.
