In January, one of my colleagues, Kuntal Ghosh, shared his benchmark of current and recent versions of PostgreSQL on an AWS m5.metal instance. In his benchmark, he showed how PostgreSQL’s performance has improved over the last few years for pgbench-type workloads.
Using Kuntal’s benchmark scripts, we decided to perform another benchmark for PostgreSQL 12.2 on a similar machine equipped with multiple SSDs. Kuntal’s work is described in this blog entry.
For the new benchmark, we made some changes, which we share below. After tweaking a few parameters and adding extra mount points for more IOPS, we were able to achieve 70,000 TPS for the pgbench read-write workload.
After performing the benchmark with PostgreSQL 12.2, we decided to run the same benchmark on the same machine and configuration for EDB Postgres Advanced Server 12.2. EDB Postgres Advanced Server was initialized in Redwood mode, which enables native compatibility with Oracle’s PL/SQL.
Following are the results:
The results show that EDB Postgres Advanced Server provides nominally the same performance as PostgreSQL for the pgbench workload. Environment details, including hardware, OS parameters, and Postgres configuration, were identical for PostgreSQL and EDB Postgres Advanced Server. The details are listed below.
Environment Details: Machine Type, Operating System & Database Version
Machine Type and Mount Points
For this benchmark, we used the pgbench utility, which ships with the EDB Postgres Advanced Server binaries, and the same machine type as in Kuntal’s blog.
Instance Details

Instance Name | Sustained All-Core Turbo | Logical Processors | Memory | Local Storage | EBS-Optimized Bandwidth | Network Bandwidth |
---|---|---|---|---|---|---|
m5d.metal | Up to 3.1 GHz | 96 | 384 GiB | 4 x 900 GB NVMe SSD | 14 Gbps | 25 Gbps |
For better I/O throughput, we created three mount points with additional Provisioned IOPS, as shown in the table below. The mounted disks were configured with a read-ahead value of 8192 and the deadline I/O scheduler (a sketch of the tuning commands appears after the table). The following table lists the mount points, sizes, Provisioned IOPS, read-ahead settings, and I/O schedulers of the disks we used for the benchmark.
Mount Point | Size | Provisioned IOPS | Read-Ahead Setting | I/O Scheduler |
---|---|---|---|---|
/pg_indexes | 200 GiB | 10000 | 8192 | deadline |
/pg_data | 600 GiB | 30000 | 8192 | deadline |
/pg_wal | 200 GiB | 10000 | 8192 | deadline |
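The read-ahead and scheduler settings can be applied per device. The device names below (nvme1n1, nvme2n1, nvme3n1) are placeholders, not from the original setup; substitute the NVMe devices that actually back the three mount points. A minimal sketch:

# Sketch: apply the read-ahead and I/O scheduler settings from the table.
# Device names are placeholders; use the devices backing your mount points.
for dev in nvme1n1 nvme2n1 nvme3n1
do
    # set read-ahead to 8192 (measured in 512-byte sectors)
    sudo blockdev --setra 8192 /dev/${dev}
    # select the deadline I/O scheduler for this device
    echo deadline | sudo tee /sys/block/${dev}/queue/scheduler
done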
Operating System
For this benchmark, we used the CentOS Linux release 7.8.2003 (Core) operating system and tuned the following parameters. We disabled Transparent Huge Pages (THP) because, in our past experience, we have not seen any benefit from THP for pgbench workloads. In some cases, we observed that THP reduced database performance by 2-3%.
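As a minimal sketch, THP can be disabled at runtime on CentOS 7 as follows (to persist the setting across reboots, it is typically added to the kernel command line or a tuned profile):

# Sketch: disable Transparent Huge Pages at runtime.
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
# verify; the active setting appears in brackets, e.g. "always madvise [never]"
cat /sys/kernel/mm/transparent_hugepage/enabled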
Database
We used the following initdb command with a custom WAL segment size and dedicated mount points for WAL and indexes. We chose a custom WAL segment size of 128 MB because it reduces the number of WAL files in the WAL directory, which helps Postgres perform better on write-heavy workloads.
${PGSUDO} ${INITDB} --pgdata=/pg_data/data \
--waldir=/pg_wal/wal \
--wal-segsize=128 \
--encoding=UTF-8
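Once the cluster is running, the chosen segment size can be verified from psql; for example:

# Sketch: confirm the WAL segment size selected at initdb time
${PGSUDO} ${PSQL} --port=${PGPORT} --command="SHOW wal_segment_size;"
# expected output: 128MB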
The database size and tablespaces used for this benchmark are described below.
Benchmark Method & Process
Pgbench
As mentioned above, we used pgbench for this benchmark. Pgbench runs the same set of SQL commands multiple times and can run the transactions over multiple concurrent database connections. After running the transactions, pgbench reports the average transaction rate (transactions per second). For more information on pgbench, click here.
Database Size of 200GB
To keep the benchmarking process smooth, we created a 200 GB database using the following pgbench command:
${PGSUDO} ${PGBIN}/pgbench --initialize \
--scale=${F_SCALE_FACTOR} \
--index-tablespace=pgindexes \
--user=${PGUSER} \
--port=${PGPORT} \
${PGDATABASE}
where the F_SCALE_FACTOR value was 15994.3800.
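As a rough sanity check (our own approximation, not from the original scripts): each pgbench scale unit adds 100,000 pgbench_accounts rows, roughly 13 MB of table and index data on disk, so a 200 GB target implies a scale factor near 16000:

# Sketch (assumption): estimate the scale factor for a target database size,
# assuming ~12.8 MB of on-disk data per scale unit; measure this per system.
TARGET_MB=204800     # 200 GiB expressed in MiB
MB_PER_UNIT=12.8
echo "estimated scale factor: $(echo "${TARGET_MB} / ${MB_PER_UNIT}" | bc)"
# prints 16000, close to the 15994.38 used above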
After creating the 200 GB database, we stopped Postgres and took a backup of the database using the following commands:
# copy data, indexes, and WAL in parallel, one rsync per mount point
${PGSUDO} rsync -ahW --no-compress ${PGDATA}/ ${PGDATA_BCKUP} &
${PGSUDO} rsync -ahW --no-compress ${PGINDEXES}/ ${PGINDEXES_BCKUP} &
${PGSUDO} rsync -ahW --no-compress ${PGWAL}/ ${PGWAL_BCKUP} &
# block until all three background copies complete
wait
We needed the backup to restore the database between runs. This helped us eliminate any unknown factors (like caching) from impacting the benchmarking process.
We used the database parameters mentioned in Appendix - I and pg_prewarm for warming the pgbench_accounts table’s data. The following SQL queries warm the cache with the pgbench_accounts table and its primary key index:
SELECT pg_prewarm('pgbench_accounts');
SELECT pg_prewarm('pgbench_accounts_pkey');
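Note that pg_prewarm ships as a contrib extension and must be created once in the benchmark database before these calls will succeed:

-- one-time setup: install the pg_prewarm extension in the benchmark database
CREATE EXTENSION IF NOT EXISTS pg_prewarm;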
After pre-warming the tables, we ran the pgbench command for a set of connections. We repeated the process three times for each set of connections. The following is sample bash code from the set of scripts we executed for this benchmark:
for ((run = 1 ; run <= ${NO_OF_RUNS} ; run++))
do
    # print the run number
    plog "RUN => ${run}"
    # create the log directory for this run
    plog "Creating log directory for run"
    mkdir -p ${LOGDIR}/${run}
    # start the database if it's not running
    plog "Starting Postgres if it's not started yet"
    start_stop_pg start
    for threads in ${PGBENCH_CONNECTIONS_LIST}
    do
        RUN_LOGFILE=${LOGDIR}/${run}/${threads}_conn.log
        plog "Running pgbench for ${threads} connections"
        pgbench_steps "${threads}" "${threads}" "${RUN_LOGFILE}"
        # back up the postgresql log file
        sudo mv ${PGDATA}/log/postgresql.log ${LOGDIR}/${run}
        sudo chown ${USER}:${USER} ${LOGDIR}/${run}/postgresql.log
        # restore the database from backup and reapply parameters
        restore_pg
        update_pg_parameters
    done
done
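The restore_pg and update_pg_parameters helpers are not shown here. A minimal sketch of restore_pg, assuming it mirrors the rsync backup step above so that every run starts from an identical on-disk state:

# Sketch (assumption): restore data, index, and WAL directories from the
# rsync backup taken earlier.
function restore_pg()
{
    plog "Stopping Postgres before restore"
    start_stop_pg stop
    # copy the three backups back in parallel, mirroring the backup step
    ${PGSUDO} rsync -ahW --no-compress --delete ${PGDATA_BCKUP}/ ${PGDATA} &
    ${PGSUDO} rsync -ahW --no-compress --delete ${PGINDEXES_BCKUP}/ ${PGINDEXES} &
    ${PGSUDO} rsync -ahW --no-compress --delete ${PGWAL_BCKUP}/ ${PGWAL} &
    # block until all three restores finish
    wait
}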
The definition of pgbench_steps mentioned in the above sample code is given below. In the definition, you can see that we use prepared statements (--protocol=prepared) and that ${F_THREADS} = ${F_CONNECTIONS}.
function pgbench_steps()
{
    typeset -r F_CONNECTIONS="$1"
    typeset -r F_THREADS="$2"
    typeset -r F_LOGFILE="$3"

    plog "Starting Postgres if it's not started yet"
    start_stop_pg start

    plog "Perform pg_prewarm"
    cat ${LIBSQL}/pgbench_prewarm.sql \
        | ${PGSUDO} ${PSQL} --username=${PGUSER} \
                            --port=${PGPORT} \
                            --host=${PGHOST} \
                            --dbname=${PGDATABASE}

    plog "Perform checkpoint"
    ${PGSUDO} ${PSQL} --command="CHECKPOINT;" \
                      --username=${PGUSER} \
                      --port=${PGPORT} \
                      --host=${PGHOST} \
                      --dbname=${PGDATABASE}

    plog "Perform pgbench"
    ${PGBIN}/pgbench --client=${F_CONNECTIONS} \
                     --jobs=${F_THREADS} \
                     --time=${DURATION} \
                     --protocol=prepared \
                     --username=${PGUSER} \
                     --port=${PGPORT} \
                     --host=${PGHOST} \
                     ${PGDATABASE} >${F_LOGFILE}
}
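Similarly, start_stop_pg is not defined in the post; a plausible minimal version using pg_ctl, assuming ${PGBIN} and ${PGDATA} as above, is:

# Sketch (assumption): start or stop the cluster with pg_ctl.
function start_stop_pg()
{
    typeset -r F_ACTION="$1"    # "start" or "stop"
    if [[ "${F_ACTION}" = "start" ]]
    then
        # start only if the server is not already running; -w waits for ready
        if ! ${PGSUDO} ${PGBIN}/pg_ctl -D ${PGDATA} status >/dev/null 2>&1
        then
            ${PGSUDO} ${PGBIN}/pg_ctl -D ${PGDATA} -l ${PGDATA}/log/startup.log -w start
        fi
    else
        # fast shutdown: abort active transactions and stop cleanly
        ${PGSUDO} ${PGBIN}/pg_ctl -D ${PGDATA} -m fast -w stop
    fi
}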
The following results show that with pgbench, we were able to achieve 70,000 TPS with both PostgreSQL 12.2 and EDB Postgres Advanced Server 12.2.
PostgreSQL 12.2 Results
EDB Postgres Advanced Server 12.2 Results
Please note: All the parameter settings and configuration mentioned in this document are specific to the pgbench test. We highly recommend tuning the system based on a benchmark defined for your workload and infrastructure.
Appendix - I: Database parameters used for benchmarking: