How to Benchmark PostgreSQL Using HammerDB Open Source Tool

April 27, 2020


This article discusses how to perform benchmark tests using the open-source tool HammerDB as an alternative to pgbench. We will review:

  • Hardware specifications
  • Operating system configurations
  • PostgreSQL configurations
  • Benchmarking specifications
  • Benchmarking results

The standard tool used to benchmark PostgreSQL is pgbench. In an earlier post, we discussed how to tune and benchmark PostgreSQL using this tool. Although pgbench is widely used for benchmarking, per its documentation it tests a scenario that is only loosely based on TPC-B. According to the TPC council, TPC-B doesn't reflect any real online transaction processing (OLTP) scenario; it is meant as a stress-testing tool. When pgbench is run with the default scripts, the server is bombarded with a huge number of similar, short-lived transactions.
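To see why the default workload is so uniform, here is the built-in "TPC-B (sort of)" transaction that every pgbench client repeats, paraphrased from the pgbench documentation (held in a shell variable here only so it can be printed):

```shell
# Default pgbench transaction, paraphrased from the pgbench documentation:
# one UPDATE each on the accounts, tellers, and branches tables, one SELECT,
# and one INSERT into the history table.
tpcb_script='
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime)
    VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;'
echo "$tpcb_script"
```

Every transaction is identical in shape and touches only a handful of rows, which is what makes the default pgbench workload a stress test rather than a realistic OLTP mix.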

TPC-C Benchmarking of PostgreSQL

The TPC-C benchmark is more complex than TPC-B because of its multiple transaction types, more elaborate database schema, and overall execution structure. It was approved by the TPC council in 1992 as an OLTP benchmark. The benchmark involves a mix of five concurrent transactions of different types and complexity (New-Order, Payment, Order-Status, Delivery, and Stock-Level), either executed online or queued for deferred execution. In this post, we will talk about how we performed TPC-C benchmarking of PostgreSQL using HammerDB, a widely accepted open-source benchmarking tool. HammerDB has become so popular that the open-source project is now hosted under the TPC council's GitHub organization.


Hardware Specifications

For the test, we’ve used a stand-alone system with the following specifications:

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                128
On-line CPU(s) list:   0-127
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             8
NUMA node(s):          8
Model name:            Intel(R) Xeon(R) CPU E7-8830 @ 2.13GHz
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              24576K
RAM:                   500GB



Operating System Configurations

Here are the operating system specifications that we used for the test:

Operating system        CentOS 7 x86_64
Data disk scheduler     mq-deadline
Data disk readahead     4MB
transparent_hugepage    enabled = false, defrag = false
File system             data=ext4, wal=ext4
vm.nr_hugepages         52600
vm.swappiness           1
net.core.somaxconn      1024
vm.max_map_count        3112960
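One sanity check worth making explicit: vm.nr_hugepages must at least cover shared_buffers (100GB, see the PostgreSQL settings below) in 2MB huge pages; reading the extra ~1,400 pages as headroom for PostgreSQL's other shared memory is our interpretation. A quick sketch:

```shell
# Minimum 2 MB huge pages needed to back a 100 GB shared_buffers.
# The configured value (52600) leaves some headroom on top of this.
SHARED_BUFFERS_GB=100
HUGEPAGE_KB=2048
pages=$(( SHARED_BUFFERS_GB * 1024 * 1024 / HUGEPAGE_KB ))
echo "minimum huge pages: $pages"
# Applied (as root) with: sysctl -w vm.nr_hugepages=52600
```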


PostgreSQL Configurations

We also need to tune the PostgreSQL parameters for better performance. You can find the details of these parameters in the PostgreSQL documentation (under Resource Consumption, Connections and Authentication, and Write-Ahead Log).

shared_buffers                  100GB
max_connections                 1000
min_wal_size                    20GB
max_wal_size                    100GB
checkpoint_timeout (secs)       900
maintenance_work_mem            1GB
checkpoint_completion_target    0.9
effective_io_concurrency        200
synchronous_commit              on
log_checkpoints                 on
wal_compression                 on
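The same settings translate directly into postgresql.conf; a minimal fragment (the huge_pages line is our addition, paired with the vm.nr_hugepages setting above, not something listed in the table):

```
# postgresql.conf fragment matching the table above
shared_buffers = 100GB
max_connections = 1000
min_wal_size = 20GB
max_wal_size = 100GB
checkpoint_timeout = 15min          # 900 seconds
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
effective_io_concurrency = 200
synchronous_commit = on
log_checkpoints = on
wal_compression = on
huge_pages = try                    # assumption: use the huge pages reserved above
```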


Benchmarking Specifications

We’ve installed HammerDB 3.2 from the GitHub repository. Here is the script to load 500 warehouses of TPC-C data into a running PostgreSQL server:

>> cat pgschemabuild.tcl

puts "SETTING CONFIGURATION"

global complete
proc wait_to_complete {} {
    global complete
    set complete [vucomplete]
    if {!$complete} {after 5000 wait_to_complete} else { exit }
}

dbset db pg
diset connection pg_host localhost
diset connection pg_port 5432
diset tpcc pg_count_ware 500
diset tpcc pg_num_vu 50
diset tpcc pg_partition false
diset tpcc pg_superuser postgres
diset tpcc pg_superuserpass postgres
diset tpcc pg_defaultdbase postgres
diset tpcc pg_user tpcc
diset tpcc pg_pass tpcc
diset tpcc pg_dbase tpcc

print dict
buildschema
wait_to_complete


We’ve configured the test script as follows:

>> cat pgrun.tcl

#!/bin/tclsh
proc runtimer { seconds } {
    set x 0
    set timerstop 0
    while {!$timerstop} {
        incr x
        after 1000
        if { ![ expr {$x % 60} ] } {
            set y [ expr $x / 60 ]
            puts "Timer: $y minutes elapsed"
        }
        update
        if { [ vucomplete ] || $x eq $seconds } { set timerstop 1 }
    }
    return
}

puts "SETTING CONFIGURATION"
dbset db pg
diset connection pg_host localhost
diset connection pg_port 5432
diset tpcc pg_driver timed
diset tpcc pg_rampup 5
diset tpcc pg_duration 25
diset tpcc pg_vacuum true
print dict
vuset logtotemp 1
loadscript
puts "SEQUENCE STARTED"
foreach z { 1 16 32 64 128 256 512 } {
    puts "$z VU TEST"
    vuset vu $z
    vucreate
    vurun
    runtimer 9000
    vudestroy
    after 5000
}
puts "TEST SEQUENCE COMPLETE"
exit
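With both scripts in place, the whole run can be driven non-interactively through the HammerDB command-line interface (hammerdbcli auto). A minimal wrapper, assuming HammerDB 3.2 is unpacked in a HammerDB-3.2/ directory next to the scripts (adjust the path for your install):

```shell
#!/bin/sh
# Drive the schema build and the timed test sequence through the HammerDB CLI.
# The install path below is an assumption; adjust to your environment.
HDB=./HammerDB-3.2/hammerdbcli
if [ -x "$HDB" ]; then
    "$HDB" auto pgschemabuild.tcl   # load the 500-warehouse schema
    "$HDB" auto pgrun.tcl           # run the 1..512 virtual-user sequence
else
    echo "hammerdbcli not found at $HDB"
fi
```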


We’ve created some scripts to automate the HammerDB benchmarking work. They can be found in the EnterpriseDB GitHub repository.


Benchmarking Results

As mentioned earlier, we performed the TPC-C test with 500 warehouses, a test duration of 25 minutes, and a ramp-up time of 5 minutes. All readings are the median of three runs.
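As a back-of-the-envelope check on how long the whole sequence takes (each of the seven virtual-user levels runs ramp-up plus timed duration; the 9000-second runtimer limit is only a safety cap):

```shell
# Approximate wall-clock time for the seven-level test sequence.
# Ignores schema build time and the 5-second pauses between levels.
RAMPUP_MIN=5
DURATION_MIN=25
LEVELS=7
total=$(( (RAMPUP_MIN + DURATION_MIN) * LEVELS ))
echo "approximate sequence time: $total minutes"   # 210 minutes
```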

[Chart: TPM and NOPM versus number of clients, per PostgreSQL version; not reproduced here]
NOPM stands for new orders per minute, and TPM stands for transactions per minute. TPM includes both committed and rolled-back transactions. Per the HammerDB guidelines, the ratio between TPM and NOPM should be consistent; any significant divergence indicates an error in the test setup. As we can see from the results, there is a huge performance difference between PG 9.5.20 and the later PostgreSQL versions as we increase the number of clients: the curve for PG 9.5.20 falls off along the x-axis, whereas the curves for the later versions remain largely flat as the number of clients grows.

In our previous benchmarking post, we summarized a few commits that likely contributed to the performance gain from PG 9.5.* to later versions. The results here reflect those improvements as well. As future work, we would like to explore the bottlenecks in the later versions that keep the curve from climbing further as we increase the number of clients.

