How Postgres Workload Reports Help Optimize Database Operations

September 18, 2024

Learn how this Postgres performance tool boosts DBA productivity

To successfully migrate applications from Oracle to PostgreSQL, organizations must provide tools to those most impacted by these changes – DBAs, developers, and IT.

Postgres Workload Reports is one tool that enables migrations to open source database solutions. It uses Oracle’s Automatic Workload Repository (AWR) report format for familiarity, advances database performance analysis, and improves diagnosis and troubleshooting.

The Postgres Workload Reports feature is available for PostgreSQL and EDB Postgres Advanced Server subscription customers.

Challenges to DBA Productivity

Oracle's AWR Reports are a standard, trusted performance monitoring and optimization tool in enterprise database environments, providing detailed insights for diagnosing and tuning database operations. The absence of a similar PostgreSQL tool can lead to a steep learning curve for Oracle DBAs when migrating to Postgres.

Transition challenges can be worse in mixed database environments, where administrators oversee both Oracle and Postgres databases, because the specific format and depth of data in AWRs are not replicated in Postgres reporting.

The inability to summarize, view, and share this information is not just a feature gap – it’s a significant barrier to diagnosing Postgres performance. These challenges impact DBA productivity in Postgres environments and, as a result, can lead to potential downtime of business applications.

Improving Postgres Performance Diagnostics

Postgres Workload Reports bridges the gap between Oracle and PostgreSQL diagnostics. It is a tool that mirrors Oracle AWR’s functionality and depth but is purpose-built for modern Postgres environments.

Postgres Workload Reports provides top-tier performance diagnostics for the Postgres ecosystem, with detailed metrics, easy-to-understand reports, and an AWR-like user interface to enhance organizational adoption. It equips DBAs with the tools needed to ensure high performance Postgres, supporting the uninterrupted operation of essential business applications.

By providing wait state information, Postgres Workload Reports is a valuable feature for DBAs. These reports also offer insight into top SQL statements and Postgres configuration information. As a result, Postgres Workload Reports allows DBAs and developers to quickly understand and resolve database performance issues by evaluating a single report in Markdown, PDF, or HTML format.

Postgres Workload Reports can also be sent to more experienced DBAs and experts for further analysis, strengthening organizational buy-in for Postgres performance diagnostics and troubleshooting.

A Unique Postgres Performance Tool

What sets Postgres Workload Reports apart from other Postgres performance tools is the ability to evaluate which queries were waiting, and on what, at a particular point in time. These analyses are made possible by the edb_wait_states extension. For instance, consider this output from a PWR report generated while HammerDB ran against a Postgres server.
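
Because the reports depend on that extension, a reasonable first step is to confirm it is available on the server. This is a minimal sketch that assumes edb_wait_states is shipped as a regular extension in your installation; the exact setup steps (including any shared_preload_libraries changes and restarts) are described in the PWR documentation.

  -- Check whether edb_wait_states is shipped with this server
  SELECT name, default_version, installed_version
  FROM pg_available_extensions
  WHERE name = 'edb_wait_states';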

First, let’s examine the server information section of the report:

Server Information

Shows general information about the server and the snapshot window.

  • server version - shows server version number.
  • architecture - architecture for which it is built.
  • system_identifier - unique identifier for the cluster.
  • redwood_mode - whether the cluster is created in redwood mode.
  • current_user - current user generating the report.
  • actual_start_snap_ts - actual start timestamp from which we have stat data available.
  • actual_end_snap_ts - actual end timestamp till which we have stat data available.
  • snapshot_duration - duration of the snapshot.

server version         PostgreSQL 16.2
architecture           x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-20), 64-bit
system_identifier      7298803313971566730
redwood_mode           False
current_user           pg_monitor
actual_start_snap_ts   2024-02-15 07:44:18.588779+00
actual_end_snap_ts     2024-02-15 07:54:16.587401+00
snapshot_duration      00:09:57.998622

General information about the server is readily available. However, special attention should be given to two specific items: the actual_start_snap_ts (2024-02-15 07:44:18.588779+00) and the actual_end_snap_ts (2024-02-15 07:54:16.587401+00).

These timestamps show that the database is being evaluated over a roughly 10-minute period. For this window, the report also gives the total DB time, which in this case was 184 seconds. In other words, during this 10-minute snapshot, the database server spent 184 seconds either executing queries or waiting on something before it could continue executing them.
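
One way to put that number in context: dividing DB time by the elapsed snapshot time gives the average number of sessions that were active (running or waiting) over the window. With a snapshot duration of 00:09:57.998622, roughly 598 seconds:

  average active sessions = DB time / elapsed time = 184 s / 598 s ≈ 0.31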

So, the two questions at the heart of many Postgres performance troubleshooting efforts are:

  1. Is the database executing queries or waiting on resources such as locks, disk I/O, or CPU?
  2. If the system is waiting, what is it waiting on?
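
On a live server, you can get a point-in-time answer to these questions from pg_stat_activity, which exposes each backend's current wait event. The query below is a generic sketch, not part of Postgres Workload Reports; the point of PWR is that the edb_wait_states extension samples this signal continuously, so the report answers these questions over the whole snapshot window rather than for a single instant.

  -- Point-in-time view of what active backends are doing right now.
  -- A NULL wait_event means the backend is executing on CPU.
  SELECT wait_event_type,
         wait_event,
         count(*) AS backends
  FROM pg_stat_activity
  WHERE state = 'active'
    AND pid <> pg_backend_pid()      -- exclude this monitoring session
  GROUP BY wait_event_type, wait_event
  ORDER BY backends DESC;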

This Postgres Workload Report helps us answer these two questions, as seen in this top wait events report.

Top wait events

Shows the total wait time for the top wait events, in seconds.

For wait event descriptions, see Section 28.2, "The Cumulative Statistics System," in the PostgreSQL documentation.

  • waitevent - wait event name (the special name CPU represents time spent working on CPU, that is, non-wait time).
  • wait_class - wait event type.
  • waittime - waiting time in seconds spent on this waitevent.
  • pct_dbtime - %dbtime spent waiting on this waitevent type.
waitevent       wait_class  waittime  pct_dbtime
ClientRead      Client      124       67.39
CPU             N/A         43        23.37
WALWrite        LWLock      4         2.17
DataFileExtend  IO          3         1.63
WALWrite        IO          2         1.09
relation        Lock        2         1.09
extend          Lock        2         1.09
SpinDelay       Timeout     1         0.54
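
The pct_dbtime column is simply each event's wait time expressed as a share of the 184 seconds of DB time. For ClientRead, for example:

  pct_dbtime = waittime / DB time × 100 = 124 / 184 × 100 ≈ 67.39%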

Of the 184 seconds of DB time evaluated, 124 were spent waiting on ClientRead events. According to Section 28.2 of the PostgreSQL documentation, this wait event indicates that the server was waiting to read data from the client.

What could this mean? A variety of things, including:

  1. This server had a HammerDB benchmark running against it. One possibility – and in this case, the most probable conclusion – is that the client test machine was overwhelmed and could not respond to the server fast enough.
  2. The client was waiting for user input, such as a return from a prompt.

In either case, it doesn’t mean that something on the server is limiting throughput. The bottleneck is on the client side.
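
To corroborate a finding like this on a live server, outside of PWR, a quick check against pg_stat_activity (again a generic sketch, not a PWR feature) lists the backends currently stalled on ClientRead. Idle sessions also report ClientRead while waiting for their next command, which is harmless; the interesting case is an active backend, such as a COPY ... FROM STDIN that is starved for client data.

  -- Active backends waiting on the client to send more data.
  SELECT pid, usename, state, wait_event, left(query, 60) AS query
  FROM pg_stat_activity
  WHERE wait_event = 'ClientRead'
    AND state = 'active';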

Looking at the top SQL statements, you can see the following:

Top SQL statements

  • dbtime - total dbtime spent by the sql statement.
  • waittime - total waittime spent by the sql statement.
  • cputime - total cputime spent by the sql statement.
  • top_waitevent - waitevent name on which this statement spent maximum time.
  • query - actual sql query.

Top 10 sql statements sorted by dbtime (high to low)

dbtime  waittime  cputime  top_waitevent  query
448     128       320      SpinDelay      COPY order_line (ol_o_id, ol_d_id, ol_w_id, ol_number, ol_i_id, ol_supply_w_id, ol_quantity, ol_amount, ol_dist_info, ol_delivery_d) FROM STDIN WITH (FORMAT CSV)
270     250       20       ClientRead     COPY stock (s_i_id, s_w_id, s_quantity, s_dist_01, s_dist_02, s_dist_03, s_dist_04, s_dist_05, s_dist_06, s_dist_07, s_dist_08, s_dist_09, s_dist_10, s_data, s_ytd, s_order_cnt, s_remote_cnt) FROM STDIN WITH (FORMAT CSV)
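
The columns are internally consistent: for each statement, dbtime is the sum of its wait time and CPU time.

  448 = 128 + 320   (COPY order_line)
  270 = 250 + 20    (COPY stock)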

Both of the top statements are COPY ... FROM STDIN WITH (FORMAT CSV) bulk loads issued by HammerDB. Taken together with the dominant ClientRead wait seen earlier, this indicates that the bottleneck during this period was reading CSV data from STDIN, that is, waiting on the client machine rather than on the server.

Getting information about what the server is waiting on has proven extremely difficult in the past. Postgres Workload Reports now makes it simple. You can evaluate wait events for any period you’d like and determine what issues the system may be facing.

Empower DBAs and Avoid Costly Outages

The inability to summarize, view, and share information about top SQL events, wait events, and session activity is a significant barrier to diagnosing database performance in Postgres. This Postgres feature gap has adversely impacted DBA productivity and, as a result, could lead to downtime of business applications. Now, Postgres Workload Reports equip DBAs with the necessary tools to ensure high performance, supporting the uninterrupted operation of essential business applications.

Postgres Workload Reports help DBAs understand system behavior nuances like no other Postgres tool can. This level of detail is invaluable for diagnosing and resolving complex database performance issues. Postgres Workload Reports help customers avoid costly outages by proactively optimizing Postgres, with a deeper understanding of what is happening on their revenue-generating databases.

How do I get 24/7 PostgreSQL support?

EDB Remote DBA Services provide comprehensive, cost-effective 24/7 Postgres expertise. They allow you to stay adequately staffed and focus on the development projects that move the needle for your business.

Remote DBAs proactively monitor and address issues before they become problems to minimize unplanned downtime. They apply patches and secure systems, ensuring strong governance and smooth operations. EDB Remote DBA Services is ideal for reducing costs, optimizing systems, and enabling growth and transformation across your business.

What are the cost benefits of remote DBA services?

Remote DBA services can:

  • Reduce unplanned downtime and ensure system continuity, saving time and resources on backups, upgrades, security patching, and database optimization
  • Provide 24/7 coverage without increased staffing and costs
  • Reallocate current DBA resources to strategic initiatives

What are the most recommended PostgreSQL support services?

The following services are recommended to ensure peak performance:

  • Reports on ongoing environment health and activities
  • Strategic reviews on trends, uptime, business alignment, and opportunities
  • 24/7 monitoring and automated monitoring implementation
  • Real-time alerting across multiple dimensions and customized alert thresholds
  • Disaster recovery planning and production backup verification
  • Comprehensive performance optimization
  • Maintenance of high availability architecture
  • Encryption and secured access best practices

What is the role of a technical account manager?

Technical account managers provide much-needed technical expertise for PostgreSQL, as well as value-adding services such as project management, database optimization, and tooling support. A technical account manager is especially helpful if your organization is new to PostgreSQL and interested in utilizing the database to its fullest potential.

What is tuning in PostgreSQL?

Tuning in PostgreSQL means optimizing the database's performance and efficiency by adjusting various configuration parameters. This involves fine-tuning settings related to memory usage, CPU allocation, disk I/O, and query execution to ensure the database operates at its best. Effective tuning can significantly enhance query performance, reduce latency, and improve the overall responsiveness of applications that rely on the PostgreSQL database.
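
As a rough illustration of what parameter tuning can look like in practice, the following sketch adjusts a few commonly tuned memory and planner settings with ALTER SYSTEM. The values shown are placeholders, not recommendations; appropriate settings depend entirely on your hardware and workload.

  -- Illustrative values only; size these for your own server.
  ALTER SYSTEM SET shared_buffers = '4GB';          -- shared data cache (requires a restart)
  ALTER SYSTEM SET work_mem = '64MB';               -- per-sort/hash memory for queries
  ALTER SYSTEM SET effective_cache_size = '12GB';   -- planner hint for available OS + PG cache
  ALTER SYSTEM SET random_page_cost = 1.1;          -- lower for SSD-backed storage
  SELECT pg_reload_conf();                          -- apply reloadable settings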

What are the different types of DBAs?
  • System DBA
    Installs new versions of database management systems, applies maintenance fixes from vendors, sets system parameters, and tunes the operating system for optimal performance.
  • Database architect
    Focuses on creating logical data models, translating them into physical designs and creating backup and recovery strategies for new systems.
  • Application DBA
    Focuses on databases specific to certain applications or subsets of applications. Writes and debugs complex SQL queries, does performance tuning, and manages database changes.
  • Task-oriented DBA
    Focuses on highly specific, critical tasks such as backup and recovery.
  • Performance analyst
    Analyzes query performance, identifies bottlenecks, and recommends improvements.
  • Data warehouse administrator
    Sets up and configures the data warehouse environment and ensures the warehouse can handle complex queries and reporting needs.

What is the difference between a database administrator and a data administrator?

A database administrator handles databases’ technical implementation, management, and maintenance. A data administrator handles data governance, integration, and strategic planning. The former uses their technical skills more, while the latter uses their analytical skills more.

What is data consolidation?

Data consolidation is the process of integrating data from multiple sources into a single target location, such as a data warehouse, to create a unified view of information. Doing so eliminates duplicates, standardizes formats, and ensures accuracy and reliability.

What are the main challenges of Oracle to PostgreSQL migrations?

Numerous differences exist between the two databases, including variations in data types, SQL syntax, and procedural languages, as well as a wide range of compatibility issues that may arise when applications are transferred.

How does EDB provide Oracle compatibility for PostgreSQL?

EDB has developed the EDB Postgres Advanced Server (EPAS), an advanced version of PostgreSQL that provides built-in support for many Oracle-specific data types, object types, keywords, procedural constructs, and additional features. We regularly assess migration hurdles and introduce new compatibility attributes with each major release of EPAS. This solution is crafted to mirror Oracle’s code, drivers, and queries while providing essential support for specific Oracle features or workloads. Such capabilities could potentially cut down code and application rewrites by 80% or more. Furthermore, EDB Postgres Advanced Server ensures robust application security to mitigate migration risks and delays.

Enjoy Optimal Postgres Performance with EDB

Get comprehensive 24x7 support for your database operations
