Memory Matters

August 06, 2014

Database performance and hardware selection are complicated topics, and a great deal has been written about them over the years by many very smart people, such as Greg Smith, who wrote a whole book about PostgreSQL performance.  In many cases, answering performance questions requires a deep understanding of software and hardware characteristics, along with careful study and planning.

But sometimes the explanation is something very simple, such as "you don't have enough memory".

There's a very old joke I must have read for the first time at least ten years ago, which you can still find circulating around the Internet.  It goes approximately like this. Abraham (the biblical patriarch) intends to install Windows XP on his 386 PC.  Isaac points out that the machine doesn't have enough memory to run Windows XP.  Abraham replies: "Do not worry, my son; God will provide the RAM."

I'm not sure how well this joke has aged.  I remember a time when discussion of buying a new computer tended, at least in my family, to involve a lot of discussion of how big the hard disk was and how much RAM it had.  These days, most people I know assume that if they buy a new computer, the memory and storage will be sufficient for their needs.  In many cases, that assumption is valid.  If you're anything like me, you probably know how much flash storage your smartphone has, but not how much actual RAM it has.  And that's OK, because there's probably no reason you should care.

But you should care a lot how much RAM your database server has.  Over and over, I've seen people running very large databases on systems with very small amounts of RAM.  Because their access is concentrated in a small percentage of the database, they experience very high cache hit ratios (well over 99%), little I/O, and generally good performance, despite the fact that the database is anywhere between 4 and 40 or more times the size of physical memory.
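
If you want a quick sanity check on your own system, the standard pg_stat_database statistics view tracks block reads and buffer hits, so a rough sketch along these lines will show a per-database hit ratio.  Keep in mind that blks_hit counts only hits in PostgreSQL's shared_buffers; reads satisfied from the operating system's page cache still show up as blks_read, so the true hit ratio including the OS cache is higher than this number suggests.

    SELECT datname,
           blks_hit,
           blks_read,
           round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2)
               AS shared_buffers_hit_pct
    FROM pg_stat_database
    ORDER BY blks_hit + blks_read DESC;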

Then something happens.  It could be a pg_dump or a hot backup, which involves reading the whole database.  It could be an anti-wraparound vacuum on a large table.  It could be a bulk load.  It could be an additional task running on the machine.  Whatever it is, it pushes some database pages previously cached by the operating system out of the operating system page cache.  The next access to those pages requires reading them from disk.  The cache hit ratio drops, and system performance drops dramatically.  On Linux, where processes that are waiting for I/O count towards the system load average, an enormous spike in the system load average often results.
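
Some of these events can at least be anticipated.  Anti-wraparound vacuums, for example, are triggered by transaction ID age, so a sketch like the following (using the ordinary pg_class catalog and the age() function) shows which tables in the current database are approaching autovacuum_freeze_max_age, which defaults to 200 million transactions:

    SELECT c.relname,
           age(c.relfrozenxid) AS xid_age,
           current_setting('autovacuum_freeze_max_age')::int AS freeze_max_age
    FROM pg_class c
    WHERE c.relkind = 'r'
    ORDER BY age(c.relfrozenxid) DESC
    LIMIT 10;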

Under any circumstances, reading from disk is vastly slower than reading from memory, and on top of that, sequential disk I/O is typically 10 to 100 times faster than random I/O.  Unfortunately, the task that is evicting data from memory is usually reading or writing sequentially, while the underlying database workload accesses its working set of pages more or less at random.  The result is that data is removed from the cache at a vastly higher rate than it can be read back in.  Even after the bulk operation terminates and the cache-purging ceases, it can take a painfully long time, sometimes many hours, for random I/O to bring all of the hot data back into memory.
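
To put some purely illustrative numbers on that: a single spinning disk might stream 100MB per second sequentially but manage only around 150 random 8kB reads per second, which is a little over 1MB per second.  At those rates, a bulk operation can push data out of the cache nearly a hundred times faster than random reads can pull it back in, and re-warming even 50GB of hot data works out to something like twelve hours.  Your hardware will have different numbers, but the ratio is what hurts.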

All of this pain can be avoided by buying enough memory.  It's true that database sizes are growing larger and larger, and more and more people have terabyte or even petabyte-scale databases that can't ever be cached fully in RAM.  But it's also true, I think, that the number of people who can fit their database in memory today, for a reasonable price, is larger than ever before.  I've seen systems recently that support up to 192GB of memory per CPU at very reasonable prices, and I've run across or been told about systems with 1TB or even 2TB of RAM that are described as commodity hardware.  If your PostgreSQL database is smaller than that, and you don't have enough RAM to keep all of it in memory at all times, why not?
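
Figuring out where you stand is easy; pg_database_size() and pg_size_pretty() are built in, so something like this will tell you how big each database actually is:

    SELECT datname,
           pg_size_pretty(pg_database_size(datname)) AS size
    FROM pg_database
    ORDER BY pg_database_size(datname) DESC;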

There are a number of good answers to that question.  For example, maybe your database is very write-heavy, so that performance is limited by the rate at which data can be flushed to disk.  In that case, adding RAM might not help very much.  Or, maybe your database executes mostly batch jobs, so that you can control the number of tasks running at one time, and aren't sensitive to query latency.  Or, maybe your workload is heavily CPU-bound, so that even if the operating system cache is wiped out completely, the impact is not too severe.  But if you don't have an answer to that question, you probably ought to just buy more RAM.
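
If you're not sure which of these categories you fall into, pg_stat_database again offers a very rough first cut: comparing the number of rows read to the number of rows inserted, updated, and deleted at least hints at whether the workload leans read-heavy or write-heavy.  Treat this as a sketch, not a diagnosis; it says nothing about CPU use or latency sensitivity.

    SELECT datname,
           tup_returned + tup_fetched AS rows_read,
           tup_inserted + tup_updated + tup_deleted AS rows_written
    FROM pg_stat_database
    ORDER BY tup_inserted + tup_updated + tup_deleted DESC;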

Two final points to consider:

1. There is often a dramatic performance difference between having enough RAM and not quite enough RAM.  For example, consider repeatedly reading 16GB of data in a sequential fashion.  If you have 15GB of RAM available to cache that data, the operating system will apply an LRU policy, and every read will require a physical I/O; your cache hit ratio will be 0%.  But if you have 17GB of RAM, all of that data will stay resident, and your cache hit ratio will be 100%.  Although the analysis is more complicated, essentially the same logic applies to data that you access randomly rather than sequentially: because a cache miss costs orders of magnitude more than a hit, even a small percentage of misses can dominate total runtime.  If the access pattern is relatively uniform, cache hit ratios will surge, and performance will increase dramatically, only once you have enough memory to fit all of the frequently-accessed data.  If adding memory doesn't seem to help, it's possible that you just haven't added enough.

2. It is often hard to tell how much RAM you really need for good performance, which means you might not realize that you're running low until things really take a turn for the worse.  Disk space utilization grows gradually, so it's easy to monitor.  CPU utilization fluctuates more, but you can take a measurement during a busy period to get an idea.  Memory is different: because of the way operating system and PostgreSQL caching works, it's likely that substantially all of your memory will be in use all the time, but it's very hard to tell how much of it is holding data that is accessed frequently and how much is holding data that is touched only occasionally.  Because RAM is now relatively cheap, it's a good idea to err on the high side.
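
One partial window into this, on the PostgreSQL side, is the pg_buffercache contrib extension, which shows what is currently sitting in shared_buffers.  A sketch like the one below (assuming the default 8kB block size) lists the relations occupying the most buffers; just remember that it says nothing about the operating system's page cache, which is usually the larger share of the picture.

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    SELECT c.relname,
           count(*) AS buffers,
           pg_size_pretty(count(*) * 8192) AS cached
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
    WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                                WHERE datname = current_database()))
    GROUP BY c.relname
    ORDER BY count(*) DESC
    LIMIT 10;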
