Solving the Cloud Database Investment Mystery

January 15, 2023

Moving databases to the cloud is currently one of the most popular IT activities in enterprises around the world. According to Gartner, spending on cloud databases is outpacing on-premises spend, with 95% of new workloads expected to be deployed on cloud-native platforms by 2025. And the benefits of being in the cloud are widely recognized, especially with a fully managed service, where your cloud or database provider handles the day-to-day care and feeding of the database while your IT team is freed up to work on projects that add value to your business.

But one of the biggest questions we get asked is: how do I balance my cloud database spend to drive the greatest return? Because pricing transparency and cost structures differ across the major fully managed database solutions, the only way to really understand how to balance this investment is to put all options on equal footing and benchmark the results. What’s needed is a methodology for measuring performance across the different platform and component options.


Faster or Cheaper?

Why do enterprises want to adopt a fully managed cloud database? The vast majority of our customers say they want to be able to do more for less. The best way to measure this and understand what it means is to consider the cost per transaction. It is more nuanced than simply choosing the solution that is faster or cheaper: one option might run faster but at a higher price, while another may be cheaper per hour but take longer to run the same job.
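To make this concrete, here is a minimal sketch of a cost-per-transaction comparison. The prices and throughput figures are purely hypothetical, not quotes from any provider; the point is only to show how the calculation works.

    # Hypothetical comparison of two fully managed database options.
    # All prices and throughput numbers are illustrative, not real quotes.
    def cost_per_million_transactions(hourly_price_usd, tx_per_second):
        """Cost to process one million transactions at a sustained rate."""
        hours_needed = 1_000_000 / (tx_per_second * 3600)
        return hours_needed * hourly_price_usd

    # Option A: faster, but more expensive per hour.
    option_a = cost_per_million_transactions(hourly_price_usd=2.40, tx_per_second=900)
    # Option B: cheaper per hour, but slower.
    option_b = cost_per_million_transactions(hourly_price_usd=1.10, tx_per_second=350)

    print(f"Option A: ${option_a:.2f} per million transactions")  # ~$0.74
    print(f"Option B: ${option_b:.2f} per million transactions")  # ~$0.87

In this made-up example, the option that looks cheaper by the hour actually costs more per transaction, which is exactly the kind of result that the hourly price alone would hide.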

In database cloud computing, there are four fundamental components required to run your database: compute, memory, disk, and network. Each of these components costs something from the cloud provider—Amazon Web Services, Microsoft Azure, Google Cloud Platform, etc.

To best compare solutions, you should seek to minimize the variables and define as many constants as possible in order to determine what’s best for your business.

The cloud providers typically offer a combination of these components using so-called “t-shirt” pricing - that is, you can get different combinations of CPUs, memory, and network capacity in different size tiers. A simple example is “small” with 4 CPUs and 32 GB RAM, “medium” with 8 CPUs and 64 GB RAM, “large” with 16 CPUs and 128 GB RAM, and so on. These would be your variables. The constants should be the storage configuration, database size, session concurrency, and workload. By minimizing the variables, you can more easily assess what your cost per workload is going to be, with everything else held constant.
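As a rough sketch of that approach, the snippet below holds the workload, database size, and storage configuration constant and varies only the instance size. The sizes, prices, and runtimes are invented for illustration; plug in your own measurements.

    # Hypothetical "t-shirt" sizes; prices and runtimes are illustrative only.
    # Constants: same database size, same storage configuration, same workload.
    sizes = [
        {"name": "small",  "vcpus": 4,  "ram_gb": 32,  "price_per_hour": 0.50, "workload_hours": 4.0},
        {"name": "medium", "vcpus": 8,  "ram_gb": 64,  "price_per_hour": 1.00, "workload_hours": 2.2},
        {"name": "large",  "vcpus": 16, "ram_gb": 128, "price_per_hour": 2.00, "workload_hours": 1.3},
    ]

    for s in sizes:
        cost_per_run = s["price_per_hour"] * s["workload_hours"]
        print(f'{s["name"]:>6}: ${cost_per_run:.2f} per workload run '
              f'({s["vcpus"]} vCPUs, {s["ram_gb"]} GB RAM)')

With everything else held constant, the cost per workload run is the number worth comparing, not the hourly price of each size.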


Cost Calculation Challenges

There are several challenges with comparing costs across database and cloud providers. The trick is to find a way to do like-for-like comparisons.

Challenge #1: Defining the right compute combination. As an example, Amazon offers 55 different combinations of EC2 machines with 8 cores, with prices ranging from $102 to $825 per month. That is a huge range, so how do you know which combination is the optimal choice?
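One way to narrow a catalog of that size is to rank each candidate by what you pay per unit of measured work rather than by sticker price. The instance names, prices, and throughput figures below are invented purely for illustration.

    # Hypothetical 8-core candidates; names, prices, and throughput are invented.
    candidates = [
        {"name": "general-8", "monthly_usd": 180, "tx_per_sec": 420},
        {"name": "memory-8",  "monthly_usd": 310, "tx_per_sec": 610},
        {"name": "compute-8", "monthly_usd": 250, "tx_per_sec": 580},
    ]

    # Rank by dollars per unit of benchmarked throughput, not by price alone.
    for c in sorted(candidates, key=lambda c: c["monthly_usd"] / c["tx_per_sec"]):
        print(f'{c["name"]:>10}: ${c["monthly_usd"] / c["tx_per_sec"]:.2f} per tx/s per month')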

Challenge #2: Defining your storage needs. Cloud-based storage has the greatest impact on cost. Each storage option has different attributes for IOPS (input/output operations per second) and throughput, typically measured in MB/s (megabytes per second). Part of the challenge here is understanding what you actually get versus the advertised “maximum up to” figure. A good analogy is automobile fuel economy: the manufacturer reports estimated miles per gallon and notes that “your mileage may vary.” How you drive, what kind of gas you use, and even the weather can affect those numbers, so it is really difficult to get a true measure of performance. The same is true when estimating cloud database cost and performance.

Challenge #3: Understanding storage costs. Suppose you think you need 315 GB of storage. Depending on the cloud, you will typically have to pay for 512 GB of storage even though you only intend to use 315 GB, because the next tier down only goes to 256 GB. You also need to consider what IOPS and MB per second you need - and of course, these all have significant and immediate cost implications. Furthermore, Amazon and Azure calculate storage costs differently, so if you are using multiple cloud providers, the challenge of making a like-for-like comparison is compounded.
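Here is a small sketch of that provisioning effect, assuming (purely for illustration) power-of-two storage tiers and a flat price per GB. Real provider tiers, IOPS allocations, and prices differ, which is the point of the challenge.

    import math

    # Assumption for illustration: storage is sold in power-of-two tiers (GB)
    # at a flat rate per GB-month. Actual tiers and pricing vary by provider.
    def provisioned_tier_gb(needed_gb):
        """Smallest power-of-two tier that covers the requested capacity."""
        return 2 ** math.ceil(math.log2(needed_gb))

    needed_gb = 315
    tier_gb = provisioned_tier_gb(needed_gb)   # 512, since 256 is too small
    price_per_gb_month = 0.10                  # hypothetical flat rate
    print(f"Need {needed_gb} GB, pay for {tier_gb} GB: "
          f"${tier_gb * price_per_gb_month:.2f}/month instead of "
          f"${needed_gb * price_per_gb_month:.2f}/month if billed exactly")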


Benchmarking Cloud Cost/Performance

If you are overwhelmed by all the knobs that can be turned to work out your optimal cost per workload, and therefore your budget, you’re not alone! It is a complex subject, and it takes some serious thought and planning to get a true picture of cloud database costs so that you can plan your investment and ensure you are truly getting more done for less. Fortunately, we have done this work for you and have developed a benchmark methodology to measure platform performance for the major fully managed Postgres services.

We will be discussing this in a webinar on December 14, 2022. This virtual event will be a master class in understanding how to optimize your cloud database investment. We’ll discuss the challenges in more detail and describe a sizing methodology that can help remove the mystery and offer a like-for-like comparison of the different fully managed Postgres offerings. The webinar will include a live benchmark of Postgres across Azure, AWS, and Google Cloud, which you won’t want to miss.

Sign up to attend our webinar here!
