5 Cloud Architecture Considerations That Every Enterprise Should Know

October 20, 2021

More and more applications are being built in cloud environments, for example as orchestrated deployments of virtual machines, load balancers, and other services, and increasingly as microservices running in containers on Kubernetes and similar orchestration platforms.

Whilst it's clear that deploying with technologies such as Kubernetes requires a different approach to a traditional "bare metal" deployment, the same is really true of any type of cloud deployment. Engineers need to understand the services provided by their cloud vendor and architect their applications and deployments to make proper use of them, ensuring data remains secure, highly available, and quick to access.

In this blog post, we will look at some of the key aspects of cloud-hosted PostgreSQL instances and what you should consider when designing your deployments. Consideration is given to DBaaS services from the cloud vendors and those provided by database vendors, as well as "home rolled" deployments on virtual instances or in Kubernetes.

Note that the terminology used in this document is primarily that used in Amazon's AWS services. This is because their terminology is arguably the most well known and it makes sense to be consistent. It is not a recommendation to use AWS over any of the other cloud environments such as Google Cloud or Microsoft Azure. There are pros and cons to all of them that may or may not apply to your organization, and you should assess which offers the services that are the best fit for your requirements.

 

1) How do you keep your database secure?

In a traditional deployment of a database in a privately hosted environment, a lot of emphasis is placed on the fact that the entire network and hosting facilities are not public and can only be accessed by authorised personnel (or applications) from within the organization. Trust is placed heavily on the physical security of the servers, and on network controls such as border routers with strict intrusion prevention rules (firewalls) and intrusion detection.

When we deploy in the cloud we're running on someone else's servers, in someone else's datacenter, on a network that's designed to be accessed via the public internet.
 


Physical Security

The physical security of the systems is largely out of our control. We must rely on the cloud provider to take appropriate measures to ensure their facilities and the equipment within them are secure. When selecting a cloud provider, be sure to research and understand the work they've undertaken to ensure the security of your data. System and Organization Controls (SOC) 1, 2, and 3 reports are commonly seen; these are audit reports defined by the American Institute of Certified Public Accountants, with SOC 2 and SOC 3 assessing five "Trust Services Criteria" (security, availability, confidentiality, processing integrity, and privacy) at different levels of detail. Other standards to look out for include ISO/IEC 27001, 27017, and 27018.

 

Network Security

When running in the cloud, we cannot rely on our own border routers (or internal firewalls) to prevent connections to our database servers. Most large cloud providers offer services that, used appropriately, can deliver equivalent levels of security.

Virtual Private Cloud, or VPC, is a term originally coined by Amazon in AWS. In the early days of AWS, when you created a virtual instance it was effectively just a virtual machine connected to the internet. In order to provide additional security (as well as to help minimise use of IPv4 addresses), VPCs were created. These are essentially virtual private networks hosted in the cloud that can consist of one or more IPv4 or IPv6 subnets, spread across one or more availability zones (think: separate datacenters) in a single region.

Virtual instances and other services, such as Kubernetes clusters and Database as a Service (DBaaS) instances, can be deployed within a VPC so that they are not directly accessible from the internet. Access to specific services within a VPC can be managed through elastic IP addresses, load balancers, VPN endpoints, and other similar services as required. A typical architecture might have one or more subnets running web servers, with public access to them being possible only through a carefully secured load balancer. The web servers might access a database instance hosted in a separate subnet within the VPC, with security rules restricting access to only the web servers, and only to the database server port.

When deploying in the cloud, make sure that you understand how VPCs, subnets, routers and so on operate in the environment you choose. Isolate your database servers as much as you possibly can, allowing access only from the servers that run your application. If your application doesn't need to be accessed by the general public, configure VPN endpoints so the VPC acts like an extension of your internal network, and block all other ingress.

Most cloud providers offer virtual firewalls that can be used to secure services and VPCs. Wherever this is presented as an option, make use of it to lock down access to the resource. Follow the principle of least privilege; deny all access except that which is required for operation. Don't forget to leave a secure path for your system administrators to log in as needed to perform routine maintenance and fault diagnostics. This should be kept as independent from any other access paths (e.g. for users visiting a website) as possible; one option might be to only allow SSH access to the VPC from a dedicated VPN endpoint which only the administrators can connect to.
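As an illustration of that least-privilege approach, here is a minimal sketch using Python and boto3 (assuming an AWS deployment); the security group IDs and VPN subnet are hypothetical placeholders, not values from any real environment.

```python
# A minimal sketch using boto3 (the AWS SDK for Python); all group IDs and
# the CIDR below are hypothetical placeholders for your own environment.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Allow only the web tier's security group to reach PostgreSQL (port 5432)
# on the database instances' security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaaaaaaaaaaaaaaa",  # database security group (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": "sg-0bbbbbbbbbbbbbbbb"}],  # web tier (placeholder)
    }],
)

# Separately, allow SSH only from the administrators' VPN subnet.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaaaaaaaaaaaaaaa",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "10.8.0.0/24", "Description": "admin VPN"}],
    }],
)
```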

When your applications connect to the database server, consider using certificate-based authentication. When using PostgreSQL this can be configured to provide mutual authentication in which both the client and server validate each other's certificates. This can help prevent man-in-the-middle attacks and "spoofing" of the database servers, should an attacker gain access to the VPC somehow. Use of certificates for application authentication is also arguably more secure and convenient than using password authentication.
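For example, a client connection configured for mutual TLS might look like the following sketch using psycopg2 (libpq connection parameters); the host name and file paths are placeholders, and the server must be configured separately to present its own certificate and to require and verify client certificates.

```python
# A sketch of a mutually authenticated PostgreSQL connection using psycopg2.
# Host and certificate paths are placeholders; the server side must also be
# configured (ssl = on, client certificate verification) for full mutual TLS.
import psycopg2

conn = psycopg2.connect(
    host="db.internal.example.com",
    dbname="appdb",
    user="appuser",
    sslmode="verify-full",           # client verifies the server certificate and host name
    sslrootcert="/etc/app/ca.crt",   # CA used to validate the server's certificate
    sslcert="/etc/app/client.crt",   # client certificate presented to the server
    sslkey="/etc/app/client.key",    # client private key
)

with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
```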

Finally, it's worth noting that security is not just about keeping unauthorized users out of your application and database; it's also about ensuring your data remains available and accessible. Ensure you make use of the services provided by your cloud provider to perform regular backups - and make sure you test them! If you don't test your backups regularly, you should assume you do not have backups. It is also highly recommended to make use of database replicas as another type of backup (albeit, not one that can replace normal backups). Most cloud providers organise their infrastructure into multiple availability zones (data centers) within each geographical region. By keeping a replica of your database in multiple locations, you maximise the chances of being able to stay online in the event of an outage in any one location.
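Returning to the point about testing backups: as a trivial illustration, the sketch below checks that a custom-format pg_dump archive is readable, restores it into a scratch database, and runs a basic sanity query. The paths, host names, and database names are placeholders, and a real test should also verify application-level expectations such as row counts.

```python
# A minimal backup sanity check: verify a custom-format pg_dump archive is
# readable, restore it into a scratch database, and run a simple query.
# Paths, database names, and connection details are placeholders.
import subprocess
import psycopg2

BACKUP = "/backups/appdb-2021-10-20.dump"

# 1. Confirm the archive can be read and its table of contents listed.
subprocess.run(["pg_restore", "--list", BACKUP], check=True, capture_output=True)

# 2. Restore into a scratch database on a test instance (not production!).
subprocess.run(["createdb", "-h", "test-db.internal", "restore_check"], check=True)
subprocess.run(
    ["pg_restore", "-h", "test-db.internal", "-d", "restore_check", "--no-owner", BACKUP],
    check=True,
)

# 3. Run a basic sanity query; a real test would check application-level invariants.
with psycopg2.connect(host="test-db.internal", dbname="restore_check") as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM pg_stat_user_tables")
        print("tables restored:", cur.fetchone()[0])
```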

Securing databases in the cloud requires an understanding of the cloud environment and the way it's intended to be used. Whilst in traditional environments many aspects of security are often handled by dedicated professionals (physical security guards, network administrators, database administrators and so on), in a cloud environment those tasks are handled partly by the cloud provider and partly by what is often a single DevOps team that needs to understand the entire deployment and its security architecture.

 

2) How do you keep your database performant?

At the highest level, performance tuning PostgreSQL in the cloud is largely the same as in any other environment; however, the differences grow the further down the stack you go, and they depend heavily on the type of deployment you create.

If you use virtual instances to self-build and manage a deployment, then the opportunities for tuning are greatest. You have full access to the operating system and kernel configuration, the storage layout and provisioned IOPS to guarantee minimum performance (subject to the block storage options available), the PostgreSQL installation and configuration, and of course, the database schema. You can also select from a much wider range of machine instance types, choosing the characteristics best suited to your needs. You can make that access available not just to your own team, but also to dedicated database support vendors. However, this type of deployment requires far more design work and effort to get up and running and to manage than DBaaS solutions.
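As a small illustration of the database-level configuration surface you control in a self-managed deployment, the sketch below reads a few commonly tuned PostgreSQL parameters from the pg_settings view; the connection details are placeholders, and appropriate values depend entirely on your instance size and workload.

```python
# A sketch that inspects a handful of commonly tuned PostgreSQL parameters
# via the pg_settings view; connection details are placeholders.
import psycopg2

SETTINGS = [
    "shared_buffers",
    "effective_cache_size",
    "work_mem",
    "maintenance_work_mem",
    "max_wal_size",
    "random_page_cost",
]

with psycopg2.connect(host="db.internal.example.com", dbname="appdb") as conn:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT name, setting, unit, source FROM pg_settings WHERE name = ANY(%s)",
            (SETTINGS,),
        )
        for name, setting, unit, source in cur.fetchall():
            # source = 'default' usually means the parameter has never been tuned
            print(f"{name:25} {setting} {unit or ''} (source: {source})")
```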

If you choose a DBaaS solution, your options are typically far more limited. Cloud vendors typically offer only a few instance types to choose from, give you no opportunity to tune the operating system or kernel, or to utilise different disks to optimise speed vs. cost for hot or cold data, and typically only allow limited tuning of the database server configuration. Whilst this level of control might be suitable for some applications, it may not be enough for very large or very high velocity applications.

DBaaS solutions built by third parties on top of the infrastructure provided by the cloud vendors may offer additional options, as well as the opportunity for the vendor to assist with more complex tuning needs than the big cloud vendors will. 

Deployments on Kubernetes can also vary wildly in the options available. The Kubernetes deployment itself may be provided as a service, or deployed manually onto virtual instances. The latter will give you far more control over the underlying operating system of course (such as the ability to select a specific kernel scheduler), but you still may be limited in what can realistically be tuned to meet the needs of demanding workloads — especially if other components of your application must share the same Kubernetes infrastructure. The deployment may also get particularly complex if you want to make use of tablespaces with different IO characteristics.

Weigh up the pros and cons of a deployment that is more complex to set up and maintain, but offers far more opportunities for tuning and customisation, against one that can be deployed with a few clicks, but offers little in the way of options for low-level management. Factor in the level of support available and the possibilities for getting help with advanced tuning.

 

3) What failures do you need to tolerate?

Failures are inevitable, whether due to issues with the database software, the operating system, or the underlying virtual or physical hardware on which it all runs. Cloud environments make it very easy to have geographically diverse replicas, which can also be used for offloading reporting workloads or load balancing read-only queries. More complex systems such as EDB's BDR (Bi-Directional Replication) can offer both read and write servers in multiple locations, as well as "always on" fault tolerant architectures.

For the simplest deployments, a single database instance with backups taken at a suitable interval, optionally with transaction log archiving to allow point-in-time recovery, may suffice. If your recovery time objective (RTO) is relaxed, meaning you can tolerate the time it takes to restore from a backup, this can be a relatively cheap option. If your recovery point objective (RPO), the amount of recent data you can afford to lose, is also generous, even more can be saved by not archiving transaction logs.

Where a short RTO is required, replicas are essential; and if it's crucial that no committed transactions can be lost, synchronous replication to at least one node of a three-node or larger cluster can be used.
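For example, the settings that govern point-in-time recovery and synchronous replication, along with the state of attached replicas, can be checked on the primary with a few catalog queries, as in this sketch (connection details are placeholders, and reading pg_stat_replication in full requires a superuser or a role with pg_monitor):

```python
# A sketch that reports WAL archiving and replication status on a primary;
# connection details are placeholders.
import psycopg2

with psycopg2.connect(host="primary-db.internal", dbname="postgres") as conn:
    with conn.cursor() as cur:
        # Settings that determine point-in-time recovery and synchronous commit behaviour.
        for name in ("wal_level", "archive_mode",
                     "synchronous_standby_names", "synchronous_commit"):
            cur.execute("SELECT setting FROM pg_settings WHERE name = %s", (name,))
            print(f"{name} = {cur.fetchone()[0]}")

        # Attached replicas and whether each is replicating synchronously.
        cur.execute(
            "SELECT application_name, state, sync_state FROM pg_stat_replication"
        )
        for app, state, sync_state in cur.fetchall():
            print(f"replica {app}: state={state}, sync={sync_state}")
```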

Geographic redundancy is also important in some environments. In the cloud, clusters can often be spread over multiple availability zones within a single region; you might think of this as having replicas in different buildings in the same city. This protects against localised failures such as loss of power or connectivity to one of those buildings.

Where an even higher degree of geographic redundancy is required, you might consider having one or more replicas in an entirely different region, typically in a different state, country, or continent. This can protect against widespread failure of an entire cloud region caused by a natural disaster such as an earthquake.

Consider your requirements for recovery time, recovery point, and geographical redundancy when planning your deployments. Minimising downtime, minimising the amount of possible transaction loss, and protecting against large, wide-scale failures all add to the complexity and cost of your deployment.

 

4) What are the cost implications?

It's relatively easy to estimate the cost of running a PostgreSQL instance in the cloud; providers often supply a pricing calculator to help with this. However, it is common to overlook or underestimate the cost of the storage required, which can be surprisingly high, especially when provisioned IOPS are allocated. Keep this in mind, as well as the fact that storage costs may differ between DBaaS services and plain virtual instances running PostgreSQL with attached block storage.

Data transfer between DBaaS instances and virtual instances within the same region is often free of charge; however, cross-region transfer may not be, and internet egress usually isn't (although ingress is often free). Consider what the data flows will look like in your application infrastructure, and factor the expected costs into your choice of cloud provider and application design. Network costs for database use are typically low compared to storage and compute costs (whether DBaaS or virtual instances), but they can become expensive if you have very high volumes of egress to the internet or across regions.

If you have one or more replicas, expect to multiply the cost for compute and storage by the number of nodes. Whilst some providers may offer discounted rates for replicas, the discounts are often very small. If you're building your cluster yourself and have cross-region replicas, the network traffic from the primary to the replicas may also incur additional costs.
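To make the arithmetic concrete, here is a back-of-the-envelope sketch; every rate in it is a hypothetical placeholder rather than any provider's actual pricing, and real bills include many more line items.

```python
# A back-of-the-envelope monthly cost estimate for a primary plus replicas.
# Every rate below is a hypothetical placeholder, not real provider pricing.
HOURS_PER_MONTH = 730

instance_per_hour = 0.50        # compute, per node, per hour
storage_gb_month = 0.12         # block storage, per GB per month
provisioned_iops_month = 0.065  # per provisioned IOPS per month
egress_per_gb = 0.09            # internet / cross-region egress, per GB

nodes = 3                       # one primary + two replicas
storage_gb = 1000
iops = 5000
egress_gb_per_month = 200       # e.g. cross-region replication + client traffic

compute = nodes * instance_per_hour * HOURS_PER_MONTH
storage = nodes * storage_gb * storage_gb_month
iops_cost = nodes * iops * provisioned_iops_month
egress = egress_gb_per_month * egress_per_gb

total = compute + storage + iops_cost + egress
print(f"compute ${compute:,.0f}  storage ${storage:,.0f}  "
      f"IOPS ${iops_cost:,.0f}  egress ${egress:,.0f}  total ${total:,.0f}/month")
```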

Backup storage will also incur costs, which can be dependent on the size of your database as well as the amount of change since the previous backup. This can be extremely difficult to estimate, though thankfully the costs are usually relatively small.

Finally, if you're building your own cluster using virtual instances, you may need to "Bring Your Own Licence", depending on your choice of database engine. Licensing costs are typically included in DBaaS offerings.

It can be extremely difficult to estimate the cost of running a database in the cloud. When running in your own datacenter it may be relatively simple; the cost of a number of boxes plus the power and cooling costs. In the cloud the number of variables is much, much higher, and dependent on usage patterns and application behaviours which may be very hard to predict.

 

5) How will you support your database?

Support can be critical with production systems; even with an expert team on staff, it’s unlikely they'll have the in-depth knowledge of the database engine that a specialist database company with a deep understanding of the code would have.

When using a DBaaS service, you will typically not have any access to the operating system or virtual hardware on which the database is running, which can make it difficult to diagnose and fix certain problems. In fact, you may not even have "true" superuser access to the database server. The major cloud providers also have limits to the support they will offer; typically, ensuring the database is up and running and you can connect to it is about as involved as they will get. Specialist database companies such as EDB may offer more options, such as remote DBA services, however they too will be limited due to the lack of access to the underlying platform.

If you build your own cluster on virtual instances then you have far more access; you can log in to the instances, modify kernel parameters and disk configuration, and change any aspect of the Postgres configuration you like, with or without the help of a third party support vendor. However, as noted above, this is far less convenient than DBaaS services, requiring you to build and manage your own clusters, rather than deploying them with a few mouse clicks or a simple Terraform script or Ansible playbook.

It is worth considering third party DBaaS options from vendors such as EDB. These can offer all the convenience of a DBaaS service, plus you or the vendor (or both) will be able to access the operating system of the servers on which the database is running, allowing the vendor to provide a significantly higher level of support and giving you more flexibility.

Support is a requirement for many organisations, even if only as a safety net. Consider how much database level support you will receive from the cloud provider, and how much access you and your chosen database specialists will have to the underlying systems when choosing a deployment type.

 

Conclusion

Deploying in the cloud can offer a lot of flexibility, but that often comes at the cost of supportability or just good old-fashioned money. It also requires a change in mindset for those deploying databases and applications; whilst many of the concepts of cloud environments mirror those of private environments, there are differences, and often those involved will need to understand topics that previously were handled by others.

The questions asked and answered in this blog post will help you understand some of the key considerations and tradeoffs that need to be made in order to successfully deploy databases in the cloud.

Want to learn more? See a preview of EDB Cloud.
 
