We are beginning to see deployments of relational database management systems on Kubernetes accelerate. In fact, a recent survey from a Kubernetes provider predicts that databases will be the ‘killer app’ for container environments in 2019.
To support our customers’ accelerated adoption, EDB Postgres™ on Kubernetes is now Red Hat container certified and available directly from Red Hat’s catalog. Red Hat OpenShift is a leading Kubernetes platform for container orchestration. If you are building cloud-native applications, you can now use EDB Postgres on Kubernetes as your general-purpose database orchestrated by Red Hat OpenShift.
Benefits of Red Hat Certification
The container certification process includes rigorous testing and validation, all of which are designed to give you peace of mind knowing that:
Postgres images come from a trusted source (Red Hat)
Images have been scanned for known vulnerabilities
Images are compatible across Red Hat platforms, including OpenShift, whether on virtual machines, bare metal, or a cloud environment
The complete stack is commercially supported by EnterpriseDB and Red Hat
Value of EDB Postgres on Kubernetes
Database management systems are stateful - they need to maintain information before, during, and after every interaction with an application. Containerizing a database is therefore not as straightforward as containerizing stateless applications. One of the big challenges with any database, and more so on Kubernetes, is maintaining high availability. Containerized databases in production generally separate the database engine from the physical database files on shared storage. Kubernetes is optimized to maintain availability of the pod: if the pod running the database service goes down, it will be restarted, but it may not spin up fast enough to meet the availability requirements of your tier 1 enterprise application.
More importantly, Kubernetes treats pods running the same service as identical. This means that if you are running a master and multiple database replicas, all the pods in the Kubernetes cluster are treated like cattle as opposed to pets. If you are not familiar with the cattle vs. pets analogy, here is a refresher. Pets are well-known servers that are indispensable or unique, cradled with care so that they are never down. Cattle are disposable servers built with automated tools; if they fail or are deleted, they can be replaced with a clone immediately. Today, for the most part, databases are pets.
With Kubernetes providing pod availability for the cluster, and treating our database containers as cattle, doesn’t this make databases highly available? No, at least not for the tier 1 enterprise applications that require four to five nines of availability--that allows roughly 4.4 minutes of unplanned downtime a month at 99.99% and 26.3 seconds a month at 99.999%. When a database server is restarted it needs to connect to storage and perform crash recovery, which together is almost guaranteed to take more time than the availability requirements allow.
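The downtime budgets above follow directly from the availability percentages. A quick sketch of the arithmetic, assuming an average month of 365.25/12 days:

```python
# Downtime budget implied by an availability target, using an
# average month of 365.25 / 12 days (about 730.5 hours).
SECONDS_PER_MONTH = 365.25 / 12 * 24 * 3600


def monthly_downtime_seconds(availability: float) -> float:
    """Seconds of unplanned downtime allowed per month at a given availability."""
    return (1.0 - availability) * SECONDS_PER_MONTH


for label, target in (("four nines", 0.9999), ("five nines", 0.99999)):
    budget = monthly_downtime_seconds(target)
    print(f"{label}: {budget:.1f} s/month (~{budget / 60:.1f} min)")
```

Against those budgets, a database restart that includes storage reattachment and crash recovery can easily blow the entire monthly allowance in a single incident.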
The classic resilient Postgres architecture with a master and multiple standby replicas allows for failover that meets availability requirements should the master become unavailable. In this architectural pattern, a replica is promoted to be the new master, client connections to the database are reestablished, and high availability is maintained with another replica while the failed master is recovered. However, Kubernetes treats pods as identical, meaning it does not distinguish between a pod running as the master and a pod running as a standby replica.
This means Kubernetes alone cannot provide high availability for databases.
Recently introduced functionality in Kubernetes addresses the storage orchestration needs of databases as stateful applications. By using Kubernetes StatefulSets, the database engine will maintain a persistent volume claim to the backend database storage, so when a pod restarts, it gets mounted to the correct physical storage and data is not lost.
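As a rough sketch of what this looks like (the names, image, and storage size below are illustrative, not EDB’s actual deployment manifest), a StatefulSet with a volumeClaimTemplate gives each database pod a stable identity and its own persistent volume claim that is re-attached when the pod restarts:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres              # illustrative name
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: registry.example.com/edb-postgres:latest   # placeholder image
        volumeMounts:
        - name: pgdata
          mountPath: /var/lib/pgsql/data
  volumeClaimTemplates:       # each pod gets its own PVC; a restarted
  - metadata:                 # pod is mounted back onto the same storage
      name: pgdata
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```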
In addition, Kubernetes anti-affinity rules make sure that your master and replicas run on different nodes, so that a common failure does not take down multiple database server containers at once and cause serious downtime. EDB Postgres on Kubernetes supports both anti-affinity and StatefulSets. But if we want databases to be like cattle and minimize downtime, how do we address the failover challenge?
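A minimal sketch of such a rule (the app: postgres label is illustrative): a podAntiAffinity term in the pod template spec tells the Kubernetes scheduler not to place two database pods on the same node:

```yaml
# Fragment of a pod template spec; the app: postgres label is illustrative.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: postgres
      topologyKey: kubernetes.io/hostname   # at most one database pod per node
```

Using a required (rather than preferred) rule means the scheduler will refuse to co-locate database pods even under resource pressure, which is usually what you want for a high-availability cluster.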
EnterpriseDB solves the high availability challenge with EDB Postgres Failover Manager (EFM). In the event that the master pod goes down, EFM recognizes the failure, whether due to hardware, network, or software, and elects the correct replica in the cluster as the new master. EFM provides continuous health monitoring for streaming replication clusters, and detects failures and automates failover based on predefined user rules to minimize downtime. EnterpriseDB has done the hard work to tightly integrate all this functionality so it works seamlessly and without manual intervention by the user.
Another major consideration is that while open source is a fountain of innovation, it can lack capabilities enterprises need. EDB Postgres Advanced Server is built on open source PostgreSQL and adds enterprise security, enhanced performance, database compatibility with Oracle®, and DBA and developer productivity features. These capabilities support organizations in two ways: building greenfield, or new cloud-native, applications entirely with containers, or replatforming existing applications.
The Future is Kubernetes
Enterprises are developing more and more cloud-native applications, so scaling, managing, and operating deployments that include Postgres will accelerate. Kubernetes is at the center of this movement and is quickly evolving to become a key part of the enterprise infrastructure. And as a trusted partner in the Red Hat ecosystem, EnterpriseDB will continue to invest in our Kubernetes offering so that OpenShift delivers on the promise of Postgres on any cloud.
Try EDB Postgres for free
Download it now from the Red Hat container registry.