Virtual machines are by far the most prevalent deployment model for data centers, accounting for close to 99 percent of deployments. What was once a revolutionary idea has become the best-practice standard for data centers. Virtualization has matured into a well-established, well-understood technology that is also extremely robust and reliable. And the largest growth in cloud deployment today is, in fact, still in virtual machines. But that is about to change.
Today’s great interest in containers is impossible to ignore, for both public cloud and on-premises deployments. It is in this space that we see lots of new applications, lots of experiments, and lots of development support — the seeds of innovation! While there is currently more development/test interest than production use, this is the area where we can expect the fastest growth.
Back in the day, the attraction of virtual machines was that they made deploying an application on a new piece of hardware very, very easy. Virtual machines isolated the application from differences in hardware, making moving and deploying an application more streamlined than it ever was on bare metal. But containers have taken this ease of deployment to the next level. The promise of containers is to make deployment even faster, give us greater flexibility, and further increase the density of applications we can deploy on existing hardware, with far less overhead than virtualization.
This ease of deployment starts with the sheer simplicity of creating a container. Putting a container on a machine and turning it on is literally a matter of less than a second. A virtual machine is more robust, yes, but it does require significantly more time to set up. It is bigger, slower, and more cumbersome, often taking 10-15 minutes to be installed and started. By contrast, a container is deployed in sub-seconds. This ease of deployment has ignited a lot of interest, although market share for containers is still relatively low.
Most of the adoption of containers so far is occurring in DevOps environments, where agile deployment of new applications is essential. Containers are strongly favored for development and pre-production environments, yet their growth in production environments is inevitable. The impact on business processes is becoming evident, because containers are quickly proving to be the ideal way to implement microservices.
In conventional service-oriented architectures, there is one database providing all services for the associated applications. Accounts receivable, accounts payable, CRM information, current orders, past orders, shipments — this is the type of information that has typically been resident in one big database. And, this design is still a relatively standard mode of operation for databases. But in this conventional architecture, organizations are required to deploy a new version of a very large database every time a new capability is introduced. Then, they have to test receivables, payables, and all the other services of the database to ensure that nothing is broken in the process. This is very difficult, and also very restrictive. Deployments cannot happen when business is being actively transacted, and they certainly can’t occur at the end of the month or end of the quarter. But, with microservices architectures, only small pieces need to be stopped and started, quickly reloaded, and rapidly deployed. By taking advantage of small services, and updating one little database on the side that only does one thing, it is possible to make database upgrades lightning fast and without disruption. Imagine the possibilities!
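To make the contrast concrete, here is a minimal sketch of what a per-service database layout might look like when deployed as containers. The service names, image, and credentials below are illustrative assumptions for this sketch, not an EDB-certified configuration:

```yaml
# docker-compose.yml — illustrative sketch only; names, image,
# and credentials are hypothetical, not an EDB-certified setup.
version: "3.8"
services:
  # Each microservice gets its own small database container,
  # so upgrading one does not require redeploying the others.
  orders-db:
    image: postgres:15          # community image used as a stand-in
    environment:
      POSTGRES_DB: orders
      POSTGRES_PASSWORD: example
  shipments-db:
    image: postgres:15
    environment:
      POSTGRES_DB: shipments
      POSTGRES_PASSWORD: example
```

With a layout like this, rolling out a new version of the shipments database is a matter of updating one image tag and restarting one small container, while the orders service keeps transacting — exactly the fast, non-disruptive upgrade path described above.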
EnterpriseDB® (EDB™) can help customers succeed with containers and microservices in a number of ways. First, the EDB Postgres™ Platform is available as a container. EDB worked closely with Red Hat to certify two Linux containers for the Red Hat Container Repository. One container is preconfigured with the EDB Postgres™ Advanced Server 9.5 database, EDB Postgres™ Failover Manager, and pgPool for load balancing; it allows the database container image to be configured as a high-availability cluster with built-in load balancing. The other container is preconfigured with EDB Postgres™ Backup and Recovery. Together, these containers offer a highly resilient, enterprise-grade database solution for deployment of the EDB Postgres™ Platform.
Second, we work closely with our customers to share best practices for how to effectively leverage containers to successfully go from big monolithic service apps to microservices. We have helped many of our customers achieve this goal, and our knowledge on this subject has been earned through real-world experience inside diverse customer environments. Has someone asked you to convert an elephant to 1,000 rabbits? We can show you how.
Finally, through our Architectural Roadmap and Solution Blueprint Service, we can help you design a blueprint – customized just for your environment – that outlines how to get from where you are today – a world where databases are large, slow, and cumbersome – to a world where small, effective, nimble deployments are the new normal.
Marc Linster, Ph.D., is Senior Vice President, Product Development, EnterpriseDB.
Before joining EnterpriseDB, Marc spent almost 4 years at Polycom, the leading maker of video communications equipment, focusing on the Services Supply Chain, Business Intelligence, Customer Data Management and Cloud Solutions. Prior to Polycom, Marc led a supply chain consulting and systems integration company working with customers in the US, Canada, France, Germany and Switzerland. This tapped the expertise he developed at Avicon Group, where he spent six years, first as chief technology officer then as vice president of operations, developing an extensive background in management, business consulting, systems integration, data management and business intelligence. Marc holds a Ph.D. (Dr. rer. nat) in Computer Sciences from the University of Kaiserslautern in Germany.