A decade ago, most companies faced financial challenges related to the bursting of the dot-com bubble. Today, we’ve just gone through what was likely the worst financial crisis we’ll see in our lifetime. Whatever the reason for the challenges, it’s clear that some technology vendors have been unable to respond to the economics of the times. From an IT perspective, the fact that budgets are staying flat or growing slowly doesn’t jibe with the expensive, ever-increasing costs of infrastructure software—the spend to keep things running, the plumbing.
Most companies spend almost 75 percent of their budget on the plumbing, and in fact, the database is typically the most expensive line item on any IT department’s software budget. This means companies don’t have enough money to spend on the future, on projects that can help them grow.
Companies are realizing that they cannot keep writing that big check for Oracle or DB2 or SQL Server. In order to effect real change, they need to do things differently.
The move from expensive, proprietary software to less expensive open source software is not a new phenomenon. After more than a decade of steadily increasing migrations, the move from Unix to Linux has become commonplace. Linux gave us ‘good enough’ technology at a fraction of the price of Unix. It also gave us confidence in a community that was intent on making rapid improvements to the technology so that, in no time, ‘good enough’ technology had become world-class, high-performance technology. We have seen a similar shift away from WebLogic and WebSphere to open source middleware solutions, too. And now, we’re seeing the third leg of the infrastructure software stool move away from expensive, inflexible database vendors to open source-based solutions.
Today’s database migration away from the bloated, costly proprietary vendors is being led by the adoption of Postgres.
Consider the rising popularity of Postgres documented by DB-Engines, which ranks database popularity using a number of different criteria. While we know that Postgres is one of the most deployed databases in the world—it’s included with virtually every Linux distribution, and it’s downloaded several million times a year—Postgres is also popular in terms of searches, jobs requiring the database skill set, and other factors. Postgres was the second-fastest-growing database in 2013, and the fastest-growing relational database. That’s proof that enterprises are moving to the open source alternative.
Market Forces at Work
What really strikes me about the DB-Engines data showing Postgres’ popularity is the fact that it’s become so popular now. Remember, Postgres and the Postgres community have been around for a long time—since the 1980s. The Postgres community is actually older than the Linux community.
What companies have been realizing over the last few years is that Postgres has many of the same inherent characteristics as Oracle or DB2. In fact, Postgres was created from the same technical white paper that Oracle used to create its first relational database management system (RDBMS).
Developers on both ends of the workload spectrum are increasingly deploying Postgres to take advantage of its enterprise-grade qualities. For small applications, including web applications, developers have begun to require more robust features as well as ease of use and rapid deployment. Large enterprise applications demand advanced features, security, scalability, and high availability for mission-critical workloads. Postgres occupies an expanding middle ground. And the emergence of new workloads and the cloud will increasingly shape future needs and require RDBMSs to expand in features and capabilities.
Traveling the Same Road
Remember when companies started migrating from Unix to Linux? They started with file and print servers, then web servers, then database servers, and then moved deeper into the data center. The idea was to gain comfort and confidence with Linux and the team’s ability to administer the servers.
The same path is logical for the database, but the database feels trickier because it houses the company’s most valuable asset—its data. So, while some companies jump in with both feet, an emerging best practice is a step-by-step progression of implementation. That is, companies are choosing to roll out a new database technology in an easy place—new application development, for example—rather than in mass migrations. The result is cutting off new spending on higher-priced database technologies, or on enterprise license agreements (ELAs), and reserving expensive licenses for the few workloads that actually justify the massive spend.
Many companies evolve past new application development quickly, and begin to replicate databases for reporting or business intelligence applications onto Postgres. Then they move into migrating and rewriting apps to achieve the really significant savings as teams gain comfort with development and administration.
With the emergence of new technologies in Postgres—JSON support, hstore, etc.—more workloads make sense for Postgres, too. So while Postgres is great for the transactional workloads it’s long been known for, it’s also well suited to web apps, data stores, reporting, NoSQL and big data projects, and data warehousing. And, with Foreign Data Wrappers and the PPAS Connector for Hadoop, even if you want to use other databases, Postgres is the perfect foundation or connection point for all of your data sources.
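To make the JSON support concrete, here is a minimal sketch of storing and querying semi-structured documents directly in Postgres; the table and column names are hypothetical, chosen only for illustration:

```sql
-- Hypothetical table: each row holds a free-form JSON document.
CREATE TABLE events (
    id      serial PRIMARY KEY,
    payload json NOT NULL
);

INSERT INTO events (payload)
VALUES ('{"type": "login", "user": "alice"}');

-- The ->> operator extracts a field as text, so JSON documents
-- can be filtered and projected with ordinary SQL.
SELECT payload->>'user' AS username
FROM events
WHERE payload->>'type' = 'login';
```

This is the kind of document-style workload that once pushed teams toward a separate NoSQL store; keeping it in Postgres means one engine, one backup strategy, and full SQL alongside the JSON data.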
Postgres seems to have appeared on the scene quickly, but this building momentum has been long in the making. Companies have begun to see that Postgres is headed in the same direction as they are, and they have partners to help guide enterprise-class development. Sure, it means writing one extra check in the data center, but when it means slashing the number of zeros on that check compared to traditional vendors, it’s a welcome price to pay.