Beating the Cloud Migration Busters

January 12, 2015

Contributed by Fred Dalrymple

This story is playing out all over the web. You have a successful web application that you’re hosting on a system that can’t keep up with the application’s growth. You know the cloud offers flexibility and scalability that will allow your application to adapt to that growth. However, you’re concerned that moving the application will be risky and interrupt the service that your customers depend on.

It’s a common conundrum, to be sure. But there are migration strategies that will ensure you’re successful. These will be covered during a webinar, Migrating a Production Database to the Cloud, on Jan. 21 (find more info or register here). Below is a summary of some of the information we’ll cover.

Clouds Rising Above the Challenge

Here at EnterpriseDB, we help companies of all sizes, in all kinds of industries, move systems from on-premises to the cloud. The cloud is gaining acceptance for a wide range of applications. For one thing, cloud security has improved significantly, so it’s not the concern it once was. And cloud platforms have begun to offer all the services you need to drive your applications. Use of the cloud is not limited to applications like informational websites and CRM tools. Even global enterprises are more comfortable moving mission-critical applications to the cloud, to join dev/test and other pilot applications that have already proven cloud viable. For their part, customers have come to expect that the systems they need are as close as their browser.

However, moving an existing production system to the cloud presents challenges beyond what’s required to deploy a new system. The source and target environments should be as similar as possible so the migration process can focus on moving the data without also having to manage system differences. The length of possible downtime has to be considered: it can range from a mild annoyance to customers to a business-killer. Further, the cloud system must equal or exceed the performance of the original system – at least if you want to keep your users happy. Perhaps the greatest challenge is to complete the migration with little or no downtime while also ensuring that no data is lost during the transition.

What options do you have for moving your data from the source to target systems? Generally, there are two choices. You can dump your database on the source system, move it to the target system, and load it there. To ensure that the resulting target system is consistent with the original, you must take the application down for the duration of the migration. In many cases, that’s just not possible.
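As a concrete sketch of the dump-and-load approach with standard PostgreSQL tooling, the sequence might look like the following. The hostnames, user, and database names here are hypothetical placeholders, not part of any real deployment:

```shell
# Hypothetical hosts and names; a minimal dump-and-load sketch.
# 1. Stop the application so no new writes arrive during the migration.

# 2. Dump the source database (custom format preserves objects for pg_restore).
pg_dump -h source-db.example.com -U appuser -Fc -f appdb.dump appdb

# 3. Copy the dump file to the cloud target host.
scp appdb.dump admin@target-db.cloud.example.com:/tmp/

# 4. Restore into the (pre-created) target database.
pg_restore -h target-db.cloud.example.com -U appuser -d appdb /tmp/appdb.dump

# 5. Repoint the application at the target and bring it back online.
```

The application stays down from step 1 through step 5, which is exactly why this window must be minimized.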

The other approach is to use streaming replication. With streaming replication, the source system remains online and continues to serve customers while content is streamed to the target system. When the target system has caught up with the source system, the source system can be shut down and the target system activated. The cutover from source to target will be significantly shorter than with the dump-and-load approach.
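With stock PostgreSQL of this era, a physical streaming standby can be seeded roughly as follows. This is a sketch under assumed names (hosts, the `replicator` role, data directory); consult your platform’s documentation for the exact procedure:

```shell
# Hypothetical hosts and credentials; sketch of seeding a streaming standby.
# On the SOURCE, allow a replication connection in pg_hba.conf:
#   host  replication  replicator  <target-ip>/32  md5
# and enable WAL streaming in postgresql.conf:
#   wal_level = hot_standby
#   max_wal_senders = 3

# On the TARGET, take a base backup that streams WAL while copying:
pg_basebackup -h source-db.example.com -U replicator \
    -D /var/lib/pgsql/data -X stream -P

# A recovery.conf on the target keeps it replaying from the source until cutover:
#   standby_mode = 'on'
#   primary_conninfo = 'host=source-db.example.com user=replicator'
```

At cutover time, writes to the source are stopped, the standby is promoted, and the application is repointed – a window of seconds to minutes rather than the hours a full dump and load can take.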

Unfortunately, not all database systems support streaming replication. Although EnterpriseDB’s Postgres Plus Cloud Database does, both ends of the migration must support streaming replication for this approach to work. Without streaming replication, dumping the source database and loading it into the target is the only choice. There are techniques for optimizing dump-and-load approaches to make them perform—theoretically—closer to streaming, but they will still interrupt customer access to the system for a longer period than with a pure streaming approach.
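One common optimization when dump-and-load is the only option is to parallelize both the dump and the restore. A sketch, again with hypothetical names, using PostgreSQL’s directory-format dumps (parallel `pg_dump -j` requires `-Fd`):

```shell
# Hypothetical names; parallel dump-and-load to shorten the downtime window.
# Directory-format dump using 4 parallel worker jobs:
pg_dump -h source-db.example.com -U appuser -Fd -j 4 -f /tmp/appdb.dir appdb

# Parallel restore with 4 jobs into the target database:
pg_restore -h target-db.cloud.example.com -U appuser -d appdb -j 4 /tmp/appdb.dir
```

Parallelism shrinks the window roughly in proportion to the number of large tables that can be processed concurrently, but the application is still offline for the whole transfer.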

Temporary Tweet Losses

There are cases when the source and target databases don’t need to be immediately consistent with each other. Twitter probably wouldn’t fail if a few thousand tweets were “lost” for a brief period of time. But working with inconsistent data for a few thousand customer payroll deposits would be a serious problem. The reality is that many of today’s applications can tolerate brief inconsistency without actually losing transactions: it may just take a little time for consistency to be achieved as the final transactions are processed after the bulk dump/load.

There are strategies to reduce the impact of a relatively longer downtime, such as scheduling the migration for off-hours (assuming you have off-hours), but it’s always worthwhile to optimize the migration to reduce the downtime.

Tuning and Setup

Once your application has been migrated to the cloud, there’s still the matter of ensuring great performance and high availability. The good news is that with the right framework – the cloud, and Postgres Plus Cloud Database in particular – database administrators can easily address those issues. Scaling up, scaling out, and automatic failover are standard features. Even if you start your new system with a more modest specification than you need, it’s easy to quickly scale up and out to give your customers a big step up in performance.

Migrating a production application from on-premises to the cloud requires careful planning and execution. While I’ve covered just a few of the issues here, the webinar on Jan. 21, Migrating a Production Database to the Cloud, will dig much deeper.

Please visit our site for more information on the webinar and how to register, or contact us for more information.

Fred Dalrymple is Product Manager, Postgres Plus Cloud Database at EnterpriseDB.
