Businesses are moving their applications and sites to the cloud, assuming that the ubiquitous cloud will ensure application and data availability. Recently, Europe’s largest cloud provider, and the world’s third largest, suffered a fire that destroyed one of its 27 global data centers. Fortunately, no one was hurt, and the local authorities brought the fire under control quickly.
The downside is that approximately 3.4 million sites went offline, and all of the data stored within the facilities is now completely lost. Businesses including ecommerce, banking, news services, and even government websites went dark. It is estimated that restoring some functionality at this site for the other connected facilities will take two weeks or more, and migrating affected customers to other locations, which have limited spare capacity, will take considerable effort.
For these 3.4 million sites and their owners, the extended downtime is hard to imagine. Even harder to contemplate is the process of rebuilding their sites and trying to recreate all of the lost data.
This is exactly the type of disaster that technologists have in mind when they create disaster recovery plans. It is important to develop strategies that eliminate single points of failure and mitigate risk when components fail. But the allure of the ubiquitous cloud creates a fog of complacency: an assumption that the cloud provider will handle the availability of anything thrown into it.
The cost of recovering from this disaster will far exceed what preventative measures would have cost had they been implemented. It is hard for a business to comprehend and absorb this type of event when it happened to someone else and their sites ended up as collateral damage.
Load balancing for always-on service
Load balancing technologies address application availability and can keep a site reachable even in the most catastrophic situations. Global Server Load Balancing (GSLB) technology enables automated multi-site availability and redundancy. GSLB allows end users to reach an available instance of the application when a failure occurs. Users will not even know that their requests were routed to a different facility. Even a major data center fire will not take an application offline.
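The failover decision at the heart of GSLB can be sketched simply: health-check each site, then direct users to the highest-priority site that is still healthy. The following is a minimal illustration of that logic only, not Kemp’s actual implementation; the site names and health states are hypothetical.

```python
# Sketch of GSLB-style failover: pick the first healthy site in priority order.
# Real GSLB products do this via DNS responses backed by continuous health checks.

def pick_site(sites):
    """Return the name of the first healthy site.

    `sites` is an ordered list of (name, is_healthy) pairs: the primary
    data center first, then disaster recovery sites.
    """
    for name, is_healthy in sites:
        if is_healthy:
            return name
    return None  # every site is down: nothing left to route to

# Normal operation: users land on the primary data center.
sites = [("dc-primary", True), ("dc-backup", True)]
assert pick_site(sites) == "dc-primary"

# Primary lost (e.g. a data center fire): traffic transparently
# fails over to the backup site, and users never notice.
sites = [("dc-primary", False), ("dc-backup", True)]
assert pick_site(sites) == "dc-backup"
```

Because the selection happens before the user’s connection reaches any one facility, the failover is invisible to the end user, which is the property described above.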
For all businesses, it is important to treat load balancing technologies as an essential service in their IT architectures. Combined with duplicated servers and database replication to disaster recovery (DR) sites, load balancing will help prevent future pain.
Kemp can help
Kemp focuses on load balancing technologies to ensure application availability, scalability, resiliency, and security. We work with many companies to build disaster recovery plans and implement IT architectures to support their environments.
Texas’ Harris County District Attorney’s Office is one example: Kemp worked with them to develop a strategy to ensure their IT environment never went down, even in the event of a hurricane flooding the data center in the basement of one of their buildings. Earlier, we sat down with their IT administrator for a conversation about the disaster recovery project and its impact on the office’s IT availability.
If you are one of the affected site owners and are looking to rebuild with a disaster recovery plan in place, or you have a site in the cloud but have not yet considered disaster recovery scenarios, I strongly recommend looking at load balancing solutions to keep your business online and always on. You can download a free trial with full access to the world-class Kemp support team at https://kemp.ax/try.