
Load Balancing in the cloud – What’s the Real Cost?



Organizations are increasingly dependent on their customer-facing applications, making the performance and resilience of these applications paramount. Downtime and slow response times are unacceptable in an era where speed is the dominant factor.

Today, customers have limited patience. A study shows that if a web application does not load within one second, the consumer's thought process is interrupted and they switch to another website.

The criteria for finding the optimal load balancing solution go beyond selecting a vendor at a low price. The monetary cost is only one piece of the puzzle. If the wrong load balancing vendor is selected, additional costs arise from outages and from the operational burden of complexity.

The deployment model of one load balancer serving all applications limits flexibility. The cloud exists to provide flexibility, but without flexible, elastic load balancing, organizations cannot get the most benefit from it.

To overcome these challenges, KEMP Technologies offers a per-app application delivery controller (ADC) model, with licensing based on total instance throughput rather than on the number of instances deployed. This means administrators can design for a range of high-availability requirements using multiple instances while paying only for actual usage.

Application Design Variety

Traditionally, a common deployment consisted of a single application per server; installing an additional application required an additional physical server.

Eventually, with the birth of virtualization, an abstraction layer between the application and the host was established. This allowed applications to be placed in separate virtual machines (VMs) that share the same physical resources, such as CPU, RAM, and disk.

Along with the abstraction layer came multi-tier applications. A multi-tier application can be spread across different geographical areas, requiring load balancing and security services between each tier.

As the importance of applications grew, so did the need for highly available designs. The majority of applications now require redundant load balancing instances, positioned not only at the front end of the application stack but within each tier.
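To make the per-tier design concrete, the following is a minimal sketch (not KEMP's implementation) of round-robin load balancing, with one balancer instance placed at each tier boundary. The tier and backend names are hypothetical.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin balancer: hands each request to the next backend."""
    def __init__(self, backends):
        self.backends = list(backends)
        self._next = cycle(self.backends)

    def route(self, request):
        backend = next(self._next)
        return backend, request

# One balancer per tier, mirroring the multi-tier design described above.
# Names are illustrative only.
web_tier = RoundRobinBalancer(["web-1", "web-2"])
app_tier = RoundRobinBalancer(["app-1", "app-2", "app-3"])

# A request crosses a balancer at each tier boundary; here we watch the
# web tier alternate between its two backends.
targets = [web_tier.route(f"req-{i}")[0] for i in range(4)]
print(targets)  # ['web-1', 'web-2', 'web-1', 'web-2']
```

In a highly available design, each such balancer would itself be deployed as a redundant pair rather than a single instance.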

Complexity & Blast Radius

The blast radius of a failure describes how far its effects spread; limiting it prevents one application outage from affecting another. As a rule of thumb, network designs should aim for a small blast radius. This may take the form of separate failure domains at the site level, or of confining brittle Layer 2 islands to certain parts of the network.

All the care put into a highly available network and security architecture goes out the window when one load balancer serves all application requests. One large load balancer serving all applications creates one large blast radius, where a single event can impact many applications.

The central load balancer acts as a single point of failure: if it goes down, all the applications go down with it. It also becomes the central location for obsolete policies and rules. Unless the load balancer is well documented and understood, the rules continue to swell year after year, making administrative tasks a back-breaking mission.
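The difference in blast radius can be sketched with a toy model. The application and load balancer names below are hypothetical; the point is simply to count how many applications one load balancer failure takes down under each deployment style.

```python
# Toy model: map each deployment style to the applications that share a
# load balancer, then measure how many apps one LB failure takes down.
apps = ["crm", "billing", "intranet", "shop"]

shared_lb = {"lb-central": apps}                 # one big LB for everything
per_app = {f"lb-{app}": [app] for app in apps}   # one LB per application

def blast_radius(topology, failed_lb):
    """Applications impacted when the named load balancer fails."""
    return topology.get(failed_lb, [])

print(len(blast_radius(shared_lb, "lb-central")))  # 4 -> every app is down
print(len(blast_radius(per_app, "lb-billing")))    # 1 -> only billing is down
```

The per-app topology has more load balancer instances to run, but any single failure is contained to one application.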

Complexity is the number one enemy of security and networking. If the network architecture has not been fully automated from design through provisioning, the organization will always have one "network cowboy" who is the only person who knows everything.

Efficient Per-App ADC Model

The per-app ADC model breaks a single large load balancer into a number of smaller, more manageable load balancers. This design confines the outage blast radius to specific network segments, and a failure in one per-app ADC does not impact another. Segmenting the network with per-app ADCs reduces the scope of any single failure.

This simplifies administration, as you only deal with the configuration of a single application rather than all applications. The "network cowboy" becomes redundant, and the move to an efficient operational model can be made.

KEMP offers a metered license (MELA) consumption model based on total throughput, no matter how many instances are installed. For example, a 3-tier application requires redundant load balancing services between each tier, with each pair of load balancers running in active-standby mode. This differs from a traditional licensing model that charges per instance even when no traffic passes through the standby instance.

If you need to go one step further and provide site-level resilience by deploying a disaster recovery (DR) site as a backup, the benefits of MELA are even greater. Administrators can install many load balancing instances at the DR site but are only billed when a failover event occurs and traffic passes through the backup site. This makes the per-app ADC model not only user-friendly but also cost-effective.
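The cost difference can be illustrated with back-of-the-envelope arithmetic. All prices and throughput figures below are hypothetical, chosen only to show how per-instance billing penalizes idle standby instances while metered billing follows traffic.

```python
# Hypothetical comparison: per-instance licensing vs metered (throughput-based)
# licensing for a 3-tier application with active-standby pairs in each tier.
tiers = 3
instances_per_tier = 2          # one active + one standby per tier
price_per_instance = 2000       # hypothetical yearly cost per licensed instance
price_per_gbps = 1500           # hypothetical yearly cost per Gbps of throughput
actual_throughput_gbps = 2      # only the active instances carry traffic

per_instance_cost = tiers * instances_per_tier * price_per_instance
metered_cost = actual_throughput_gbps * price_per_gbps

print(per_instance_cost)  # 12000 -> standby instances are billed despite idling
print(metered_cost)       # 3000  -> billing tracks traffic, not instance count
```

Under the metered model, adding further idle instances (for example, at a DR site) leaves the bill unchanged until traffic actually flows through them.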


If administrators do not choose an optimal load balancing design and consumption model, costs can be incurred from a variety of sources. Reducing complexity with an efficient per-app ADC design limits both the damage and the associated costs.

