Beyond simplifying and resolving identity issues, load balancers can also handle workload balancing and cloud bursting in a hybrid cloud environment. Global load balancing capability is particularly important for this architecture.
A typical scenario is an ecommerce application running on premises, with a public cloud IaaS or PaaS service used for disaster recovery, for cloud bursting, or simply to distribute the workload depending on the location of the user.
Such a configuration could be implemented by deploying globally capable load balancers both in front of the in-house application server farm and in the cloud.
Depending on how you configure your hybrid deployment, global load balancing would be used to direct incoming traffic to the most appropriate data center. There are three possible scenarios.
Disaster Recovery/Business Continuity
Load balancers direct all incoming traffic to the on-premises application server farm until global health checking detects a data center outage. At that point all requests are directed to the cloud application service.
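The failover logic described here can be sketched in a few lines. This is an illustrative model only; the failure threshold, state structure, and target names are assumptions, not the behavior of any particular load balancer product:

```python
from dataclasses import dataclass


@dataclass
class HealthState:
    """Tracks consecutive failed health probes against the on-premises farm."""
    consecutive_failures: int = 0


# Assumed: number of probes that must fail before declaring a data center outage.
FAILURE_THRESHOLD = 3


def record_probe(state: HealthState, probe_ok: bool) -> str:
    """Update health state from one probe result and return the active target.

    All traffic stays on-premises until the failure threshold is crossed;
    then every request is directed to the cloud application service.
    """
    state.consecutive_failures = 0 if probe_ok else state.consecutive_failures + 1
    return "cloud" if state.consecutive_failures >= FAILURE_THRESHOLD else "on-prem"
```

Requiring several consecutive failures before failing over avoids flapping between data centers on a single dropped probe.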
Workload Distribution
In this scenario both the on-premises and cloud application server farms are active, and location-based load balancing directs each user's traffic to the data center in closest proximity. This can be combined with health checking to steer traffic away from a data center, server, or application that is suffering an outage for any reason.
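Combining proximity routing with health checking might look like the following sketch. The region names, data center identifiers, and static proximity table are all hypothetical:

```python
def pick_datacenter(user_region: str, healthy: dict[str, bool]) -> str:
    """Prefer the data center nearest the user, falling back to any healthy site."""
    # Assumed static proximity table mapping user regions to their nearest site.
    nearest = {"eu": "dc-eu", "us": "dc-us"}
    preferred = nearest.get(user_region, "dc-us")
    if healthy.get(preferred):
        return preferred
    # Health checking steers traffic away from a site that is down.
    for dc, ok in healthy.items():
        if ok:
            return dc
    raise RuntimeError("no healthy data center available")
```

A production global load balancer would derive proximity from DNS resolver location or measured latency rather than a static table, but the fallback logic is the same: nearest site first, any healthy site second.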
Cloud Bursting
In a cloud bursting scenario, the load balancers are configured with KPI thresholds, such as server CPU load or application response time, that determine where to direct traffic. The global load balancer sends all incoming traffic to the on-premises servers during non-peak periods. Once a configured KPI threshold is reached, the load balancers immediately direct traffic to the cloud-based servers until the on-premises traffic load returns to non-peak levels. Another option is to implement a VPN between the on-premises load balancer and the cloud-based servers, allowing traffic to be balanced to servers in the cloud should on-premises capacity be exhausted.
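The KPI-threshold decision can be sketched as follows. The specific threshold values are illustrative assumptions, not vendor defaults:

```python
def burst_target(cpu_load: float, response_ms: float) -> str:
    """Direct new traffic to the cloud once either KPI threshold is exceeded.

    cpu_load is the on-premises server CPU utilization (0.0-1.0);
    response_ms is the measured application response time in milliseconds.
    """
    CPU_THRESHOLD = 0.80       # assumed: burst above 80% CPU load
    LATENCY_THRESHOLD = 500.0  # assumed: burst above 500 ms response time
    if cpu_load >= CPU_THRESHOLD or response_ms >= LATENCY_THRESHOLD:
        return "cloud"
    return "on-prem"
```

Either KPI crossing its threshold triggers the burst, since high response time can signal saturation even while CPU load looks acceptable.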