Edge Computing and Load Balancing

We are at an inflection point in the IT and network industries. Cloud application deployment upended the previous data-center-centric model, centralizing applications and data on public and private cloud infrastructure. Now the growth in connected devices, sensors, and other Internet of Things (IoT) equipment, plus the expanding rollout of 5G mobile networks, is set to change this cloud deployment model. Not to replace it, but to augment it with a more distributed model known as edge computing.

The definition of edge computing is still emerging, as this is a rapidly growing and maturing area. The basic tenet, however, is to push discrete amounts of compute, memory, and storage out from central cloud-based data centers to a point on the network as close as possible to where data is generated. This makes sense because the amount of data created by IoT and associated technologies is enormous. Transmitting it all back to the cloud for processing is expensive in terms of both the power required and the bandwidth consumed. Edge computing aims to process data locally at the source and transmit only the results and essential data back to the central cloud systems.
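To make the pattern concrete, here is a minimal sketch of edge-side aggregation, assuming a hypothetical sensor batch format and upstream endpoint: raw readings are reduced to a small summary locally, and only that summary is shipped to the cloud.

import json
import statistics
import urllib.request

UPSTREAM_URL = "https://cloud.example.com/ingest"  # hypothetical central endpoint

def summarize(readings):
    """Reduce a batch of raw readings to the essentials worth shipping upstream."""
    values = [r["value"] for r in readings]
    return {
        "sample_count": len(values),
        "mean": statistics.mean(values),
        "min": min(values),
        "max": max(values),
    }

def forward(summary):
    """Send only the summary, not the raw batch, back to the central cloud system."""
    body = json.dumps(summary).encode("utf-8")
    request = urllib.request.Request(
        UPSTREAM_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(request, timeout=5)

# A batch of 1,000 raw readings collapses to a few dozen bytes sent upstream.
batch = [{"value": float(i % 50)} for i in range(1000)]
forward(summarize(batch))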

Edge computing is going to be a significant driver of network evolution over the next five years. Spending on edge computing is forecast to reach $9 billion per year by 2024, with 44% of that coming from North America, 28% from Europe, and 21% from the APAC region. The rollout of 5G will drive much of this expansion, as edge computing technologies are essential to delivering the local high-bandwidth, low-latency network provision that 5G requires.

So, we are going to see a shift in how the networks that deliver applications and services are configured. This raises the question: Does the trend towards edge computing impact how load balancing technology is applied?

Edge Computing Infrastructure

One of the emerging trends in the delivery of edge computing is the Edge Data Center. These are small, local, and highly focused data centers located out towards the edge of the network. They aggregate data from IoT and other devices in a local area, process it, and send the results on to other, larger edge data centers and ultimately to the cloud-hosted central systems. Edge data centers are analogous to electricity substations, which push power provision as close to those consuming it as possible.

These edge data centers typically contain industry-standard hardware running virtual machines and container technology, such as Docker, which allows them to provide different applications and services dynamically as needs change. In essence, they are small local data centers. Multiple edge data centers can be grouped into a virtual edge data center and treated as a single entity, which might be useful for covering a city district or suburb.
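As an illustration of that dynamic provisioning, the sketch below uses the Docker SDK for Python to start and stop an application container on edge hardware. The image name, port mapping, and container name are hypothetical, and a real edge platform would normally drive this through an orchestrator rather than direct SDK calls.

import docker

client = docker.from_env()

# Launch an instance of a (hypothetical) analytics service when local demand
# appears, mapping container port 8080 to the edge host.
container = client.containers.run(
    "example/iot-analytics:latest",
    detach=True,
    ports={"8080/tcp": 8080},
    name="iot-analytics-1",
)
print(f"Started {container.name} ({container.short_id})")

# When demand subsides, reclaim the resources for other workloads.
container.stop()
container.remove()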

Anyone accustomed to load balancing and application delivery will probably notice something familiar in this high-level description of edge data centers: they are ripe for proper load balancing to manage and shape the data traffic coming in from devices.

Load Balancing Edge Computing Infrastructure

Given the high-level outline of edge computing above, there is clearly still a need to manage the traffic flowing to the virtual machines and container-based applications deployed in edge data centers. The IoT and 5G world we are entering demands continuous service and the lowest possible response latency, which is precisely what load balancers help deliver. Each edge data center should run multiple application instances, with traffic managed by load balancers that spread the load and keep individual servers from becoming overloaded.
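The sketch below illustrates the core idea rather than any particular product's implementation: requests go to the healthy application instance with the fewest active connections. The instance addresses are hypothetical, and a production load balancer adds health checking, session persistence, TLS offload, and much more.

from dataclasses import dataclass

@dataclass
class Instance:
    address: str
    healthy: bool = True
    active_connections: int = 0

class LeastConnectionsBalancer:
    def __init__(self, instances):
        self.instances = instances

    def pick(self):
        """Return the healthy instance currently handling the least traffic."""
        candidates = [i for i in self.instances if i.healthy]
        if not candidates:
            raise RuntimeError("no healthy instances in this edge data center")
        chosen = min(candidates, key=lambda i: i.active_connections)
        chosen.active_connections += 1
        return chosen

balancer = LeastConnectionsBalancer([
    Instance("10.0.0.11:8080"),
    Instance("10.0.0.12:8080"),
    Instance("10.0.0.13:8080", healthy=False),  # failed instance is skipped
])

for _ in range(4):
    print(balancer.pick().address)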

The load balancers in an edge data center can also use Global Server Load Balancing (GSLB) to work with peers in other edge data centers and in the cloud, spreading the load across multiple sites in a region as required to maintain the best response times.
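Conceptually, GSLB answers each client lookup with the healthy site offering the best measured proximity, failing over when a site goes down. The sketch below captures that decision in miniature; the site names and latency figures are hypothetical, and real GSLB typically works through DNS responses backed by continuous health and latency measurements.

SITES = {
    "edge-dc-north": {"healthy": True, "latency_ms": 4.0},
    "edge-dc-south": {"healthy": True, "latency_ms": 9.0},
    "cloud-region-1": {"healthy": True, "latency_ms": 35.0},
}

def resolve(hostname):
    """Pick the best site to answer a lookup for `hostname`."""
    healthy = {name: site for name, site in SITES.items() if site["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy sites available")
    return min(healthy, key=lambda name: healthy[name]["latency_ms"])

print(resolve("app.example.com"))        # edge-dc-north: lowest latency
SITES["edge-dc-north"]["healthy"] = False
print(resolve("app.example.com"))        # edge-dc-south: automatic failover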

Conclusion

So what’s the answer to that question: Does the trend towards edge computing impact how load balancing technology is applied?

Yes, the trend towards edge computing will impact how load balancing is applied. Load balancers will need to migrate out towards the edge along with the applications they serve. If anything, the requirement for the lowest possible latency for certain 5G services at the edge makes efficient load balancing away from the center even more critical.

Kemp Technologies