Today, more than ever, organizations are moving to a virtualization model, even for network functions and services that were previously deployed only on physical hardware as traditional appliances. Because of this, a number of technology vendors have embraced the trend and are focusing on providing virtualized network and infrastructure appliances.
At VMworld Europe 2014 in Barcelona I stopped off at the Kemp Technologies booth where I was introduced to the LoadMaster family of virtual load balancers. I sometimes have to deal with designs and deployments where load balancers are used and was therefore interested to see what Kemp Technologies had to offer in the load balancer market.
Before I left the Kemp booth, I grabbed a brochure and made a note to download a trial version of one of the virtual LoadMaster appliances.
The Kemp virtual load balancers are called Virtual LoadMasters (VLM) and are virtual appliances shipped in a variety of formats so that they can be easily deployed to your existing virtual infrastructure or cloud platform. Although this paper is based on an appliance for vSphere, Kemp Virtual LoadMaster appliances can be used in environments based on a range of hypervisors and cloud platforms, including VMware, Hyper-V, KVM, Xen and Oracle VirtualBox, as well as VMware vCloud Air, Microsoft Azure and Amazon Web Services (AWS).
Kemp offers four Virtual LoadMaster appliance models, ranging from the entry-level VLM-200 to the massive VLM-10G. Since Kemp offers a 30-day unrestricted trial license for the appliance of your choice, I decided not to download the smallest appliance; most of my customers would probably go for a medium-sized appliance rather than an entry-level one.
The following table details the different Virtual LoadMaster models that Kemp has to offer:
Virtual LoadMaster Model | VLM-200 | VLM-2000 | VLM-5000 | VLM-10G |
---|---|---|---|---|
Hypervisors & Cloud Platforms Supported | | | | |
VMware vCloud Air | Yes | Yes | Yes | Yes |
Microsoft Azure | Yes | Yes | Yes | Yes |
Amazon Web Services (AWS) | Yes | Yes | Yes | Yes |
VMware, Hyper-V, KVM, Xen, Oracle VirtualBox | Yes | Yes | Yes | Yes |
Specifications | | | | |
Balancer Throughput License | Up to 200 Mbps | Up to 2 Gbps | Up to 5 Gbps | Up to 10 Gbps |
SSL TPS License | Up to 200 | Up to 1,000 | Up to 10,000 | Up to 12,000 |
Layer 4 concurrent connections | 3,000,000 | 3,000,000 | 3,000,000 | 3,000,000 |
Max Servers Supported / Virtual Clusters | 1000/256 | 1000/1000 | 1000/1000 | 1000/1000 |
Web Application Firewall Pack (AFP) | Yes | Yes | Yes | Yes |
TMG Replacement (SSO, Pre-Authentication, Security Logging) | Yes | Yes | Yes | Yes |
GSLB Multi-Site Load Balancing | Yes | Yes | Yes | Yes |
Based on the specifications above, I opted to test the VLM-5000 for the purposes of this paper. It is capable of up to 5 Gbps of throughput, 10,000 SSL TPS and 3,000,000 Layer 4 concurrent connections.
On the Kemp Technologies website, the VLM-5000 is introduced as a small virtual appliance that runs a Linux distribution. Out of the box, it comes configured with two vCPUs, 1 GB of RAM, two VMXNET3 virtual network adapters and a single 16 GB VMDK.
When thinking about a load balancer, most people probably default to traditional “web services”, i.e. distributing traffic across multiple web servers. The VLM-5000 does that, and does it pretty well as you will see later; even when I targeted it with an in-house Denial of Service (DoS) attack, it managed to carry the load without breaking a sweat. However, what if your requirements are more demanding than simple HTTP/HTTPS load balancing? From a features point of view, the VLM-5000 boasts quite a few other capabilities that can satisfy a variety of needs, including the Web Application Firewall Pack (AFP), TMG replacement features (SSO, pre-authentication and security logging) and GSLB multi-site load balancing.
The download process is very simple: head over to the download website, select the format in which you would like to download the appliance and your geographic location, agree to the usage terms and click the “Download” button. A new webpage will open where the download of the 61 MB file, rather tiny by today's software standards, will begin. The page also provides instructions on how to import the appliance into your environment and how to activate your trial license.
After following the standard virtual appliance deployment procedure for vSphere, I simply had to browse to the appliance's management IP address to begin configuring it.
The first step was to apply the newly issued trial license that was emailed to me. The license can be applied in offline or online mode. For this paper, I opted for online mode, as it simply requires you to enter your Progress ID (the email address you used to register for the trial license) and password; the license available on your Kemp account is then applied to the appliance. Once the credentials and license have been accepted, the appliance is ready for use.
To configure and use the appliance, we log into its web user interface. Once logged into the management web interface, I was able to quickly configure the initial appliance settings with ease. With all of this configured, the VLM-5000 is essentially ready to place in front of application servers.
In order to perform some basic tests, I needed multiple servers to actually distribute some traffic across; there's no point in having a load balancer if you haven't got anything behind it. One easy test is a simple web server deployment where multiple systems host identical content. The load balancer is then responsible for balancing client requests across the web servers in turn, based on the configured load balancing policies and rules.
For this test, we configured three new Apache web servers running on CentOS 6.5 virtual machines on vSphere. We then configured the VLM-5000 with a virtual service listening on a virtual IP address that forwards requests to each web server in the pool based on a round robin load balancing policy.
The following components were configured for this test:
Server/Service Name | IP Address | Purpose/Description |
---|---|---|
LBWebService | 192.168.1.184 | Virtual IP address attached to a web load balancing service provided by the Kemp VLM-5000 |
LBWEB01 | 192.168.1.181 | Apache WebServer (HTTP) |
LBWEB02 | 192.168.1.182 | Apache WebServer (HTTP) |
LBWEB03 | 192.168.1.183 | Apache WebServer (HTTP) |
LBWEBCLIENT1 – LBWEBCLIENT10 | Various (DHCP issued) | 10x CentOS 6.5 servers used as clients, performing load generation tasks |
For a load balancer to work, it needs to make an IP address available and listen on that address for requests. That IP address belongs to a virtual service, which is configured to forward the requests it receives on its virtual IP address to “Real Servers”, such as the three Apache web servers in our environment. The type of traffic the virtual service listens for, forwards on and responds to depends on the settings configured when it is created. In our test, we needed a virtual service that listens for HTTP/HTTPS requests and forwards them to the individual Apache web servers, which actually process and respond to the clients. In our environment, as is most often the case, responses are also configured to return via the virtual service.
To create a new virtual service for our load balancer to listen on, we navigate to Virtual Services -> Add New. This allows us to specify a virtual IP address (192.168.1.184) for the new virtual service as well as a port to listen on (80), an optional Service Name (LBWebService) and the protocol (TCP). With all the fields completed, clicking on the “Add this Virtual Service” button will then open up the next configuration page where we can configure some additional settings. At this point, we left all virtual service settings at their defaults just to see how little configuration is actually required to get a basic HTTP/HTTPS service successfully up and running.
Now that we have a virtual service up and running (simple testing completed by pinging 192.168.1.184), we need to add our three “Real” Apache web servers as members of the virtual service. To add a real server, we simply open the virtual service's properties and add each web server's IP address and port (80) under its Real Servers section.
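The same configuration could also be scripted rather than clicked through the web interface. The sketch below is purely illustrative: it assumes the LoadMaster RESTful API exposes /access/addvs and /access/addrs calls with the parameter names shown, and the management address, account name and password are hypothetical placeholders.

```php
<?php
// Illustrative sketch only: create the virtual service and add the three real
// servers via the LoadMaster RESTful API. The endpoints, parameter names,
// management address and credentials below are assumptions, not taken from the test.
$lm   = 'https://192.168.1.180';   // hypothetical LoadMaster management address
$user = 'bal';                     // administrative account (placeholder)
$pass = 'password';                // placeholder password

function lmCall($url, $user, $pass) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_USERPWD, "$user:$pass");
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // appliance ships a self-signed cert
    $response = curl_exec($ch);
    curl_close($ch);
    return $response;
}

// Create the virtual service: VIP 192.168.1.184, port 80, TCP
lmCall("$lm/access/addvs?vs=192.168.1.184&port=80&prot=tcp", $user, $pass);

// Add each Apache web server as a real server behind the virtual service
foreach (array('192.168.1.181', '192.168.1.182', '192.168.1.183') as $rs) {
    lmCall("$lm/access/addrs?vs=192.168.1.184&port=80&prot=tcp&rs=$rs&rsport=80", $user, $pass);
}
```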
The virtual service is configured by default to load balance web requests using round robin. That means that if web server 1 has just served the last request, web server 2 is next in line to serve the following request, followed by web server 3, and so on. To verify this behavior, I wrote a quick PHP script and saved it as index.php in each web server's webroot directory. The index.php file is included below for reference:
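What follows is a minimal reconstruction of that script. The exact arithmetic used to work the random value down below 3 is an assumption; any CPU-bound reduction loop would serve the same purpose.

```php
<?php
// Reconstructed sketch of the test script. Prints which server handled the
// request, then burns a little CPU by working a random value down below 3.
$count = 0;
$c = rand(10000, 250000);   // later reduced to rand(100, 2500) for the heavier tests
while ($c >= 3) {
    $c = $c / 1.00001;      // assumed reduction step: divide down slowly toward 3
    $count++;
}
$server = php_uname('n');   // this web server's hostname, e.g. LBWEB01
?>
<html>
<head>
    <!-- reload the same URL every second, sending a fresh request to the virtual service -->
    <meta http-equiv="refresh" content="1">
    <title>Load balancer round robin test</title>
</head>
<body>
    <p>This request was processed by: <?php echo $server; ?></p>
    <p>Calculations performed to get below 3: <?php echo $count; ?></p>
</body>
</html>
```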
The simple PHP script above writes “This request was processed by: <WebServer Name>” to the returned web page and then performs a series of calculations to generate a little CPU load on the web server, before returning the number of calculations it took to get below 3. If the load balancer is working correctly in round robin mode, we should see each page served by a different web server in sequence, starting with LBWEB01 and ending with LBWEB03, before returning to LBWEB01 on the fourth page refresh. To avoid having to refresh the page manually, the PHP file also contains a directive in the <head> section that redirects the page to the current URL every second, effectively submitting a new request to the virtual service at 1-second intervals.
The screenshots below show that even though the URL remains the same, the actual web server that processed each request differs, which means that round robin load balancing is working as expected:
Figure 1 - Load Balanced Responses
In order to test the VLM-5000's resilience, a high volume of user traffic was also simulated. There are several tools available to achieve this; for this test, we used JMeter to flood the VLM-5000's virtual IP address with HTTP requests. JMeter is a Java-based tool that generates load on a web server by simulating multiple users (threads) and measuring response times for the requests.
As simulating multiple users requesting multiple pages at a single point in time is quite taxing on the machine generating the load, I decided to launch JMeter from two different workstations. As part of the testing, I configured JMeter with anything from 100 to 10,000 users per instance, issuing 100,000 requests in 60 seconds. However, when generating such a heavy load, it quickly became apparent that pushing the load balancer and web servers to the point where they noticeably struggle is actually quite difficult; the bottleneck turned out to be the client machines, which struggled the most just generating the load. For the heavier tests, the number of concurrent users per JMeter instance was therefore throttled down to 500 in order to maintain stability on the clients.
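As a rough illustration of what such a load generator is doing (a stand-in sketch, not the JMeter configuration used in the test), a batch of concurrent requests can be fired at the virtual IP with PHP's curl_multi API:

```php
<?php
// Stand-in load generator sketch: issue $workers simultaneous HTTP requests
// against the virtual service, roughly mimicking one batch of JMeter threads.
$vip     = 'http://192.168.1.184/index.php';
$workers = 100;   // "concurrent users" in this batch

$mh      = curl_multi_init();
$handles = array();
for ($i = 0; $i < $workers; $i++) {
    $ch = curl_init($vip);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// Drive all transfers to completion, waiting on the sockets between passes
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh);
    }
} while ($running && $status === CURLM_OK);

foreach ($handles as $ch) {
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
```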
The index.php script used to generate load on the web servers was also modified to reduce the complexity of the mathematical problem solved on each run. In the first version of the script, the web server had to select a random number between 10,000 and 250,000 ($c = rand(10000,250000);) and then carry out calculations on that number to get below 3. However, even when loading the web servers with just 200 requests per second, all three would simply max out their CPUs and things would grind to a halt. As a result, I reduced the complexity to $c = rand(100,2500);, which still takes a bit of CPU power per transaction but doesn't end up breaking the web servers when the load is increased.
After hitting the VLM-5000 virtual service with various load sizes and requesting content from the web servers behind it, and despite the client and server performance issues that had to be addressed along the way, the load balancer just kept operating without a hitch. In fact, the web servers behind the VLM-5000 went offline several times, as the traffic load was at times simply too heavy for them to manage, but the load balancer stayed online with a maximum recorded CPU utilization of 28% while handling 1,000 simulated users issuing a combined 69,000 requests per minute for the PHP file. While beyond the scope of this test, the LoadMaster could also easily be configured to throttle requests in real-world scenarios.
The VLM-5000 management interface provides real-time information on the current connection rate (connections per second), along with details of how those connections are distributed. The following figure shows that with JMeter running and issuing 889 HTTP requests per second to the VLM-5000, each web server was evenly receiving 296 connections per second (889 ÷ 3 ≈ 296).
After increasing the load generated by JMeter, the VLM-5000 was managing 1,679 connections per second whilst keeping the connections balanced across all web servers.
For the load balancer to accept and forward requests, it requires CPU and memory resources. The figure below shows that while handling 1,679 connections per second, the total CPU utilization on the VLM-5000 was only 16%.
The Kemp VLM-5000 is a great option for those looking to deploy a virtual application delivery solution. Its key benefit is that, despite being virtual, it still provides all of the features of its mainstream hardware counterparts, combined with the flexibility and scalability advantages of virtualization. The ongoing move into the software-defined cloud era will only continue to drive the adoption of practical, scalable solutions like this.
Rynardt Spies is a cloud, virtualization and automation consultant with 10 years of experience in the virtualization industry. His main focus today is on private and hybrid cloud infrastructures.
Rynardt is a frequent contributor to the VMware Technology Network (VMTN) and has been an active blogger on virtualization and other IT-related topics since April 2008. For his contributions to the VMware virtualization community, he has been named a VMware vExpert for 2009, 2010, 2013 and 2014.
Rynardt holds VCP (VMware Certified Professional) certifications for VMware Virtual Infrastructure 3, vSphere 4 and vSphere 5. He also holds both the administration and design VCAP (VMware Certified Advanced Professional) certifications on vSphere 4.
Rynardt maintains a virtualization-focused blog at http://www.virtualvcp.com and is active on Twitter at @rynardtspies.