
A vExpert Load Test of the Kemp VLM-5000


Introduction

Today, more than ever, organizations are moving to a virtualization model, even for network functions and services that were previously deployed only as physical hardware appliances. A number of technology vendors have embraced this trend and now focus on providing virtualized network and infrastructure appliances.

At VMworld Europe 2014 in Barcelona I stopped off at the Kemp Technologies booth where I was introduced to the LoadMaster family of virtual load balancers. I sometimes have to deal with designs and deployments where load balancers are used and was therefore interested to see what Kemp Technologies had to offer in the load balancer market.

Before I left the Kemp booth, I grabbed a brochure and made a note to download a trial version of one of the virtual LoadMaster appliances.

Kemp Virtual LoadMaster Appliances Overview

The Kemp virtual load balancers are called Virtual LoadMasters (VLM) and are virtual appliances shipped in a variety of formats to be easily deployed to your existing virtual infrastructure or cloud platform. Although this paper is based on an appliance for vSphere, Kemp Virtual LoadMaster appliances can be used in environments based on a range of hypervisors and cloud platforms, including:

  • VMware, Hyper-V, KVM & Oracle VirtualBox
  • VMware vCloud Air
  • Microsoft Azure
  • Amazon Web Services (AWS)

Additionally, all Kemp load balancers, whether physical or virtual, are shipped with the following key features:

Features

  • Layer 4/7 Load Balancing
  • Content Switching
  • Caching and Compression Engine
  • MS Exchange 2010/2013 Optimized
  • Pre-configured virtual service templates
  • IPS Engine
  • High Availability
  • TMG Replacement Included
  • GSLB Multi-Site Load Balancing
  • RESTful API

Supported Applications

  • VMware Horizon Suite
  • Virtual Desktop Infrastructure
  • CRM Systems
  • Apache Server
  • Oracle E-Business Suite
  • Oracle WebLogic
  • IBM WebSphere
  • Mobile Device Management
  • Unified Communications
  • Intranet Applications

Supported Microsoft Workloads

  • Exchange 2010 / 2013
  • Lync 2010 / 2013
  • Remote Desktop Services
  • SharePoint
  • Dynamics
  • Office Web Apps
  • Azure-Hosted Workloads
  • ADFS
  • IIS
  • Custom .Net Applications

Choosing a Virtual LoadMaster Model

Kemp offers four Virtual LoadMaster appliance models, ranging from the entry-level VLM-200 to the massive VLM-10G. I decided not to download the smallest appliance, as most of my customers would probably go for a medium-sized appliance rather than an entry-level one. After all, Kemp offers a 30-day unrestricted trial license for the appliance of your choice.

The following table details the different Virtual LoadMaster models that Kemp has to offer:

Virtual LoadMaster Model | VLM-200 | VLM-2000 | VLM-5000 | VLM-10G

Hypervisors & Cloud Platforms Supported
VMware vCloud Air | Yes | Yes | Yes | Yes
Microsoft Azure | Yes | Yes | Yes | Yes
Amazon Web Services (AWS) | Yes | Yes | Yes | Yes
VMware, Hyper-V, KVM, Xen, Oracle VirtualBox | Yes | Yes | Yes | Yes

Specifications
Balancer Throughput License | Up to 200 Mbps | Up to 2 Gbps | Up to 5 Gbps | Up to 10 Gbps
SSL TPS License | Up to 200 | Up to 1,000 | Up to 10,000 | Up to 12,000
Layer 4 Concurrent Connections | 3,000,000 | 3,000,000 | 3,000,000 | 3,000,000
Max Servers Supported / Virtual Clusters | 1000/256 | 1000/1000 | 1000/1000 | 1000/1000
Web Application Firewall Pack (AFP) | Yes | Yes | Yes | Yes
TMG Replacement (SSO, Pre-Authentication, Security Logging) | Yes | Yes | Yes | Yes
GSLB Multi-Site Load Balancing | Yes | Yes | Yes | Yes

Based on the specifications above, I opted to test the VLM-5000 for the purposes of this paper. It is capable of up to 5 Gbps of throughput, 10,000 SSL TPS and 3,000,000 concurrent Layer 4 connections.

A Closer Look at the VLM-5000

On the Kemp Technologies website, the VLM-5000 is introduced as a small virtual appliance that runs a Linux distribution. Out of the box, it comes configured with 2 vCPUs, 1 GB of RAM, two VMXNET3 virtual network adapters and a single 16 GB VMDK.

When thinking about a load balancer, most people probably default to traditional web services, i.e. distributing traffic across multiple web servers. The VLM-5000 does that, and does it well, as you will see later; even when I targeted it with an in-house Denial of Service (DoS) attack, it managed to carry the load without breaking a sweat. However, what if your requirements are more demanding than simple HTTP/HTTPS load balancing? From a features point of view, the VLM-5000 boasts quite a few other capabilities that can satisfy a variety of needs:

  • L7 Support for TCP and UDP protocols
  • SSL Termination/Offload
  • Layer 7 Content Switching
  • Server and Application Health Checking
  • Advanced, App-Transparent Caching Engine for HTTP/HTTPS protocols
  • Optimized Compression for Static and Dynamic HTTP/HTTPS Content
  • Layer 7 Intrusion Prevention System (IPS), SNORT-Rule (HTTP) Compatible
  • NAT-based forwarding
  • Support for Direct Server Return (DSR) configurations
  • Remote Desktop Services integration with built-in RD Session Reconnection functionality
  • Configurable S-NAT support
  • Web Application Firewalling
  • Single Sign-On and Pre-Authentication
  • GSLB Add-On Support
  • Kemp Web Application Firewall Pack (AFP) Engine

Obtaining a Trial

The download process is very simple: head over to the download page, select the format in which you would like to download the appliance and your geographic location, agree to the usage terms, and click the “Download” button. A new page then opens where the download begins; at 61 MB, the file is tiny by today’s software download standards. The page also provides instructions on how to import the appliance into your environment and how to activate your trial license.

Simple Configuration of the Appliance

After following the standard virtual appliance deployment procedure for vSphere, I simply had to browse to the appliance’s management IP address to begin configuring it.

The first step was to apply the newly issued trial license that was emailed to me. The license can be applied in offline or online mode. For this paper, I opted to use online mode, as it simply requires you to enter your Kemp ID (the email address you used to register for the trial license) and password; the license available on your Kemp account is then applied to the appliance. Once the credentials and license have been accepted, the appliance is ready for use.

To configure and use the appliance, we need to log into its web user interface using the following steps:

  1. Browse to the appliance’s management IP address in a supported browser
  2. Log in with the default credentials of:
    1. Username: bal
    2. Password: 1fourall
  3. Change the bal user’s password to something more secure

Once logged into the management web interface, I was able to quickly configure the following settings with ease:

  • Static IP Address and Subnet Mask for eth0
    • Changed via System Configuration -> Interfaces -> eth0
  • Hostname
    • Changed via System Configuration -> Local DNS Configuration -> Hostname Configuration
  • DNS Servers and search domain
    • Changed via System Configuration -> Local DNS Configuration -> DNS Configuration
  • Default Gateway
    • Changed via System Configuration -> Route Management -> Default Gateway
  • Date, Time, reference NTP server, and Time Zone
    • All configured via System Configuration -> System Administration -> Date/Time

With these settings in place, the VLM-5000 is essentially configured and ready to be placed in front of application servers.

Load Testing the VLM-5000

In order to perform some basic tests, I needed multiple servers to actually distribute some traffic across as there’s no point in having a load balancer if you haven’t got anything behind it. One easy test is a simple web server deployment where multiple systems host identical content. The load balancer will then be responsible for balancing the client requests to each webserver in turn, based on the load balancing policies and rules.

For this test, we configured three new Apache web servers running on CentOS 6.5 virtual machines on vSphere. We then configured the VLM-5000 with a virtual service listening on a virtual IP address that forwards requests to each web server in the pool, based on a round robin load balancing policy.

The following components were configured for this test:

Server/Service Name | IP Address | Purpose/Description
LBWebService | 192.168.1.184 | Virtual IP address attached to the web load balancing service provided by the Kemp VLM-5000
LBWEB01 | 192.168.1.181 | Apache web server (HTTP)
LBWEB02 | 192.168.1.182 | Apache web server (HTTP)
LBWEB03 | 192.168.1.183 | Apache web server (HTTP)
LBWEBCLIENT1 – LBWEBCLIENT10 | Various (DHCP issued) | 10x CentOS 6.5 servers used as clients, performing load generation tasks

Configuring the VLM-5000 with a Virtual Service

For a load balancer to work, it needs to make an IP address available and listen on that address for requests. That IP address belongs to a virtual service, which is configured to forward the requests it receives on to “Real Servers” such as the three Apache web servers in our environment. The type of traffic the virtual service listens for, forwards on, and responds to depends on the settings configured when it is created. In our test, we needed to configure a virtual service to listen for HTTP/HTTPS requests and to forward them to the individual Apache web servers that actually process and respond to the clients. In our environment, as is most often the case, responses are also configured to return via the virtual service.

To create a new virtual service for our load balancer to listen on, we navigate to Virtual Services -> Add New. This allows us to specify a virtual IP address (192.168.1.184) for the new virtual service as well as a port to listen on (80), an optional Service Name (LBWebService) and the protocol (TCP). With all the fields completed, clicking on the “Add this Virtual Service” button will then open up the next configuration page where we can configure some additional settings. At this point, we left all virtual service settings at their defaults just to see how little configuration is actually required to get a basic HTTP/HTTPS service successfully up and running.

Now that we have a virtual service up and running (simple testing completed by pinging 192.168.1.184), we need to add our three “Real” Apache web servers as members of the virtual service. To add a real server, we simply follow the steps below (an equivalent, scripted approach using the RESTful API is sketched after the list):

  1. On the VLM-5000 management user interface, click through Virtual Services -> View/Modify Services
  2. Click the “Modify” button of the appropriate virtual service and expand the “Real Servers” section.
  3. Under “Real Servers” click the “Add New…” Button
  4. Enter the Real Server Address of the 1st Apache web server (192.168.1.181) and the Port (80)
  5. Click “Add This Real Server”
  6. Repeat these steps for each Real Server to be added
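For repeatable deployments, the same steps can also be scripted against the LoadMaster’s RESTful API (listed in the feature set earlier). The sketch below is purely illustrative: the endpoint names (/access/addvs, /access/addrs), parameters and management address are assumptions rather than verified commands, so check Kemp’s API documentation before relying on them.

<?php
// Illustrative sketch only: creating the virtual service and real servers via
// the LoadMaster RESTful API. Endpoint names and parameters are assumptions;
// consult Kemp's API documentation for the exact interface.
$loadmaster = 'https://192.168.1.180';      // hypothetical management address
$auth       = 'bal:YourSecurePassword';     // the bal user and its new password

function lm_call($loadmaster, $auth, $path, array $params)
{
    $ch = curl_init($loadmaster . $path . '?' . http_build_query($params));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_USERPWD, $auth);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // trial appliance uses a self-signed certificate
    $response = curl_exec($ch);
    curl_close($ch);
    return $response;
}

// Create the virtual service on 192.168.1.184:80 (assumed endpoint)
lm_call($loadmaster, $auth, '/access/addvs',
        array('vs' => '192.168.1.184', 'port' => 80, 'prot' => 'tcp'));

// Add the three Apache web servers as Real Servers (assumed endpoint)
foreach (array('192.168.1.181', '192.168.1.182', '192.168.1.183') as $rs) {
    lm_call($loadmaster, $auth, '/access/addrs',
            array('vs' => '192.168.1.184', 'port' => 80, 'prot' => 'tcp',
                  'rs' => $rs, 'rsport' => 80));
}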

Round Robin Test

The virtual service is configured by default to load balance web requests using round robin. That means that if web server 1 has just served the last request, web server 2 is in line to serve the next request, followed by web server 3, and so on. To verify this behavior, I wrote a quick PHP script and saved it as index.php in each web server’s webroot directory. A sketch of the index.php file is included below for reference:

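The script itself did not survive in this copy of the paper, so the listing below is a minimal sketch of what it may have looked like, based on the description that follows. The use of gethostname() for the server name and a simple decrement loop for the CPU-load calculation are assumptions; the original may have implemented these details differently.

<?php
// Minimal sketch of the index.php test page (reconstruction, see note above).
$server = gethostname();        // e.g. LBWEB01, LBWEB02 or LBWEB03
?>
<html>
<head>
  <!-- Reload the current URL every second so each refresh issues a new request
       to the virtual service -->
  <meta http-equiv="refresh" content="1">
</head>
<body>
<?php
echo "This request was processed by: " . $server . "<br>";

// Generate a little CPU load: pick a random starting value and keep reducing
// it until it drops below 3, counting the calculations performed on the way.
$c = rand(10000, 250000);
$calculations = 0;
while ($c >= 3) {
    $c -= 0.01;                 // assumption: small decrement per step
    $calculations++;
}
echo "Calculations performed to get below 3: " . $calculations;
?>
</body>
</html>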

The simple PHP above writes “This request was processed by: <WebServer Name>” to the returned web page and then performs a series of calculations to generate a little CPU load on the web server, before returning the number of calculations it took to get below 3. If the load balancer is working correctly in round robin mode, we should see each page served by a different web server in the sequence, starting with LBWEB01 and ending with LBWEB03 before returning to LBWEB01 on the fourth page refresh. To avoid having to refresh the page manually, the PHP file also contains content in the <head> section which redirects the page to the current URL every second, effectively submitting a new request to the virtual service at 1-second intervals.

The screenshots below show that even though the URL remains the same, the actual web server that processed each request is different, which means that round robin load balancing is working as expected:

Figure 1 - Load Balanced Responses
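The rotation can also be checked without watching the browser. The following command-line PHP snippet is a hypothetical addition (not part of the original test) that requests the virtual service a number of times and tallies which back-end server answered, based on the “This request was processed by:” line in the response.

<?php
// Hypothetical round robin check: request the virtual service repeatedly and
// count which back-end web server answered each request.
$vip    = 'http://192.168.1.184/index.php';
$counts = array();

for ($i = 0; $i < 30; $i++) {
    $body = file_get_contents($vip);
    if ($body !== false && preg_match('/processed by:\s*([^<\s]+)/', $body, $m)) {
        $server = $m[1];
        $counts[$server] = isset($counts[$server]) ? $counts[$server] + 1 : 1;
    }
}

// With round robin working, the spread should be even, e.g. 10/10/10.
print_r($counts);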

Load Testing

In order to test the VLM-5000’s resilience, a high volume of user traffic was also simulated. There are several tools available to achieve this; for this test, we used JMeter to flood the VLM-5000’s virtual IP address with HTTP requests. JMeter is a Java-based tool that generates load on a web server with multiple users (threads) and measures the response times of the requests.

As simulating multiple users requesting multiple pages at a single point in time is quite taxing on the machine generating the load, I decided to launch JMeter from two different workstations. As part of the testing, I configured JMeter with anything from 100 users per instance up to 10,000 users, issuing 100,000 requests in 60 seconds. However, when generating such a heavy load, it quickly became apparent that pushing the load balancer and web servers to the point where they noticeably struggle is actually quite difficult: the bottleneck was the client computers, which struggled the most just generating the load. Because of this, for the heavier tests the number of “concurrent users” generating connections was throttled down to 500 per JMeter instance in order to keep the clients stable.
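JMeter itself is driven by its own test plans rather than code, but the idea it implements is simply many concurrent clients requesting the same URL and timing the responses. As a rough, hypothetical illustration of that idea (not the actual JMeter test plan used in this paper), a batch of concurrent requests against the virtual service could be generated with PHP’s curl_multi functions:

<?php
// Hypothetical illustration of concurrent load generation (not JMeter): fire a
// batch of simultaneous HTTP requests at the virtual service and time them.
$url         = 'http://192.168.1.184/index.php';
$concurrency = 100;                                  // simulated "users" in this batch

$multi   = curl_multi_init();
$handles = array();
for ($i = 0; $i < $concurrency; $i++) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // capture the body instead of printing it
    curl_multi_add_handle($multi, $ch);
    $handles[] = $ch;
}

$start   = microtime(true);
$running = null;
do {
    curl_multi_exec($multi, $running);               // drive all transfers
    if (curl_multi_select($multi) === -1) {
        usleep(1000);                                // avoid busy-looping if select is unavailable
    }
} while ($running > 0);
$elapsed = microtime(true) - $start;

foreach ($handles as $ch) {
    curl_multi_remove_handle($multi, $ch);
    curl_close($ch);
}
curl_multi_close($multi);

printf("%d requests completed in %.2f seconds\n", $concurrency, $elapsed);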

The PHP script used to generate load on the web servers was also modified to reduce the complexity of the mathematical problem solved on each run. In the first version of the script, the web server had to select a random number between 10,000 and 250,000 ($c = rand(10000,250000);) and then carry out calculations on that number to get below 3. However, even at 200 requests per second, all three web servers would simply max out their CPUs and things would grind to a halt. As a result, I reduced the complexity to ($c = rand(100,2500);), which still takes a bit of CPU power per transaction but doesn’t end up breaking the web servers when the load is increased.

Basic Webserver Load Test Results

After loading the VLM-5000 virtual service with various load sizes and requesting content from the web servers behind it, and despite the client and server performance issues that had to be addressed, the load balancer just kept operating without a hitch. In fact, the web servers behind the VLM-5000 went offline several times, as the traffic load was at times simply too heavy for them to manage, but the load balancer stayed online with a maximum recorded CPU utilization of 28% while handling 1,000 simulated users issuing a combined 69,000 requests per minute for the PHP file. While this was beyond the scope of the test, the LoadMaster could also easily be configured to throttle requests in real-world scenarios.

The VLM-5000 management interface provides real-time information on the current connection rate (connections per second) along with the distribution of those connections. The following figure shows that with JMeter running and issuing 889 HTTP requests per second to the VLM-5000, each web server was evenly receiving 296 connections per second (roughly 889 ÷ 3).

[Figure: real-time statistics showing 889 connections per second distributed evenly at 296 per web server]

After increasing the load generated by JMeter, the VLM-5000 was managing 1,679 connections per second, whilst keeping the connections balanced between all web servers.

[Figure: real-time statistics showing 1,679 connections per second balanced evenly across the web servers]

For the load balancer to accept and forward requests, it requires CPU and memory resources. The figure below shows that while handling 1,679 connections per second, the total CPU utilization on the VLM-5000 was only 16%.

[Figure: VLM-5000 CPU utilization at 16% while handling 1,679 connections per second]

Conclusion

The Kemp VLM-5000 is a great option for those looking to deploy a virtual application delivery solution. The key benefit is that, despite being a virtual appliance, it still provides all of the features available in mainstream hardware load balancers, combined with the flexibility and scalability advantages of virtualization. The ongoing move into the software-defined cloud era will only continue to drive the proliferation of practical, scalable solutions like this.

About the author:

Rynardt Spies is a cloud, virtualization and automation consultant with 10 years of experience in the virtualization industry. His main focus today is on private and hybrid cloud infrastructures.

Rynardt is a frequent contributor to the VMware Technology Network (VMTN) and has been an active blogger on virtualization and other IT-related topics since April 2008. For his contributions to the VMware virtualization community, he has been named a VMware vExpert for 2009, 2010, 2013 and 2014.

Rynardt holds VCP (VMware Certified Professional) certifications for VMware Virtual Infrastructure 3, vSphere 4 and vSphere 5. He also holds both the Administration and Design VCAP (VMware Certified Advanced Professional) certifications on vSphere 4.

Rynardt maintains a virtualization-focused blog at http://www.virtualvcp.com and is active on Twitter at @rynardtspies.
