The Kemp Ingress Controller Explained

Kemp Technologies | Posted on | ADC | Load Balancer

Introduction

In our previous blogs, we explained Kubernetes, the challenges it brings for network administrators, and the different mechanisms for exposing Kubernetes services for user access. It is clear that a number of options are available. In this blog, we take a closer look at the Ingress Controller component and some of the choices involved in implementing one.

Ingress Resource vs. Ingress Controller

First of all, when talking about Ingress, it’s important to distinguish between the Ingress Resource and the Ingress Controller. The Ingress Resource defines how traffic should be handled as it enters Kubernetes, whereas the Ingress Controller performs the actions defined by that resource. The diagram below illustrates the two components.

Ingress Resource

Here is an example of an Ingress Resource defined in .yaml format:

# kemp-ingress.yaml
 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
    name: kemp-ingress
    annotations:
        "kubernetes.io/ingress.class": "kempLB"
        "kemp.ax/vsip": "10.151.0.234"
        "kemp.ax/vsport": "80"
        "kemp.ax/vsprot": "tcp"
        "kemp.ax/vsname": "InboundKubernetesVs"
spec:
    rules:
    - host: guest.kemp.ax
      http:
        paths:
          - path: /guestinput
            backend:
              serviceName: guestfront
              servicePort: 80
    - host: vote.kemp.ax
      http:
        paths:
          - path: /voteinput
            backend:
              serviceName: votefront
              servicePort: 80

Those familiar with Kubernetes will notice that the definition of an Ingress Resource is similar to that of a Kubernetes Service and includes metadata and spec sections. Where it differs is that the Ingress includes rules defining how traffic entering Kubernetes should be handled. In this example, traffic matching guest.kemp.ax/guestinput is sent to the guestfront service, and traffic matching vote.kemp.ax/voteinput is sent to the votefront service.

It’s important to realise that creating an Ingress Resource on its own will not change how traffic is handled. For the rules to take effect, an Ingress Controller must exist that actually implements what is defined.

Ingress Controller

The Ingress Controller’s job is to put what is defined in the Ingress Resource into action. It does this by examining incoming traffic and, based on the rules defined, handling requests appropriately (e.g. distributing them to the correct pod(s)). To ensure application access is optimised, it should also provide features such as persistence and scheduling. Beyond traffic distribution, an Ingress Controller may perform advanced functions such as SSL/TLS offloading, Web Application Firewall, access management, caching and rate limiting, to name just a few.

For those familiar with load balancing of traditional “monolithic” applications, this will sound familiar. A load balancer distributes traffic to “Real Servers”, whereas an Ingress Controller distributes it among Service Pods. Where an Ingress Controller differs is in how intelligence is built in to ensure traffic is always routed to an available endpoint. In monolithic load balancing, the destination servers are relatively static, and health checking ensures non-performing servers are taken out of rotation. Because the Ingress Controller is defined within the orchestrator (Kubernetes), it works somewhat differently: Kubernetes orchestrates the service pods (deleting and recreating them where required), and the Ingress Controller adapts to send traffic to whatever pods are present.

So while traditional load balancing and Ingress are similar, they differ in how changes in application endpoints are handled. Kemp Ingress Controller enables both modes of operation, providing all the advanced services of LoadMaster along with the dynamic Ingress Controller functionality.

Containerised Ingress Controller?

Depending on preferences, the Ingress Controller may be implemented as a collection of containers managed by Kubernetes or as an external resource outside of Kubernetes. Let’s take a look at these two options.

Containerised Ingress Controller

Running containerised Ingress Controller instances brings all the benefits of containerisation, enabling deployment across multiple containers in a Kubernetes cluster (typically exposed as a single NodePort service).
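As a rough sketch, a containerised Ingress Controller of this kind might be exposed via a NodePort Service along the following lines. The names, labels and port values here are illustrative placeholders, not taken from any specific product:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller     # hypothetical name for the controller's Service
spec:
  type: NodePort               # exposes the controller on a port of every node
  selector:
    app: ingress-controller    # must match the labels on the controller pods
  ports:
    - port: 80                 # port the Service listens on inside the cluster
      targetPort: 80           # container port on the controller pods
      nodePort: 30080          # port opened on each node (default range 30000-32767)
```

With a Service of this type, every node in the cluster forwards traffic arriving on the nodePort to the controller pods, which is why an external load balancer is then typically placed in front of the nodes.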

With an Ingress Controller available on multiple Kubernetes nodes, it is common for an external load balancer to be used to distribute traffic across them in a dual-tier manner. Some Ingress Controllers include functionality to update this external load balancer automatically in response to any changes in the Ingress Controller containers. This is sometimes referred to as the “External Load Balancer”, as it lives outside of Kubernetes.

One potential issue with a dual-tier approach is that it may result in unnecessary “double load balancing”. Do we really need to load balance across the nodes, only for a containerised Ingress Controller to then distribute across pods within the cluster? This can lead to unnecessary cross-node traffic (commonly referred to as East-West traffic).

Another consideration is that high traffic volumes may require intensive computing resources for tasks such as SSL/TLS offloading, and these may be better served by a dedicated ‘kernel space’ device (external) rather than by a containerised user-space application that may not have the same performance capabilities.

Non-Containerised/External Ingress

With Kemp Ingress Controller, the approach taken is to perform the External Load Balancer and Ingress Controller roles in one, providing a neat solution. Once the correct routes have been configured to the pod networks (if required), this allows efficient traffic processing and reduces the number of entities that need to be managed. It has the added benefit that a single LoadMaster can manage both microservice and monolithic service endpoints, which is imperative for a smooth migration of applications to a microservice architecture.

Service Mode

One final challenge to mention with Ingress management is the allocation of endpoints to separate teams. For a DevOps organisation, the concept of an Ingress Controller is easy to implement, since a single team is responsible for both the operation and the development of an application. In other organisational structures this is not the case. A Network Operations team may need strict control of network endpoints and would ideally be able to assign dedicated endpoints to teams for Kubernetes applications without delegating full control. Kemp Ingress Controller adds a ‘Service Mode’ option for efficient delegation of endpoints for Kubernetes applications, alongside the normal Ingress Controller operation of ‘Ingress Mode’.

Service Mode enables a dedicated LoadMaster Virtual Service ID to be assigned to a specific Kubernetes Service using annotations, and any scale up/scale down is automatically applied to the Virtual Service.
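As an illustrative sketch only, following the annotation style of the Ingress example above, a Service Mode assignment might look something like this. The annotation key "kemp.ax/vsid" and its value are hypothetical placeholders; the actual annotation names are product-specific and documented by Kemp:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: votefront
  annotations:
    # Hypothetical annotation tying this Service to a dedicated
    # LoadMaster Virtual Service ID; the real key is product-specific.
    "kemp.ax/vsid": "12"
spec:
  selector:
    app: votefront       # pods backing the votefront service
  ports:
    - port: 80
      targetPort: 80
```

The idea is that as Kubernetes scales the matching pods up or down, the members of the assigned Virtual Service are updated automatically, without the team needing wider access to the LoadMaster.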

Summary

The Ingress Controller is an important component for the delivery of microservice applications. Ingress Controllers perform many of the functions of a traditional load balancer, with automatic adaptation to changes in destination pods as orchestrated by Kubernetes. With Kemp Ingress Controller, the LoadMaster can be utilised to perform the Ingress Controller role. This enables efficient traffic routing while allowing flexible deployment options alongside monolithic applications.

In the next blog we will look at how both monolithic and microservice applications can be managed together with Kemp Ingress Controller on LoadMaster.

To take full advantage of Kemp Ingress Controller functionality see here
