The Kemp Ingress Controller provides a simple mechanism to automate endpoint configuration for routing external traffic to your Kubernetes cluster.
The Kemp LoadMaster automatically provisions the required Virtual Services and ingress policies via the Kubernetes API and adapts to changes in your microservice configuration. It routes traffic directly to the Kubernetes Pods and allows microservices to be managed alongside traditional monolithic applications while utilizing advanced enterprise services such as Web Application Firewall, Access Management, GEO, and L7 service traffic management.
It represents the simplest, most robust, and scalable way to publish both Kubernetes microservices and monolithic applications side-by-side from a single device.
By operating outside of the Kubernetes cluster, the Kemp Ingress Controller enables efficient proxying of traffic into the cluster (north-south traffic) without unnecessary ‘double load balancing’. Traffic is steered directly to Pods via the appropriate Kubernetes node.
Using the Kubernetes API, the Kemp Ingress Controller automatically updates endpoints so that your application can scale up and down without any manual configuration.
The Kemp Ingress Controller offers two modes of operation, allowing network operations teams to retain the right level of control. In Service Mode, a specific endpoint may be assigned to an application development team under the necessary change control, while in Ingress Mode, traditional Kubernetes Ingress functionality can be used.
Utilizing the Kemp Ingress Controller enables a single load balancer to be used for proxying both monolithic and microservice applications. Operating in hybrid mode, an application may even be split between Kubernetes microservices and monolithic application servers, providing flexibility on the journey to microservice application delivery.
With the Kemp LoadMaster available as hardware, cloud, and virtual appliances, the Kemp Ingress Controller can be used with any Kubernetes deployment – bare metal, cloud, or virtual. Validated supported platforms include
In Ingress Mode, the Kubernetes Ingress object defines the ingress behavior delivered by the LoadMaster, including hostname and path rules as well as advanced service options. The Virtual Service is created automatically from this definition, with configuration added and removed dynamically in response to Kubernetes updates. If a service scales, more Real Servers are added with no user input required.
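In Ingress Mode, the controller consumes standard Kubernetes Ingress objects. A minimal sketch is shown below; the hostname, path, service name, and port are illustrative, and the `ingressClassName` value is an assumption rather than a documented Kemp value.

```yaml
# Standard networking.k8s.io/v1 Ingress object. Hostname and path rules
# like these are what the LoadMaster reads in Ingress Mode.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: kemp-loadmaster   # assumed class name; check the Kemp deployment guide
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service         # illustrative backend Service
            port:
              number: 8080
```

When this object is applied, the controller would create the corresponding Virtual Service and add a Real Server for each backing Pod, updating them as the Deployment scales.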
In Service Mode, adding just a few annotations to the service definition maps a pre-configured load balancer endpoint to a Kubernetes Service. Once configured, Real Servers are added and removed dynamically in response to Kubernetes updates. This mode provides an easy way to allocate Virtual Services to application development teams in a controlled manner. Service Mode also enables the creation of hybrid applications, where defined paths may send traffic to Kubernetes or to monolithic application servers on the same Virtual Service.
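A Service Mode mapping might look like the following sketch. The annotation key and value here are placeholders invented for illustration, not Kemp's documented annotation names; consult the Kemp deployment guides for the actual keys.

```yaml
# Kubernetes Service annotated to map onto a pre-configured Virtual
# Service on the LoadMaster. The annotation below is a placeholder,
# not a documented Kemp annotation key.
apiVersion: v1
kind: Service
metadata:
  name: orders-service
  annotations:
    example.com/virtual-service: "203.0.113.10:443"  # hypothetical key and endpoint
spec:
  selector:
    app: orders
  ports:
  - port: 80
    targetPort: 8080
```

Because the Virtual Service itself is pre-configured by the network operations team, the application team only controls the Service definition, which is what makes this mode suitable for controlled delegation.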
Secure access to microservice applications by utilizing Pre-Authentication and Single Sign-On.
Protect microservice applications and APIs against vulnerabilities based on application threat intelligence.
Optimize microservice applications using Intelligent Health Checking, SSL Acceleration, Rate Limiting, Caching, and Compression.
Easy to deploy and configure, with access to configuration templates and deployment guides to get up and running quickly.