Organizations execute their missions with the resources available to them. Availability is critical, whether those resources are facilities, personnel, processes, or technologies.
For you to be able to depend on your information technology, the IT infrastructure must perform at a level that meets your operational needs and with sufficient resiliency that single points of failure are eliminated. Solutions need to continue operating in the event of natural or man-made disasters. A common approach to resiliency is to deploy multiple copies of critical application environments across different geographies. The challenge with this approach is that an application can fail in many ways: it can be compromised, it can misbehave, it can become overloaded, the hosting platform can crash, the network can degrade or fail, dependencies such as authentication can become degraded or unavailable, and more.
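To see why "is the application up?" is not a single question, the sketch below shows a probe (in Go) that distinguishes a few of these failure modes rather than just checking whether a process exists. The host address, /healthz endpoint, and latency budget are illustrative assumptions, not a standard.

```go
package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

// probe classifies a few distinct failure modes for one application
// instance: network reachability, application correctness, and
// overload (latency beyond an illustrative budget).
func probe(host string, budget time.Duration) string {
	// Network failure: can we even open a TCP connection?
	conn, err := net.DialTimeout("tcp", host, 2*time.Second)
	if err != nil {
		return "network unreachable or instance down"
	}
	conn.Close()

	// Application failure or overload: does it answer correctly, in time?
	client := &http.Client{Timeout: budget}
	start := time.Now()
	resp, err := client.Get("http://" + host + "/healthz") // illustrative endpoint
	if err != nil {
		return "reachable, but application not responding within budget"
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Sprintf("application misbehaving (HTTP %d)", resp.StatusCode)
	}
	if d := time.Since(start); d > budget/2 {
		return fmt.Sprintf("degraded: healthy but slow (%v)", d) // illustrative threshold
	}
	return "healthy"
}

func main() {
	fmt.Println(probe("10.0.0.1:8080", 3*time.Second)) // hypothetical instance
}
```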
You continuously spend money to add datacenters, servers, storage, and networks. Since applications can be down for any number of reasons, it quickly becomes apparent that something is needed to check the applications and intercede on behalf of the user, ensuring that users (or consumers) of application services always receive an optimal Application Experience (AX).
One key technology that addresses AX is the application load balancer. In the beginning, this technology was designed to check applications to ensure they were functioning, to schedule users so that load was spread across all working application instances, and to ensure that a user who was temporarily disconnected from an application instance could reconnect to the same exact instance (e.g., shopping-cart-based services). Application load balancers matured to address other failure scenarios. Global Server Load Balancing (GSLB) was added to address regional network or data center failures, Web Application Firewalls (WAFs) were added to address cyberattacks, authentication proxies were added to ensure application servers could always reach authentication providers, and more elegant scheduling methods were added that could collect performance data from the network or from the applications themselves to ensure new users were connected to the location providing the fastest possible response times. Together, these additional capabilities came to be known as Application Delivery Controllers (ADCs).
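To make those original duties concrete, here is a minimal sketch of a load balancer in Go that performs the three early functions named above: periodic health checks, round-robin scheduling across healthy instances, and cookie-based session persistence so a reconnecting user lands on the same instance. The backend addresses, /healthz path, and LB_SESSION cookie name are illustrative assumptions, not any vendor's implementation.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
	"time"
)

// backend is one application instance behind the load balancer.
type backend struct {
	url     *url.URL
	proxy   *httputil.ReverseProxy
	healthy atomic.Bool
}

type loadBalancer struct {
	backends []*backend
	next     atomic.Uint64 // round-robin counter
}

// healthCheck probes each backend's /healthz endpoint on an interval,
// marking instances up or down so the scheduler skips failed ones.
func (lb *loadBalancer) healthCheck(interval time.Duration) {
	for range time.Tick(interval) {
		for _, b := range lb.backends {
			resp, err := http.Get(b.url.String() + "/healthz")
			ok := err == nil && resp.StatusCode == http.StatusOK
			if resp != nil {
				resp.Body.Close()
			}
			b.healthy.Store(ok)
		}
	}
}

// pick returns the backend for a request: the instance named in the
// session cookie if it is still healthy (persistence), otherwise the
// next healthy instance in round-robin order.
func (lb *loadBalancer) pick(r *http.Request) *backend {
	if c, err := r.Cookie("LB_SESSION"); err == nil {
		for _, b := range lb.backends {
			if b.healthy.Load() && b.url.Host == c.Value {
				return b
			}
		}
	}
	for i := 0; i < len(lb.backends); i++ {
		b := lb.backends[lb.next.Add(1)%uint64(len(lb.backends))]
		if b.healthy.Load() {
			return b
		}
	}
	return nil // every instance is down
}

func (lb *loadBalancer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	b := lb.pick(r)
	if b == nil {
		http.Error(w, "no healthy backends", http.StatusServiceUnavailable)
		return
	}
	// Pin the user to this instance so a brief disconnect (e.g. a
	// shopping-cart session) reconnects to the same backend.
	http.SetCookie(w, &http.Cookie{Name: "LB_SESSION", Value: b.url.Host, Path: "/"})
	b.proxy.ServeHTTP(w, r)
}

func main() {
	// Illustrative backend addresses; substitute real application instances.
	addrs := []string{"http://10.0.0.1:8080", "http://10.0.0.2:8080"}
	lb := &loadBalancer{}
	for _, a := range addrs {
		u, err := url.Parse(a)
		if err != nil {
			log.Fatal(err)
		}
		b := &backend{url: u, proxy: httputil.NewSingleHostReverseProxy(u)}
		b.healthy.Store(true) // assume healthy until the first check runs
		lb.backends = append(lb.backends, b)
	}
	go lb.healthCheck(5 * time.Second)
	log.Fatal(http.ListenAndServe(":8000", lb))
}
```

The later capabilities (GSLB, WAFs, authentication proxies, latency-aware scheduling) layer on top of this same basic pattern of checking, choosing, and forwarding.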
Today ADCs are a critical component in most information system architectures. If the application must be available and the user (consumer) of application services demands the best possible AX, then there will be ADCs in the path from the user to the application instances.
How does one add or maintain ADCs in a manner that does not become a huge financial liability or manpower drain? Many of the enterprise-class ADC vendors are considered legacy manufacturers, not because their products are no longer being manufactured, but because they focus on hardware-based solutions that require significant support and recurring replacements to stay current. The next generation (next gen) ADC manufacturers take a different approach. They build products based on ease of use, through a flexible software framework designed to work in current virtualized architectures and DevOps-based application development models. They measure their success not just in dollars but, more importantly, in happy customers.
Next gen ADC manufacturers create products that are inherently easy to use while still delivering all the enterprise-class features the market demands. They know they are working in a world dominated by legacy ADC manufacturers, so they design solutions that can coexist with legacy ADCs, allowing you to mix and match: easy-to-use solutions where you can, hard-to-use solutions only where you must.
Since next gen ADC vendors started natively as software solutions, they do not have the hardware dependencies and restrictions that the legacy ADC manufacturers have. Their solutions are designed to run on commodity hardware, hypervisors, and cloud services. By focusing on core load balancing functionality, the number of lines of code they have to support is smaller than that of the legacy ADC vendors. They developed in a cyber-hostile environment and built security into the core of their products, instead of bolting security onto existing platforms.
Legacy ADC manufacturers (and other legacy network technology manufacturers) such as F5 or Citrix add proprietary standards or features to their products to add value, but these custom functions ensure you cannot easily leave them. This is vendor lock-in, and it means you cannot select the most effective and efficient solution for your operational needs. You are stuck with the vendor's proprietary solution, which is often more expensive and complex than the available alternatives. Next gen ADC manufacturers embrace open standards as much as possible. They believe that building an easier and more affordable solution, while delivering exceptional post-sales support, will ensure their success as well as the success of their customers.
How do you break the dependency cycle, reduce costs and complexity, and make your IT life easier? Start by bringing next gen ADC products into your configuration, change, and release management labs. Validate that these solutions can meet all or a major part of your operational needs. When new requirements result in the purchase of additional ADCs, check whether these next gen solutions can meet your needs. Look at specific computing architecture changes, such as physical to virtual or virtual to cloud, and use next gen ADC solutions in these newer computing environments. When you need to replace legacy ADCs, consider next gen solutions first. Where you can, start to break your dependencies on proprietary protocols and services to make it easier to replace hard and expensive ADCs with easy and affordable solutions. EMBRACE CHANGE. Change will keep you and your business relevant in a world that is constantly changing around you.