We held a technical live demo about reverse proxy and API automation with LoadMaster. Renard Schöpfel, a Principal Pre-Sales Engineer at Progress, delivered the session, which lasted just over an hour. A recording is linked below for on-demand viewing.
Understanding Reverse Proxies
A reverse proxy acts as an intermediary between clients and backend servers running applications. Unlike traditional proxies that hide clients from servers, reverse proxies conceal servers from clients while distributing incoming requests across multiple backend application servers.
The main value of reverse proxies comes from four key benefits: availability, scalability, performance and security. When clients connect to your reverse proxy instead of directly to your servers, you gain the ability to route traffic only to healthy servers, scale resources as required and shield your backend infrastructure from direct exposure and attacks.
Health Checks: The Foundation of Reliable Load Balancing
During the live demo, Renard highlighted an essential rule: systems admins should perform health checks as close to their application servers as possible. Many organizations make the mistake of relying on basic TCP port checks, which only confirm that a port is open. A port can respond even when the underlying application cannot handle real requests, producing false positives.
Instead, it's better to set up server health checks to reflect actual client requests. If your clients send GET requests to specific URLs, your health check should do the same. This way, traffic goes only to servers capable of handling real application requests, not just those responding to simple connectivity tests.
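As an illustration of the difference, here is a minimal Python sketch contrasting the two approaches. The `/healthz` path is a placeholder; in practice you would match it to the URL your real clients request:

```python
import http.client
import socket

def tcp_port_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Basic check: only confirms that something accepts connections on the port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def status_ok(status: int) -> bool:
    """Treat 2xx/3xx responses as healthy."""
    return 200 <= status < 400

def http_health_check(host: str, port: int, path: str = "/healthz",
                      timeout: float = 2.0) -> bool:
    """Application-level check: sends the same GET a real client would."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        healthy = status_ok(conn.getresponse().status)
        conn.close()
        return healthy
    except OSError:
        return False
```

A server whose process is wedged can still pass `tcp_port_check` while failing `http_health_check`, which is exactly the false positive described above.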
Beyond Basic Load Balancing: Application Delivery Controllers
Renard also discussed the transition from simple load balancers to Application Delivery Controllers (ADCs) and why this shift marked a significant improvement in infrastructure management. ADCs merge traditional reverse proxy functions with enhanced security features, such as Web Application Firewalls (WAF), pre-authentication systems and TLS/SSL offloading capabilities.
The WAF component offers real-time threat protection by analyzing all incoming requests against core rule sets designed to address OWASP Top 10 risks and other threats. When the WAF identifies suspicious activity, it blocks the request before it reaches backend servers. This protection is beneficial for organizations that need to comply with industry and government standards and regulations.
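The request-inspection idea can be pictured with a deliberately simplified Python sketch. The patterns below are naive illustrations only; a production WAF evaluates requests against curated core rule sets, not three regular expressions:

```python
import re

# Toy signatures for illustration only -- not a real rule set.
RULES = [
    re.compile(r"(?i)union\s+select"),   # naive SQL-injection signature
    re.compile(r"(?i)<script\b"),        # naive cross-site-scripting signature
    re.compile(r"\.\./"),                # naive path-traversal signature
]

def waf_inspect(request_line: str) -> bool:
    """Return True if the request should be blocked before reaching backends."""
    return any(rule.search(request_line) for rule in RULES)
```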
Global Server Load Balancing: Thinking Beyond Single Data Centers
Many businesses and other organizations operate across borders and global regions. Load balancing delivers performance and availability within a single data center or cloud region, but organizations that span multiple locations still need to deliver the best application experience regardless of where their users are at any given time. Global Server Load Balancing (GSLB) extends the benefits of load balancing across multiple cloud providers and data centers.
GSLB operates through DNS resolution, performing health checks before returning DNS A records to clients. This approach to GSLB enables several strategic advantages: disaster recovery capabilities, geographic load distribution and hybrid cloud deployments.
Organizations can maintain primary operations in their on-premises data center while automatically failing over to private or public cloud resources when necessary. The live event demonstrated geo-steering capabilities, where DNS responses vary based on a client's geographic location: European users automatically connect to European servers while North American users connect to local infrastructure, improving performance and the user experience for everyone.
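The DNS-based resolution, geo-steering and failover described above can be sketched in Python. The data-center inventory is hypothetical and the addresses come from documentation IP ranges; the resolver returns A records for healthy servers in the client's region and fails over to other regions when none are available:

```python
from typing import Callable

# Hypothetical inventory: region -> list of (name, ip).
DATACENTERS = {
    "eu": [("eu-dc1", "203.0.113.10"), ("eu-dc2", "203.0.113.20")],
    "na": [("na-dc1", "198.51.100.10")],
}

def resolve(client_region: str, is_healthy: Callable[[str], bool]) -> list[str]:
    """Return A records for healthy servers, preferring the client's region."""
    local = [ip for _, ip in DATACENTERS.get(client_region, []) if is_healthy(ip)]
    if local:
        return local
    # Geo-steering failover: answer with healthy servers from other regions.
    return [ip for region, hosts in DATACENTERS.items()
            if region != client_region
            for _, ip in hosts if is_healthy(ip)]
```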
Certificate Management: Automating a Growing Challenge
Certificate lifecycle management continues to challenge IT teams as validity periods shrink. Decisions made by the Certificate Authority/Browser Forum in early 2025 establish a phased approach to dramatically reduce TLS/SSL certificate lifespans to 47 days by March 2029.
Manual certificate management is challenging at this rotation timescale, and impossible for organizations with many certificates in use. The ultimate goal is to automate all certificate commissioning and renewal processes using a protocol like the Automated Certificate Management Environment (ACME). We'll be publishing a separate blog on the changes soon.
Reverse proxies can help simplify the challenge by centralizing certificate management. When certificates terminate at the load balancer, backend servers require minimal certificate maintenance. A demonstration in the live event showed how organizations can implement automated certificate renewal through API calls, reducing the operational overhead caused by certificate expiration incidents.
API Automation: Programmable Infrastructure Management
Renard explained that everything systems admins can do through the LoadMaster UI, they can also do via API calls; in some instances, the API offers extra features. The live event showed two API call methods: traditional GET requests with parameters and newer POST requests with JSON payloads.
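The contrast between the two styles can be sketched in Python. The host name is a placeholder and the `/access` and `/accessv2` paths are illustrative of the legacy and JSON endpoints; consult the LoadMaster API documentation for the exact paths, parameters and authentication details:

```python
import json
from urllib.parse import urlencode

LM = "https://loadmaster.example.com"  # placeholder appliance address

def legacy_get_url(command: str, **params) -> str:
    """Older style: the command and its parameters travel in the query string."""
    return f"{LM}/access/{command}?{urlencode(params)}"

def json_post_request(command: str, **params) -> tuple[str, str]:
    """Newer style: POST to a single endpoint with the command in a JSON body."""
    return f"{LM}/accessv2", json.dumps({"cmd": command, **params})
```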
Common API use cases discussed in the session include monitoring system health, deploying configuration changes and automating routine maintenance tasks. The demonstrations showed how to create virtual services, modify configurations, add backend servers and install certificates through programmatic interfaces. The API automation also enables firmware management, backup procedures and system reboots. Organizations can incorporate these features into their current automation setups using PowerShell modules, Python libraries or tools like Ansible and Terraform.
For monitoring integration, APIs provide access to detailed statistics, including CPU utilization, memory consumption, connection counts and service health status. This data allows organizations to create robust monitoring dashboards without relying solely on SNMP protocols.
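Consuming those statistics typically means reducing a raw JSON payload to a few dashboard gauges. The field names in this Python sketch are invented for illustration; the real payload shape comes from the API documentation:

```python
def summarize_stats(stats: dict) -> dict:
    """Reduce a raw statistics payload (shape is illustrative) to key gauges."""
    vs_list = stats.get("virtual_services", [])
    return {
        "cpu_percent": stats.get("cpu", {}).get("total", 0),
        "memory_percent": stats.get("memory", {}).get("used_percent", 0),
        "total_connections": sum(vs.get("conns", 0) for vs in vs_list),
        "services_up": sum(1 for vs in vs_list if vs.get("status") == "Up"),
    }
```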
Practical Implementation Insights
The live event featured several practical demonstrations that highlight key implementation details. Creating a virtual service involves specifying the protocol, port and scheduling method. Multiple scheduling options are available, including round-robin, least connections and response time-based distribution. The response time scheduling method is particularly interesting because it directs new requests to the server that responds quickest. This approach assumes that slower response times mean higher server load, automatically directing traffic toward less busy servers.
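The three scheduling methods mentioned above can be sketched as simple selection functions in Python:

```python
from itertools import cycle

def round_robin(servers: list[str]):
    """Yield servers in a fixed rotation, regardless of their load."""
    return cycle(servers)

def least_connections(conn_counts: dict[str, int]) -> str:
    """Pick the server currently holding the fewest open connections."""
    return min(conn_counts, key=conn_counts.get)

def fastest_response(rt_ms: dict[str, float]) -> str:
    """Pick the quickest responder, assuming slow responses imply high load."""
    return min(rt_ms, key=rt_ms.get)
```

Note how `fastest_response` encodes the assumption described above: response time stands in as a proxy for server load, so traffic drifts automatically toward less busy servers.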
Security Enhancement Through Layered Protection
Modern reverse proxies like the LoadMaster solution implement multiple security layers. The LoadMaster Edge Security Pack (ESP) provides pre-authentication capabilities, collecting user credentials and validating them against identity providers before allowing application access. It also offers many of the features that organizations traditionally implemented with the now-retired Microsoft Threat Management Gateway (TMG), meaning LoadMaster with ESP can serve as a supported replacement for the unsupported TMG.
Renard outlined how rate limiting on a LoadMaster solution helps protect against denial-of-service attacks by restricting request volumes from individual sources. In addition, IP address filtering blocks known malicious sources and CAPTCHA integration helps prevent automated bot traffic. These features combine to protect against a range of common attack methods.
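Rate limiting of this kind is commonly implemented as a token bucket per source IP. A minimal Python sketch, where the rate and burst capacity are illustrative numbers rather than LoadMaster defaults:

```python
import time

class TokenBucket:
    """Per-source limiter: allow up to `rate` requests/second, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float, clock=time.monotonic):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity          # start with a full burst allowance
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        """Spend one token per request; refill tokens as time passes."""
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```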
Organizations can use zero-trust network access to implement granular access controls: users authenticate through multiple factors, and the system grants access only to specific resources based on user groups, source networks and security clearance levels.
Integration with Modern Development Practices
No modern IT solution stands alone; it must integrate with the surrounding security, application delivery and IT infrastructure. The live event outlined how the API-driven infrastructure management in LoadMaster aligns with DevSecOps and Infrastructure as Code practices. Teams can version control their load balancer configurations, implement automated testing and deploy changes through their CI/CD pipelines.
Renard highlighted GitHub repositories containing PowerShell modules, Terraform providers and other integration tools. These techniques and tools enable organizations to treat their load balancers and reverse proxies as infrastructure as code, bringing the same version control and automation benefits enjoyed by application development teams.
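The heart of treating a load balancer as code is diffing a desired, version-controlled configuration against the device's actual state and applying only the difference, much as Terraform's plan step does. A toy Python sketch of that diff:

```python
def plan_changes(desired: dict, actual: dict) -> dict:
    """Compute create/update/delete actions from desired vs. actual config maps."""
    return {
        "create": sorted(set(desired) - set(actual)),
        "delete": sorted(set(actual) - set(desired)),
        "update": sorted(k for k in desired.keys() & actual.keys()
                         if desired[k] != actual[k]),
    }
```

Each planned action would then map onto the corresponding API call, so a configuration change reviewed in version control becomes the single source of truth.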
Emerging Trends
Renard highlighted several overarching trends during the live event:
Certificate automation will become increasingly important as validity periods continue to shorten over the next few years. Organizations should review their current certificate management processes and find opportunities to implement API-driven automation.
Microservices adoption will increase the need for more granular load balancing solutions, as traffic must be distributed across many small services rather than a few monolithic servers.
Multi-cloud and hybrid deployments will boost the adoption of global load balancing solutions. Organizations need seamless failover capabilities across various cloud providers and on-premises infrastructure.
Convergence of security and networking functions will continue accelerating. Future reverse proxy solutions will likely incorporate even more security features, possibly replacing dedicated security appliances in many setups.
Knowledge of emerging technologies and implementation patterns will help organizations build more resilient, scalable and secure application infrastructures. Combining reverse proxy capabilities with comprehensive API automation creates powerful opportunities for operational efficiency and system reliability.
If you're curious to know more about LoadMaster reverse proxy and global server load balancing capabilities, let's discuss how the LoadMaster solution can help you.