Basic Zero Trust Principles for Application Security


Zero trust is steadily emerging as a cybersecurity buzzword that’s capturing the interest of organizations seeking to address modern security challenges.

While sometimes viewed as just a replacement for existing defense-in-depth capabilities, zero trust in reality augments your security posture with a stronger focus on identity, segmentation, and controlling access from all entities, whether or not they are part of your “trusted” environment. The real goal of the model is to eliminate implicit trust from the system: trust within a network is a vulnerability waiting to be exploited, and trying to make systems “truly trusted” means building on a broken model.

In general, a zero trust approach for providing secure access to applications involves the following attributes:

Authentication before access

Foundationally, authentication and identity must be validated before requesting entities are shown where protected assets are. This involves architecting the environment so that clients – including internal ones – do not have a direct path to application or authentication services, which reduces the attack surface and the chances of these elements being compromised. Proxying both authentication and application access, with policy controls governing the circumstances under which access is allowed, is also key to starting the journey toward zero trust.
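
As a rough illustration of this proxying pattern, the Python sketch below fronts a hypothetical internal application with an authenticating reverse proxy. The backend address, listening port, and token check are invented for the example; a real deployment would validate tokens against an identity provider (OIDC, SAML, etc.) rather than a hard-coded value. The point is that unauthenticated clients only ever see the proxy and a 401, never the protected asset.

    # Minimal sketch: authenticate before access, assuming a hypothetical
    # backend at APP_BACKEND and a stub token check in place of a real IdP.
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    APP_BACKEND = "http://10.0.5.20:8080"  # hypothetical internal app, not client-reachable

    def token_is_valid(token: str) -> bool:
        # Stub: a real deployment would verify against an identity provider.
        return token == "expected-demo-token"

    class AuthProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            auth = self.headers.get("Authorization", "")
            if not (auth.startswith("Bearer ") and token_is_valid(auth[len("Bearer "):])):
                # Reject without revealing anything about the backend.
                self.send_response(401)
                self.send_header("WWW-Authenticate", "Bearer")
                self.end_headers()
                return
            # Only after successful authentication is the request forwarded.
            with urllib.request.urlopen(APP_BACKEND + self.path) as resp:
                self.send_response(resp.status)
                self.end_headers()
                self.wfile.write(resp.read())

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8443), AuthProxy).serve_forever()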

Least privileged access model

In today’s world of BYOD and remote work, it is not always possible or practical to limit application access to organizationally controlled devices where strong endpoint protection can be enforced. At a minimum, policy based on some level of device state or location validation should be used to control device connectivity. The next step is to determine what a connecting entity is attempting to accomplish, the type of services being accessed, and the communication protocols needed. Once these are determined, whether access should be allowed can be validated against current circumstances. Even when access is granted, it should be provisioned in a way that allows only the needed communication flows and nothing else.
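
One way such a least-privilege decision could be structured is sketched below in Python. The policy table, posture fields, and service names are all hypothetical; what matters is that an unknown service or insufficient device posture results in default deny, and a successful evaluation grants only the single flow that was requested.

    # Sketch of a least-privilege access decision over hypothetical policy data.
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user: str
        device_managed: bool   # organizationally controlled device?
        device_patched: bool   # minimal device-state validation
        service: str           # e.g. "crm-web"
        protocol: str          # e.g. "https"

    # Hypothetical policy: service -> allowed protocols and required posture.
    POLICY = {
        "crm-web":    {"protocols": {"https"},   "managed": False, "patched": True},
        "finance-db": {"protocols": {"tls-sql"}, "managed": True,  "patched": True},
    }

    def evaluate(req: AccessRequest) -> list[str]:
        """Return the explicit flows to provision, or an empty list (deny)."""
        rule = POLICY.get(req.service)
        if rule is None:
            return []  # unknown service: default deny
        if rule["managed"] and not req.device_managed:
            return []
        if rule["patched"] and not req.device_patched:
            return []
        if req.protocol not in rule["protocols"]:
            return []
        # Grant only the single requested flow and nothing else.
        return [f"{req.user} -> {req.service} over {req.protocol}"]

Returning the explicit flow list, rather than a simple yes/no, keeps the grant narrowly scoped: downstream enforcement provisions exactly those flows and nothing more.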

Segmentation

The source location of incoming application requests should be brought into the calculus of the level and type of access that is granted. Even for internal client communication, different network segments may carry different levels of clearance and security zone definitions. The proxy architecture protecting your applications should detect this segmentation and make decisions based not only on identity and the context of the request, but also on where the request is coming from.
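
As an illustrative sketch, the Python below maps a request’s source address to a network segment and lets the segment’s clearance cap what the request may reach, independent of who the user is. The segment ranges, names, and clearance levels are invented for the example.

    # Sketch: fold source-network segmentation into the access decision.
    import ipaddress

    # Hypothetical segments with clearance levels (higher = more trusted).
    SEGMENTS = [
        (ipaddress.ip_network("10.10.0.0/16"),  "datacenter",    3),
        (ipaddress.ip_network("10.20.0.0/16"),  "corporate-lan", 2),
        (ipaddress.ip_network("172.16.0.0/12"), "guest-wifi",    1),
    ]

    SERVICE_MIN_CLEARANCE = {"crm-web": 1, "finance-db": 3}  # hypothetical

    def segment_of(source_ip: str):
        addr = ipaddress.ip_address(source_ip)
        for net, name, clearance in SEGMENTS:
            if addr in net:
                return name, clearance
        return "external", 0  # anything unrecognized is least trusted

    def allowed(source_ip: str, service: str) -> bool:
        _, clearance = segment_of(source_ip)
        return clearance >= SERVICE_MIN_CLEARANCE.get(service, 99)  # default deny

    # e.g. allowed("10.20.4.7", "finance-db") is False: the corporate LAN's
    # clearance (2) is below what the service requires (3).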

Ongoing Verification / Monitoring

Because things change, once trusted isn’t always trusted. Even after application access is granted to a given user from a given location on a given device, ongoing monitoring of communication and state is required so that connectivity can be terminated if the risk level changes. For example, if the client is coming from a public source, have threat intelligence sources since flagged its address as known for suspicious activity? Have there been attempts within the session that resemble L7 application attack patterns, payloads where there shouldn’t be any (such as in ICMP traffic), or other anomalous behavior? By connecting the proxy and access infrastructure to other elements in the environment, such as your network visibility stack, application firewall, and other security monitoring, it is possible to instrument a framework that helps prevent exploitation of client sessions that were legitimately established at the outset.
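
A simplified re-verification loop might look like the Python below. The threat-intelligence lookup, application-firewall alert count, and session-table shape are hypothetical stubs standing in for integrations with your actual visibility stack.

    # Sketch: ongoing verification of already-established sessions.
    import time

    def ip_now_flagged(ip: str) -> bool:
        # Stub: ask a threat-intelligence source whether this address has
        # been flagged for suspicious activity since the session began.
        return False

    def l7_attack_hits(session_id: str) -> int:
        # Stub: count L7 attack-pattern matches reported by the app firewall.
        return 0

    def terminate(session_id: str) -> None:
        print(f"terminating session {session_id}: risk level changed")

    def reverify(sessions: dict) -> None:
        """sessions: session_id -> {'src_ip': str} (hypothetical shape)."""
        for sid, meta in list(sessions.items()):
            if ip_now_flagged(meta["src_ip"]) or l7_attack_hits(sid) > 0:
                terminate(sid)
                del sessions[sid]

    if __name__ == "__main__":
        live = {"abc123": {"src_ip": "203.0.113.7"}}
        while True:
            reverify(live)   # once trusted isn't always trusted
            time.sleep(30)   # re-score on an interval, not just at connect time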

Beyond these core principles, encrypting data in motion and at rest, regardless of whether it is accessed internally or externally, is another best practice. Just because client traffic is internal doesn’t mean it should be trusted by default; after all, many adversarial exploits rely on first appearing to be an internal resource before launching destructive behavior.
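
On the data-in-motion side, a small Python illustration of refusing plaintext even for an “internal” hop follows; the hostname and port are hypothetical. The client verifies the server certificate rather than trusting the network location.

    # Sketch: enforce TLS with certificate verification for an internal hop.
    import socket
    import ssl

    ctx = ssl.create_default_context()  # verifies certificates and hostnames
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2

    with socket.create_connection(("app.internal.example", 8443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="app.internal.example") as tls:
            tls.sendall(b"GET / HTTP/1.1\r\nHost: app.internal.example\r\n\r\n")
            print(tls.recv(1024))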

Lastly, a framework that enables monitoring from a network vantage point is key to detecting the anomalies that are symptoms of insecure vectors within the architecture, and it contributes to an overall sound security posture. Perimeter security covers the traffic that crosses the borders of your environment, and endpoint security gives deep but narrow context; the network, however, is the source of truth, and a zero trust model should be combined with real-time visibility into how network access and packet flow connect to application-layer communication. When further extended to provide automated response to suspect behavior, this gives you an edge over threat actors.


Mike Bomba

Mike Bomba has worked within the Department of Defense for over 35 years and is currently KEMP Technologies’ Federal Solutions Architect. Over that career he has held various leadership roles in the Department of Defense, including Chief of Integration; Director of Projects, Plans and Architecture; Director of Projects and Engineering; and Director, Operational Engineering Directorate, U.S. Army Network Enterprise Technology Command, along with six years as an officer in the U.S. Army signal community. Immediately prior to joining KEMP, Mike served as Riverbed Technology’s Senior Solutions Architect for the Department of Defense (DoD).