
Securely Publishing Non-Cloud Native Apps in AWS

Episode Overview

More than ever, organizations of all sizes are investing in cloud architecture for improved time to market of services, the ability to provide just-in-time delivery for lines of business, scalability, and security benefits. This involves changing practices, processes, and mindset. In reality, cloud is less of a destination and more of an operating model. That said, the hyperscalers such as Microsoft, Amazon, and Google have provided the infrastructure and tooling that enables organizations to expedite the transition to this operating model. Today, we'll talk in particular about AWS, some of the challenges of publishing packaged enterprise applications (that is, apps that were not built as cloud native and potentially were previously deployed on premises), and how intelligent Layer 7 load balancers can help these types of projects succeed. We'll deep dive into the following key areas:

- Networking and Blast Radius
- Scalability and Automation
- Security

Jason Dover photo
JASON DOVER

VP, Product Strategy, Kemp

@jaysdover

July 29, 2021

Andy Redmond:
... and so the net result with automating horizontal scale is a significant reduction in what's been termed the blast radius during failure events. In the legacy model of scaling up the load balancer and configuring perhaps hundreds of services on an HA pair, well, when an issue occurs, the radius of impact can potentially be massive.

Jason Dover:
My name is Jason Dover and I'm vice president of product strategy at Kemp Technologies. Welcome to this installment of the Application Experience Insight Series. Today, we're going to be tackling the topic of securely publishing non-cloud native applications in AWS. More than ever organizations of all sizes are investing in cloud architecture for improved time to market of services, ability to provide just in time delivery for lines of business, scalability and security benefits.
This involves changing practices, processes, and mindset, and in reality, cloud is less of a destination and more of an operating model. That said, the hyperscalers such as Microsoft, AWS, and Google have provided the infrastructure and tooling that enables organizations to expedite the transition to this operating model and to experience the benefits.
Today, we'll talk in particular about AWS and some of the challenges of publishing packaged enterprise applications in that environment. And that's to say apps that were not built as cloud native from the beginning and potentially were previously deployed on premises. We'll dive into how intelligent layer seven load balancing can help these types of projects succeed. I'm joined today by Andy Redmond and Frankie Kado, and today we're going to chat about these topics together. Andy and Frankie, why don't you guys introduce yourselves?

Andy Redmond:
Appreciate it, Jason, and thanks for setting the stage. My name is Andy Redmond and I'm a technical sales leader within Kemp based out of New York. And I'm really passionate about helping customers make great decisions about how they're going to deliver their applications in public cloud environments.

Jason Dover:
Very cool, glad to have you. Frankie?

Frankie Kado:
I've been with Kemp for just over three and a half years and worked within the customer support organization prior to transitioning into the pre-sales department. I chose the profession I'm in because I truly do enjoy the challenge of providing people with the right solutions to their problems and needs. Migrating essential applications to the cloud is an initiative that is being widely adopted by organizations of all sizes in every sector. And likewise, intelligently load balancing access to applications published in a cloud environment is vital to ensuring the user application experience is exceptional.

Jason Dover:
As we intimated already, the topic that we're going to be diving into is that of cloud. And with both of your backgrounds, you've definitely had a lot of time working with customers, helping them to be successful in their projects. Just in prep for this, we talked a lot about a trend that we're seeing, which is customers deploying applications in public cloud. And this spans both deploying net new applications and building them with cloud principles in mind, but it also entails taking existing applications that they may already have on premises and doing some tweaks and modifications around the edges so that they can run inside of a public cloud ecosystem. From your perspective, what would you say are the drivers that are leading customers to make these kinds of decisions?

Andy Redmond:
Jason, that's a great question. I think that there are a number of compelling reasons why a customer would deploy their apps in a public cloud environment versus on-premises, but in today's environment it's essential that IT teams can rapidly respond to business needs. We saw that with the advent of application virtualization, just-in-time provisioning and deployment of new apps was able to be achieved. Now with microservice architectures and containerization, I think this dynamic has accelerated even faster.
So if you were to ask any high-level IT leader how many apps they have in their environment, you'd probably get a wrong answer from them, and really, having the requisite infrastructure to deploy and manage this sprawl has proven to be quite a challenge and cost prohibitive. To frame this up in an example, what if a new app needs to be deployed based on the business needing to pivot for a competitive advantage? Think of how long it can take to procure the hardware resources, such as a new server. You'd have to rack it up, you'd have to power it up.
You'd have to connect it to the network. And that's after you've received the security approval for the network connections and other related configurations to comply with best practice security guidelines. Perhaps another reason for the migration to public cloud is the promise of nearly unlimited capacity. Unlike on-premises environments, gone are the days of hitting the limits of your data center's power, rack, and cooling capacity.
So if you need the app published, you log into your cloud account and provision the desired resources. Geo-diversity is another compelling reason. For example, in North America it's relatively straightforward to provision an app in the West, Central, or East regions, and having apps closer in proximity to the users who are accessing them greatly improves their application experience by doing what? Reducing latency, and so giving a better user experience. Leveraging different regions also enables, if so desired, an active-active disaster recovery architecture, which enables an always-on application environment and more of that elastic scale capability as demand grows.

Jason Dover:
Interesting. I can definitely recall that in the early days of cloud, cost was almost the number one driver for moving over to those ecosystems. I've heard some inferences from customers, though, that they have had just the inverse effect in certain cases, where while the pay-as-you-go model starts off good and gives them savings, over time they sometimes run into bill shock. Can you explain a little bit why that happens? Why does something that looks so sweet at the beginning, that gives me the ability to just pay for what I need, wind up in some cases costing more than what I was paying on premises?

Andy Redmond:
I mean, that's a great question, considering the fact that it almost relates to my children: you give them candy, it's easy to consume, it's sweet to taste, they enjoy it, and they keep wanting more. And that's actually some of the flexibility of public cloud. You have the ability of consuming what you want. Sometimes the mechanisms for control aren't leveraged appropriately, and even in my previous points, you start talking to these IT leaders and you ask them how many apps, and quite frankly, they have no clue because of application sprawl and the ease of configuring and deploying. It's almost shadow IT, if I want to call it that, a challenge that customers experience.

Frankie Kado:
So, yeah, I agree with both of Andy's points here. I think the major idea, or rather something that couples with every cloud initiative, is the need for cost management and containment. It's something that should be considered upfront, before even going down the road of considering cloud: how the resources are going to be contained and how cost is going to be controlled. It's essential to govern those costs with financial analytics tools when running apps in the cloud, for sure.

Jason Dover:
Got it. So cost savings is certainly a possibility, but you've got to plan appropriately if you want to actually realize them. Now, we've been seeing a trend towards the cloud providers extending the services and capabilities that they have. They started initially with a focus on having the supporting elements and plumbing for cloud native applications, but we've seen them extend that further along. If you look at the load balancing capabilities, firewall and VPN and connectivity, they've started to get to a point where, in some slices, they're almost competing with your traditional networking vendors. Now, given that, where would you say a traditional networking middle box fits into a cloud infrastructure? Or put another way, are there use cases where customers need to bring their own router, load balancer, or firewall, et cetera, and have that deployed inside of their cloud infrastructure? And if so, why?

Andy Redmond:
I think you're highlighting an interesting dynamic. It's accurate to say, in my opinion, that there are thousands of different resources and products a company can subscribe to within a particular public cloud's marketplace to meet their business needs. And by the way, I didn't say purchase, but subscribe to, and this builds on the previous thoughts. Think about the flexibility this affords in not needing to procure new hardware, with the timeframe associated with that, hardware that over time will need to be forklifted out of the on-premises rack for a newer, more powerful device as demand grows. That legacy approach is plain and simply not elastic.
With the thousands of options to choose from, it may seem a bit daunting to create what I'll call the service chain for an end-to-end application environment. And in this case, service chain simply means all the different devices that are required to architect an environment, such as the firewall, the IDS/IPS, or intrusion detection and prevention products, the routers and switches, the servers, the web app firewalls, et cetera, as you've illustrated, Jason.
So, I like that term, networking middle boxes, as you called them. They can greatly simplify the deployed service chain and management of a particular environment. For example, an application delivery controller, what we commonly call a load balancer, packs a variety of features and functionalities into a single box, which in turn simplifies deployments, ongoing management, and perhaps even troubleshooting efforts when issues arise.
So think about a scenario where your company has deployed the built-in public cloud load balancing products for external access into a respective application environment. In that configuration, for layer four and layer seven deployments, these are discretely different products that are required. IDS/IPS and web app firewalls, coupled with solutions to show performance metrics, are also discretely different products. So basically, four-plus different products have to be involved in that service chain to publish your applications.
Think about a situation where a user is trying to access your apps and is experiencing perhaps significant slowness or latency with access to that app. Where do you take the network trace to determine if the issue is external to the environment or some other issue internally? You can't take it on the public cloud load balancers, as they don't have that functionality built in. The tool set is just not there with native functionality to enable simplified troubleshooting efforts, and this is a huge problem.
Even basic things such as deploying a new application workload can be rather challenging to configure on the public cloud products. So in my humble opinion, the bottom line is that purpose-built products, or networking middle boxes, can greatly simplify configuration tasks, ongoing management, and troubleshooting engagements, which in turn reduces time to resolution and helps the company maintain a standardized SLA for application delivery within any public cloud environment.

Jason Dover:
That makes a lot of sense, and you mentioned a couple of interesting points there. I'd envisage that what goes along with that is also management and standardization. I know from our conversations about your customers that most customers are not going all in on cloud day zero. You're likely going to have a hybrid infrastructure for some period of time, and having a common set of tooling, management framework, and troubleshooting framework that can be used both in your on-premises environment as well as your cloud infrastructure helps prevent customers from having that swivel-chair administration approach across those environments. So there's a lot of reasons why customers are bringing their own devices, as you could say, to the cloud party as well.
It's actually a good segue to the next topic we wanted to cover. So we have an understanding of the drivers for moving to cloud and why customers are bringing some of their own technology stack into public cloud. But let's talk about apps for a second. We know that there is a drive towards building applications with cloud native principles in mind: allowing for lossy connectivity to the back-end database, statelessness at the transport layer, et cetera. However, in our customer base, we've certainly seen some customers that are taking existing packaged enterprise applications, such as those from Microsoft (SharePoint, Exchange, and others), and actually running them themselves inside of an IaaS infrastructure. What's the driver for that, as opposed to just allowing those apps to be hosted by a SaaS provider these days?

Andy Redmond:
Yeah, wonderful question here. I think it comes down to maintaining control; it's truly the IaaS model versus the SaaS model. If we take a look at Exchange, we can talk about this specifically. So really, the desire to chart your own destiny and have critical insight into the security of the entire application environment is 100% what many customers are looking for. If you look at products like Office 365, Exchange, SharePoint, et cetera, they're excellent. They're easy to consume, no question, and that can easily be seen with the exponential growth of users signing up for and using the service. I mean, all you have to do is listen to Microsoft's quarterly earnings report to gain a sense of the growth. It's fairly substantial. However, it's managed by someone else, and because of that fact, there's really zero insight into the security and performance of that platform from a consumer standpoint. A customer that has their applications and application delivery products, these middle boxes, in AWS, for example, may not want to use Azure Active Directory for authentication. And perhaps it's because the company competes directly with Microsoft in some manner. So maintaining complete control has distinct advantages for some customers, and I can certainly respect that.

Frankie Kado:
And I think there's another topic that's interesting. It's really a cloud challenge as a whole, but it is an exponentially greater challenge when you're discussing things like O365, and that is data location. Where is your data actually being stored? How do you maintain compliance and security when you don't really know where your data is? Now, when you're handling things with a software-as-a-service solution like O365, that's in Microsoft's hands. Even in a cloud environment, when you're hosting Exchange, your data is traversing cloud networks, but it lands on your Exchange servers that are deployed in the cloud.

Jason Dover:
That makes a lot of sense. An additional question on that, then: would you say that there are certain types of companies, industries, or cultural characteristics that would lead a company to running their own apps as opposed to going with the SaaS option that's available?

Andy Redmond:
Yeah, absolutely. I mean, there are numerous entities, numerous enterprise customers, that compete directly with Microsoft. Let's think about it. You've got a SharePoint, you've got an Exchange environment, and you absolutely don't want the authentication mechanisms, or even the flow of that authentication traffic, to flow through a competitor that potentially has the ability of seeing that authentication flow. Obviously, we're not trying to malign Microsoft here, because they have excellent compliance, but just from a competitive standpoint, I mean, this is the reality of how people think.

Jason Dover:
Okay. So control, security, compliance, competitive concerns: a lot of drivers to still maintain control over your own apps depending [crosstalk 00:16:34] upon your industry. So we talked about Exchange a couple of times. Why don't we just park there for a minute? I know that you guys recently had a customer who built a fairly complex Exchange deployment running inside of AWS. Now, I can recall some years back actually managing and running Microsoft Exchange environments, and I remember that there was quite a lot of complexity and moving parts to that. Frankie, maybe you want to just give us a brief update on what a typical Exchange topology looks like in 2020, to give us some backdrop on some of the challenges that you'd have if you did want to run that inside of a public cloud?

Frankie Kado:
Sure. And considering the vocal expression of that topology, I think the best approach is to really touch on the components of Exchange as they're deployed. I'll use Exchange 2016 as the focal point, understanding of course that different versions of Exchange have different elements and components. In 2016, you have the mailbox server role, which combines your mailbox server and client access server roles, and you have your edge transport server role, which handles inbound and outbound email from the internet.
This is a server usually deployed in the DMZ that has not joined your Active Directory domain, for security reasons. Then there's Active Directory, which is used by Exchange to store and share information between Exchange and Windows. And then there's the load balancer, which is responsible for intelligently load balancing and health checking the different components of Exchange to ensure the application experience remains always on and always accessible. Client requests traverse a firewall and are intelligently load balanced across Exchange mailbox servers.

And we can discuss some of the Exchange-specific challenges that arise when deploying in a public cloud ecosystem. You have the essential communications that have to occur between Exchange servers in the cloud, between mailbox servers, which need to be configured and properly set up. And you have the secure transmission of data between these servers. So it's not only configuring the networking aspects of this communication, it's ensuring that you do so in a way that keeps the data secure between the servers, as well as between the clients accessing those servers and the back-end servers. And then if you're running a hybrid environment, for example on-prem and cloud, configuring communication, or synchronization rather, between on-prem and cloud is also a major challenge that comes up.

Now, let me just say that these challenges are being met as cloud progresses. A perfect example of this are the Quick Start tools provided by AWS, which will deploy Exchange 2016 or 2019 servers automatically, along with your Active Directory Domain Services and your Remote Desktop gateways for remote administration over the internet. And I think that pretty much wraps up the topology of Exchange.
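To make the load balancer's health-checking role Frankie mentions concrete, here's a minimal sketch of layer-7 probing of Exchange mailbox servers. The probe paths mirror the per-protocol healthcheck.htm pages modern Exchange exposes, but the server names and the simulated probe results below are purely hypothetical.

```python
# Sketch: layer-7 health checking of Exchange mailbox servers, roughly as a
# load balancer might do it. In practice each probe is a real HTTP GET against
# the server; here the results are simulated so the logic is self-contained.

PROBE_PATHS = ["/owa/healthcheck.htm", "/ecp/healthcheck.htm", "/EWS/healthcheck.htm"]

def server_is_healthy(probe_results):
    """A server counts as healthy only if every probed protocol returned 200."""
    return all(status == 200 for status in probe_results.values())

def healthy_pool(servers):
    """Filter the pool down to servers whose probes all passed."""
    return [name for name, results in servers.items() if server_is_healthy(results)]

# Hypothetical probe results for two mailbox servers.
servers = {
    "mbx1.example.local": {p: 200 for p in PROBE_PATHS},
    "mbx2.example.local": {"/owa/healthcheck.htm": 200,
                           "/ecp/healthcheck.htm": 503,   # ECP app pool down
                           "/EWS/healthcheck.htm": 200},
}

print(healthy_pool(servers))  # only mbx1 passes every probe
```

The point of probing each protocol separately is that a mailbox server can be partially broken: OWA up but EWS down, for instance, so a per-virtual-directory check routes traffic more accurately than a bare TCP check.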

Jason Dover:
Okay. So certainly still a lot of moving parts, just as I recalled. It's multi-tiered, you've got synchronous and asynchronous flows, you've got to deal with statefulness at some levels, you've got databases, and you've got external services. Active Directory, DNS, and certificate authorities even play a role in a successful Exchange deployment in a Microsoft ecosystem. So thinking about that, customers have typically been accustomed to having that all sitting inside of their data centers on their LAN. What type of considerations does a customer need to take into play when trying to get this running in an environment such as AWS?

Frankie Kado:
When it comes to migrating apps, or even new deployments of apps in the cloud, whether you're discussing Exchange, SharePoint, or whatever it is that you're migrating, the challenges are going to come up naturally, and you're going to have to discuss them and work through them. So let me just go through the top three that, in my opinion, are keeping organizations from making the transition to running apps in the cloud.
Generally speaking, cost management and containment is discussed as being a benefit of running apps in AWS, as opposed to a challenge. However, the on-demand and scalable benefits of cloud computing services also introduce difficulties around defining and predicting quantities and costs. The challenge here is met by organizations having to conduct better financial analytics, as I mentioned earlier, with reporting automation put in place to govern costs; diligent reporting and management can obviously decrease this challenge, and those automation tools are provided in some cloud infrastructures, such as AWS, as I mentioned earlier as well. Then there's migration. Processes around deploying a new hosted application in the cloud have developed into rather straightforward processes. That said, migrating existing on-prem hosted applications to the cloud remains challenging.
Organizations are facing things like the need for extensive troubleshooting, the obvious security challenges, slow data migrations, setting up and utilizing migration agents, the cutover complexity, and the always loved and accepted application downtime. Now, good planning and processes are obviously going to lessen the impact of the pain points that I just mentioned, and none of these challenges should discourage organizations from taking part in the benefits of running apps in the cloud. There are measures that can be taken and products that can be leveraged that can make moving these apps a more enjoyable process.

Jason Dover:
So certainly a lot of challenges that need to be considered when it comes to getting apps over into the public cloud. Specifically for Microsoft Exchange, are there any unique things that need to be considered, or challenges that customers have to overcome, if they wanted to get those benefits we talked about and have their ecosystem running in a public cloud like Amazon's?

Frankie Kado:
So many customers I've worked with run hybrid environments where our load balancer is running GSLB to provide active-passive disaster recovery, site-to-site redundancy between on-prem Exchange environments and cloud Exchange environments. This model is interesting in that, yes, we're conducting the intelligent health checking and the load balancing to ensure that site-to-site redundancy and resiliency are being leveraged, but what's even more interesting about this model is that it can be used as a migration approach.
It can be used as a permanent deployment where on-prem-to-cloud redundancy is leveraged consistently, or it can be used in a phased approach to migrate the organization's entire Exchange infrastructure into the cloud, in that you run an active-passive deployment while you're bringing up your cloud environment, and then, with a simple change of settings in GSLB, switch over to active-active, and even to a full-on cloud deployment.
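The GSLB behavior Frankie describes, flipping from active-passive to active-active with a settings change, can be sketched roughly like this. The site names, IPs, and mode names are illustrative assumptions, not any vendor's actual configuration schema.

```python
# Sketch of GSLB DNS answer selection: "failover" models an active-passive
# pair (on-prem primary, cloud standby); switching the mode to "round_robin"
# models the cutover to active-active described in the transcript.

import itertools

class GslbZone:
    def __init__(self, sites, mode="failover"):
        self.sites = sites            # list of {"name", "ip", "healthy"} dicts, in priority order
        self.mode = mode              # "failover" = active-passive, "round_robin" = active-active
        self._rr = itertools.cycle(range(len(sites)))

    def resolve(self):
        """Pick the IP a DNS query would receive, honoring health checks."""
        healthy = [s for s in self.sites if s["healthy"]]
        if not healthy:
            return None
        if self.mode == "failover":
            return healthy[0]["ip"]   # highest-priority healthy site
        while True:                   # round_robin: rotate across healthy sites
            s = self.sites[next(self._rr)]
            if s["healthy"]:
                return s["ip"]

sites = [{"name": "onprem", "ip": "203.0.113.10", "healthy": True},
         {"name": "aws",    "ip": "198.51.100.10", "healthy": True}]

zone = GslbZone(sites, mode="failover")
print(zone.resolve())        # active-passive: always the on-prem primary

zone.mode = "round_robin"    # the "simple change of settings" from the transcript
answers = {zone.resolve() for _ in range(4)}
print(sorted(answers))       # active-active: both sites now answer
```

Notice that the same health-check plumbing serves both phases: during migration the cloud site sits as a warm standby, and the cutover is a policy change rather than a re-architecture.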

Andy Redmond:
Yeah, I mean, you bring up some great points, Frankie. You've got that ability with GSLB, or global server load balancing, a function that in load balancing terms allows you to have those multi-site environments. But the great thing about it is you're able to groom users, as you want to, into those public cloud environments, anytime you want, and it provides you the flexibility of doing that cloud migration that so many customers are interested in doing.

Jason Dover:
Very interesting. You noted how the native GSLB capabilities of a load balancer can be leveraged, helping ensure resiliency in a DR-as-a-service deployment model, which is critical. It's funny, just an anecdote: when I used to work in banking, email was always seen as non-critical production because it wasn't connected to the trading systems, and it was always non-critical until it was down. And then it was very critical. So having a resilient strategy is important. And it's also interesting that that same functionality can be leveraged as a customer's going through a migration process from on premises into cloud. On that point of resilience, though, are there native baked-in capabilities within public cloud ecosystems that can be leveraged for ensuring that your Exchange environment is resilient? And if so, how do those two elements work together?

Frankie Kado:
The ability to deploy across regions. AWS provides the ability to deploy across multiple regions for that added resiliency or redundancy, and each region is going to consist of two or more availability zones. So it's redundancy upon redundancy, essentially. And if you're not using that structure to design your deployment, and deploying your resources across these separate availability zones and regions, you're missing the point of why moving to the cloud is so attractive: that scalable and resilient infrastructure. So as you mentioned, yes, we have your highly available deployments with regards to the application load balancer, as well as your GSLB. But if you couple that with your availability zones and your regions, downtime is almost impossible.
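As a rough back-of-envelope for that "redundancy upon redundancy" point: if each availability zone independently offers availability a, deploying across n zones gives a composite availability of 1 - (1 - a)^n. The 99.5% per-zone figure below is purely illustrative, not an AWS SLA number.

```python
# Back-of-envelope composite availability across independent availability
# zones: the service is down only if every zone is down simultaneously.

def composite_availability(a, n):
    """Probability that at least one of n independent zones is up."""
    return 1 - (1 - a) ** n

per_az = 0.995  # illustrative per-zone availability, not a published SLA
for n in (1, 2, 3):
    pct = composite_availability(per_az, n) * 100
    print(f"{n} zone(s): {pct:.5f}% available")
```

Even with this toy number, two zones push availability from roughly "two nines" to beyond "four nines," which is the intuition behind spreading Exchange roles across zones and regions.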

Andy Redmond:
Yeah, I've got an interesting point on this. Jason, you talked about the use case of the banking vertical. If you look at the legal field, I mean, they bill based upon their communication with their customers. Ensuring that you've got email that always works means you can engineer it for success, and when there are potential issues, you can quickly figure out what's going on. You can only do that with your own environment, your own managed environment that you've engineered for that purpose. Try to do that with a SaaS-based product: we're experiencing email delay, where is it? Where are the tools to do that diagnosis? They're just not there. But when you engineer your own environment, you can have that critical insight, which is key.

Jason Dover:
Makes a lot of sense. Shifting gears for a second, let's talk automation. Thinking back to first principles about what's driving cloud adoption, one of those things is just being able to get stuff done faster. How do I get stuff done faster as IT? You leverage automation: for the things that are repeatable but don't require a high level of thinking, you leverage tooling to do them. Now, we know that the public cloud platforms provide some frameworks for making automation achievable. What role does a load balancer play in that ecosystem and in helping customers actually achieve that vision?

Andy Redmond:
So, a load balancer may or may not contribute to automation, and it really depends on the relative simplicity of the API functionality, and the security of it, actually. In Amazon specifically, a customer can leverage AWS's Lambda product as the engine to automate various tasks, such as load balancer deployments and configurations, perhaps specifically for auto-scale as user demand increases, or even cycling configs to refresh authentication profiles, to further harden the environment and simplify these sorts of critical tasks.
You get to enable more control of the environments, as we've been discussing, versus simply consuming a SaaS service that someone else manages. There are definitely pros and cons with each respective approach. I think a final point that I'll make about automation and public cloud, specifically regarding load balancing, is the whole notion of automating fail-over during outage events. Horizontally scaling out load balancers to enable elasticity as user demand increases or decreases within a geographic region, or even across geographic boundaries, enables far more control when failures in an application environment occur. The net result with automating horizontal scale is a significant reduction in what's been termed the blast radius during failure events. In the legacy model of scaling up the load balancer and configuring perhaps hundreds of services on an HA pair, well, when an issue occurs, the radius of impact can potentially be massive. In this regard, the next-gen approach of automating horizontal scale with the LBs ensures a much, much lower radius of impact or blast radius.
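The auto-scale pattern Andy outlines can be sketched as a Lambda-style handler. The event shape, alarm names, and scale bounds below are assumptions for illustration; a real function would be triggered by an actual alarm and would call the load balancer vendor's API rather than just returning a decision.

```python
# Hedged sketch: a Lambda-style handler that reacts to an alarm-shaped event
# by deciding how many load balancer instances the tier should run. Spreading
# services over more instances is what bounds the blast radius of losing one.

MIN_INSTANCES, MAX_INSTANCES = 2, 10   # illustrative scale bounds

def handler(event, context=None):
    """Return the desired horizontal scale for the load balancer tier."""
    current = event["current_instances"]
    if event["alarm"] == "HighLoad":
        desired = min(current + 1, MAX_INSTANCES)
    elif event["alarm"] == "LowLoad":
        desired = max(current - 1, MIN_INSTANCES)
    else:
        desired = current
    return {"action": "scale" if desired != current else "noop",
            "desired_instances": desired}

print(handler({"alarm": "HighLoad", "current_instances": 3}))
# With 4 instances instead of 3, each carries roughly a quarter of the
# services, so losing any single instance impacts a smaller slice of them.
```

That last comment is the blast-radius argument in miniature: with N instances each hosting 1/N of the services, a single-instance failure takes out roughly 1/N of the environment instead of everything on an HA pair.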

Jason Dover:
We ran a recent survey with the customer base, and we found that automation was in the top 80% of drivers for new IT initiatives, but there was a huge delta in the customers that thought they were actually going to be able to achieve their objectives within the next 12 months, give or take. So certainly an opportunity to help customers as they go in that direction. Shifting gears back to security, which we've touched on a number of times already: with more applications being deployed in public cloud, security certainly is a concern.
We spoke about email a number of times, just as one of the use cases, and it's these apps where most security breaches are actually coming from, whether it's exfiltration, inbound attacks, et cetera. What needs to be considered when you're deploying one of these apps? You mentioned earlier, Frankie, evaluating the cloud providers' capabilities themselves, but when it comes to how I actually configure the workload itself, what types of things would you recommend customers think about to make sure that the app itself is secure?

Andy Redmond:
This is definitely an interesting question, and it's top of mind for basically all the customers that I have these sales calls with. There are some companies that will leverage a VPN to access an AWS environment, and then after they've done that, they get access to the app. And there are some that natively publish their apps externally for access by their user base, and they leverage comprehensive single sign-on authentication functionality, which is built into the load balancers. You can take it a step further, specifically talking about AWS: you can leverage Lambda automation to automate authentication profiles on the LB, the load balancer, and in that configuration you have a much better model to discourage threat actors from doing naughty things, because you can set it up to rotate those auth credentials, which quite frankly they should be doing. I mean, just the basic guidance that we've been given historically, to change your password on a regular basis; you can automate that function to simplify and standardize and ensure a solid SLA for the security aspect of it.

Jason Dover:
That makes a lot of sense, Andy. Certainly a lot to consider when it comes to publishing applications beyond the firewall. It's funny, these days the boundary of where the firewall actually lives becomes a bit fuzzier with the expansion of the edge. But let's dive a little bit deeper on those security models that need to be taken into consideration when it comes to authentication. You touched on some points that allude to the concept of pre-authentication. Frankie, maybe you want to expand a little bit on what the value prop is around pre-auth, specifically as it pertains to cloud-published applications.

Frankie Kado:
Discussing pre-authentication is essential when you're looking at security and the way you can implement things. Having the ability to authenticate at a control point that sits in front of your backend applications ensures that users, or attackers, are not actually reaching your backend application while the attempt to authenticate is taking place. So it's essential that you separate those concerns: you have a control point that is not directly connected to your backend application, and that's where you handle authentication.
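The decision Frankie describes can be sketched as a simple forwarding gate: the proxy only forwards a request to the backend once the session has been validated, so unauthenticated traffic never touches the application server. This is an illustrative in-memory stub; a real deployment would use the load balancer's authentication engine and an actual identity provider.

```python
USERS = {"alice": "s3cret"}  # stand-in for a real identity provider
VALID_SESSIONS = set()       # stand-in for the control point's session store


def authenticate(username, password):
    """Validate credentials at the control point and mint a session token.

    Note: the backend application is never involved in this step.
    """
    if USERS.get(username) == password:
        token = f"sess-{username}"
        VALID_SESSIONS.add(token)
        return token
    return None


def forward_decision(request):
    """Pre-auth gate: only forward to the backend if the session is valid."""
    if request.get("session") in VALID_SESSIONS:
        return "forward-to-backend"
    return "redirect-to-login"
```

The key property is that `forward_decision` is the only path to the backend, so the authentication attack surface lives entirely at the control point.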

Jason Dover:
Now, obviously you're typically not going to have just one application running in the cloud. If I'm doing pre-authentication, does that let me improve the experience users go through when they're authenticating against multiple apps?

Frankie Kado:
Absolutely. The control point at which you're conducting pre-authentication should be leveraging single sign-on, where that control point manages authentication to multiple apps. Authenticating to a single application should then allow authenticated access across all the applications being published at that control point.
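The single sign-on property described here, one authenticated session granting access to every app published at the control point, can be sketched in a few lines. The app names and session store are hypothetical placeholders.

```python
PUBLISHED_APPS = {"mail", "crm", "wiki"}  # apps behind the same control point
SESSIONS = {}                             # token -> username


def login(username, token):
    """Record a successful authentication at the control point."""
    SESSIONS[token] = username


def access(token, app):
    """One valid session grants access to every published app (SSO):
    the user authenticates once, not once per application."""
    if app not in PUBLISHED_APPS:
        return "unknown-app"
    return "granted" if token in SESSIONS else "denied"
```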

Jason Dover:
Now, is there anything else I can do to ensure that my app access is actually secure? You mentioned essentially moving the attack surface from the app service up to a secure control point, which makes a lot of sense. Is there anything else I should be considering when I start publishing apps in the cloud?

Andy Redmond:
It's not only the authentication mechanisms you've configured that provide that level of security, but encryption too. When you look at some of the publicly available reports, four out of five attacks are at the SSL/TLS layer. If you can standardize the TLS version and the encryption ciphers that are used, consolidate all of that at a single control point, and then provide end-to-end encryption to those backend resources, you can further harden the environment. You also gain visibility into whether a certificate is coming out of compliance or its expiry date is approaching, and you can further leverage, for instance, Lambda to automate the task of renewing your certificates and doing the associated configuration work.
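Both halves of that recommendation can be illustrated with Python's standard `ssl` module: pinning a minimum TLS version and cipher policy at the control point, and flagging certificates for automated renewal before they expire. The cipher string and 30-day renewal threshold are example policy values, not prescriptions.

```python
import ssl
from datetime import datetime, timedelta


def hardened_tls_context():
    """Standardize TLS at the control point: pin the minimum protocol
    version and restrict the cipher suites (example policy values)."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.set_ciphers("ECDHE+AESGCM")  # forward-secret AEAD suites only
    return ctx


def renewal_due(not_after, threshold_days=30, now=None):
    """Flag a certificate for automated renewal (e.g. via a scheduled
    Lambda) once it is within `threshold_days` of its expiry date."""
    now = now or datetime.utcnow()
    return (not_after - now) <= timedelta(days=threshold_days)
```

In practice `renewal_due` would be fed the `notAfter` date parsed from the deployed certificate, and a positive result would trigger the renewal automation rather than just returning a flag.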

Jason Dover:
You mentioned certificates there, and that's actually a good point. You're normally leveraging the load balancer that's front-ending and proxying the application service as a termination endpoint for decryption. Since I'm doing decryption there, are there other security services I get an opportunity to apply to those traffic flows?

Andy Redmond:
Yeah, that's a brilliant point. Because as Frankie mentioned, you do not want users terminating their sessions directly on those services, since then you have zero visibility into Layer 7-based attacks. If you leverage the load balancer as that control point to decrypt the traffic, then you have the ability to inspect it and to leverage other tools like intrusion detection and prevention, as well as application firewalling functionality, to potentially black-hole malicious traffic. That's especially valuable with a zero-day attack where there isn't a currently defined signature set: you have the ability to write a content rule and black-hole that traffic into infinity, or perhaps even divert it into some type of honeypot for investigation at a later point. So having the ability to standardize the access point for those users is 100% critical, knowing that data breach is the number one problem on the internet today.
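A content rule of the kind Andy describes is essentially a pattern match over the decrypted request that diverts suspicious traffic away from the real backend. Here is a minimal sketch; the two patterns are illustrative stand-ins for whatever a zero-day response actually requires, not real signatures.

```python
import re

# Example content rules for traffic with no vendor signature yet,
# expressed as regexes over the decrypted request line (illustrative only).
CONTENT_RULES = [
    re.compile(r"(?i)union\s+select"),  # SQL injection probe
    re.compile(r"\.\./\.\./"),          # path traversal attempt
]


def route(request_line):
    """Inspect decrypted traffic at the load balancer and divert matches
    to a honeypot (or a black hole) instead of the real backend."""
    for rule in CONTENT_RULES:
        if rule.search(request_line):
            return "honeypot"
    return "backend"
```

Because inspection happens after decryption at the control point, the same rule protects every backend published behind it, which is exactly why terminating TLS at the load balancer matters here.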

Jason Dover:
So it certainly sounds like, as customers contemplate migrating applications, specifically those that are not cloud native and have some componentry that's more tied to an on-premises ecosystem, they really should be considering the load balancing component, because it can help with a number of different things. We've talked compliance, we've talked resilience, we've talked security, we've talked availability. It really is a core component of successfully moving applications into a cloud operating model. So as we conclude, Andy, maybe let's start with you. What's one thought you'd want to share with the audience as the key takeaway if they're considering bringing apps from on-premises into a cloud environment like AWS?

Andy Redmond:
I mean, look, as we've talked about, people want control. They want control for a couple of different reasons, and we've discussed the value of that and what you can yield from deploying apps in your own ecosystem. But as previously noted, in public cloud there are lots of options, thousands. So it's really important to ensure that several factors are considered. Cost is without question a factor, and perhaps more important today given the economic challenges we've seen recently.
We know empirically that automation, simplicity, and flexibility are super important. That's what IT leaders are talking about. But quick mitigation and vendor support are just as important when problems arise. It's wonderful to buy a shiny new object, but when issues come up, you'd better have the mechanisms in place, you'd better have the vendor support, to ensure you can reach mitigation really quickly. Bottom line: choose carefully.

Jason Dover:
So same question to you, Frankie, what's your one takeaway for the audience?

Frankie Kado:
It comes down to the challenges-versus-benefits conversation around running applications in the cloud. I think the sheer growth of cloud alone speaks to the fact that the benefits astronomically outweigh the challenges. And if you approach running applications in the cloud, or the migration to do so, with enough planning and process, it will surely benefit the end result. As Andy mentioned, involving the correct products and the correct vendors, ones that will provide that level of support, stand by your side, and offer professional services engagements, whatever you need to iron out the process, is essential to planning for running applications in the cloud.

Jason Dover:
Andy, Frankie, thanks a lot for the time today. It's been great chatting with you. Can't wait til next time.

Andy Redmond:
I appreciate it.

Frankie Kado:
All right. Thank you, Jason.

