
Operating Modern Application Environments - AIOps, ITOA, Analytics & Beyond

Episode Overview

In traditional IT organizations, engineering and architecture groups have the luxury of defining and describing what should be. Infrastructure and operations teams, on the other hand, assume the unenviable responsibility of dealing with the world as it really is, experiencing the highs and the lows of day 2 operations, traversing the challenges of services being leveraged in ways they were never intended, predicting how things might go wrong and having a corresponding toolbox for each possible eventuality. This, combined with recent trends that emphasize greater degrees of change and agility within a company’s IT landscape, means that things will break. As a result, there’s been significant transformation around the methods and tools applied to operate IT infrastructures, along with the skills needed. For example, analytics is no longer limited to financial data modeling but is now being applied to day-to-day incident management, machine learning is no longer just for cracking enemy nation communications, and AI isn’t just a Hollywood concept but is now woven into the tooling leveraged for monitoring networks and apps. Today, we’ll dive into these changes, how they’re impacting enterprise IT, and cover how their power can be harnessed for increased efficiency. I’m joined today by two of my colleagues and friends, Benjamin Hodge and Barry Gleason.

JASON DOVER

VP, Product Strategy, Kemp

@jaysdover

July 29, 2021

Jason Dover:
In traditional IT organizations, engineering and architecture groups have the luxury of defining and describing what should be. On the other hand, infrastructure and operations teams assume the unenviable responsibility of dealing with the world as it really is, experiencing the highs and lows of day 2 operations, traversing the challenges of services being leveraged in ways they were never intended, predicting how things might go wrong and having to have a corresponding toolbox for each possible eventuality.
This, combined with recent trends that emphasize greater degrees of change and agility within any company's IT landscape, means that things will break. As a result, there's been significant transformation around the methods and tools applied to operate IT infrastructures, along with the skills needed. For example, analytics is no longer limited to financial data modeling, but is now being applied to day-to-day incident management. Machine learning is no longer just for cracking enemy nation communications, and artificial intelligence isn't just a Hollywood concept, but is now woven into the tooling leveraged for monitoring networks and applications.
Today, we'll dive into some of these trends. We'll look at how they're impacting enterprise IT and uncover how their power can be harnessed to increase efficiency. I'm Jason Dover, VP of Product Strategy at Kemp Technologies, and you're tuned into the Application Experience Insights Podcast.
I'm joined today by two of my colleagues and friends, Benjamin Hodge, and Barry Gleason. Hey guys. Why don't you introduce yourselves?

Barry Gleason:
Hey Jason. My name is Barry Gleason. I'm a product manager at Kemp Technologies, based in Limerick, Ireland. I've been working in tech for over 15 years now. I like the change in tech, and I enjoy planning for and embracing all these changes coming our way.

Jason Dover:
Excellent. Great to have you today Barry. Ben, why don't you introduce yourself?

Benjamin Hodge:
Sure. I'm Benjamin Hodge. I'm the principal technical advisor here at Kemp. I heard Barry say how long he's been in tech and my mind's gone blank. I just think too long is the answer. But I do hear you Barry. I was thinking about that as well. I think it's an exciting time to be in tech. It's a challenge with how rapidly things are changing, but it also opens up a lot of opportunity. And the other thing is just how much diversity you get, even in your day to day, with the types of problems and people and challenges that you're working on. I find that really interesting.

Jason Dover:
Excellent. Thanks for joining us today as well Ben. Happy to have you both. Barry, why don't we start with you. I know earlier in your career you worked in operations and the support side of the house, as it were, prior to moving into product management. Can you give us a little bit of background on how being in the trenches in ops has colored your view of the world, now that you're actually architecting application solutions for customers?

Barry Gleason:
Yeah. I'd consider myself pretty lucky because I've worked in a number of diverse roles. I've worked in customer support, consulting. I've worked on development teams, product introduction, network ops, network engineering and currently product management. Distilling all of that into how I view things, I would say that it's given me a good vantage point, in that I see the different priorities and different viewpoints of all those roles.
I think sometimes what is of highest priority to the development team may not be the same as what's the highest priority to the IT operations team, for example. Or if you think of vendors and service providers, it's easy for a vendor to tell someone, hey, you need to patch this problem with new firmware, without really being able to realize what that means to that user. Does it mean planned works that are going to require engineers coming in at midnight to complete? Does it mean informing customers? Things like that.
So, all in all, the way I see architecting these solutions is that it's about being able to gather all those priorities and actually create solutions that meet them all. And that could be through how you deliver upgrades, how you deliver software, but also how information is conveyed, what information is available to the stakeholders and so on.

Jason Dover:
That's an interesting point Barry, and I guess that's why we see so much emphasis on culture today. It's how do you take engineering groups, architecture groups and operations groups and create a unified culture, so that they can be more efficient and get to where they have the same goals and are actually trying to accomplish the same things.
One thing that reminds me of is that you do oftentimes have this split in organizations. You may have a group that's focused on the architecture and how things should be rolled out, but things may not always pan out according to plan once those applications or workloads get into the real world. And-

Barry Gleason:
I think we all know that goes without saying.

Jason Dover:
Right. I used to work in operations myself in the finance industry. And one of the key things we had was, you'd follow the guidelines that were put in place from an engineering and architecture perspective, but an area that was often missed was how do you actually monitor that thing once it's out in the real world. And more and more we've seen that come up even in our space when we're talking to customers: getting the right level of monitoring along the right dimensions is something that's still a challenge for many organizations. Just taking that a little bit further, how have you seen the requirements in this space evolve and change over time, and what are customers looking for when they say, hey, we need better monitoring?

Barry Gleason:
Yeah, I actually think it's one of the things that's changed the most during my career. If I think of, say, when I worked as a network engineer, it was your typical monitoring interfaces like SNMP and syslog. And there was an assumption that things shouldn't happen. The network is static; if something happens, you need to investigate it, apart from planned works and things like that. And then when you did have planned works, the job of the network engineer was, while those planned works were occurring, to look at these logs, look at these messages and basically decide, is this because of the planned works, or is this something that just happens to be occurring at the same time that I need to look at and fix?
Now, that was maybe just about manageable in that scenario, but networks these days are changing ever more rapidly. If you think of how software's rolled out now, if you just think of the frequency of those changes, those old systems just aren't fit for purpose anymore. So, I think what's needed now is systems where change is normal, but where our monitoring and our analytics can actually distill the normal from the abnormal, if that makes sense?
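
To make that idea of distilling the normal from the abnormal concrete, here is a minimal sketch in Python using a rolling baseline over a single interface metric. The window size, threshold and sample values are illustrative assumptions rather than anything discussed in the episode.

```python
from collections import deque
from statistics import mean, stdev

def make_anomaly_detector(window=20, threshold=3.0):
    """Flag samples that deviate strongly from a rolling baseline.

    window    -- number of recent samples treated as 'normal'
    threshold -- how many standard deviations count as abnormal
    """
    history = deque(maxlen=window)

    def check(sample):
        # Until we have a full baseline, treat everything as normal.
        if len(history) < window:
            history.append(sample)
            return False
        baseline_mean = mean(history)
        baseline_std = stdev(history) or 1e-9  # avoid division by zero
        is_abnormal = abs(sample - baseline_mean) / baseline_std > threshold
        history.append(sample)
        return is_abnormal

    return check

# Example: interface throughput samples (Mbps), with a sudden spike at the end.
check = make_anomaly_detector()
samples = [100 + i % 5 for i in range(40)] + [900]
for i, s in enumerate(samples):
    if check(s):
        print(f"sample {i}: {s} Mbps looks abnormal against the rolling baseline")
```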

Jason Dover:
Got it. So, it's really coming down to expecting and preparing for change and being flexible enough to understand is the outcome what was actually intended or is the outcome actually an anomaly? Is that kind of how it goes?

Barry Gleason:
Exactly.

Jason Dover:
Okay.

Barry Gleason:
Yeah.

Benjamin Hodge:
I think too, on that point about the network being assumed to be relatively static in the past, to a large extent the networking space did lag behind a lot of the rates of change that we saw in application teams and sysadmins around server virtualization and things like that, because the network was still a relatively stable layer around all of it. But that's certainly not the case anymore, and I think the adoption of DevOps practices in the networking space is a response to that. There's also the diversity now; there's just so much distribution in infrastructure. Again, it would typically have just been within your LAN, maybe a remote data center, but it was still a controlled perimeter that you had. Whereas now, there's so much traffic between locations and client-side connections to things in the cloud, things on-prem, things in your data center, things in a service provider. From a network perspective, being able to piece all of those things together, all of those different flows from all of those different systems and infrastructure, the level of data sources and the level of information now, relative to the ratio of administrators, is just so much higher. And that really makes it hard for any human being to process that level of data.

Jason Dover:
Right. I mean, with that network perimeter that you mentioned, the line is really getting blurry these days between the secured boundary of the private side of the network and the public internet. When you look at the recent trends around BYOD, borderless networks, et cetera, it really is a lot more challenging today to understand what's going on in the network and how to diagnose issues.
To that end, maybe let's transition a bit into talking about some of the methods that are being explored and leveraged these days. You mentioned an interesting nugget there Ben, about it perhaps being too much for a human to process. This reminds me of one of the too many acronyms that we use in the IT space, that of AIOps. And of course, you've got that AI word in there for artificial intelligence. Maybe just expand a little bit on what AIOps actually is. Is it just a trend? Is it just a buzzword? Or is it something tangible? And how are organizations actually starting to take this concept to help them when it comes to monitoring and managing their networks and apps?

Benjamin Hodge:
In terms of AIOps and whether it's a buzzword or a real thing, like all such terms, there are people who use it as a buzzword, and there's definitely buzzword bingo depending on the source you'll see it from. But it is solving a real problem. There are real opportunities to leverage artificial intelligence systems and techniques in IT operations domains. And again, that crosses security, networking, application performance, all kinds of domains there.
I think one of the things I see, particularly from vendors, that isn't helpful for people is the conflation between AI and machine learning, because machine learning is a very specific technique and type of system within the broader category of AI. So, it's just one of many, many techniques available.
It's very high profile because of the types of problems it's been able to solve in other domains, but it has many limitations as well. And there are a lot of other really key system types and methodologies within that class of AI that can have a lot of benefits, and also are much more relevant in certain use cases than machine learning. And that's a real key thing, I think, for people to have an awareness around: making sure that machine learning isn't being swung at every kind of problem in every kind of space with the expectation of good outcomes.

Jason Dover:
What's driving it? What's driving this interest in this space, on the customer side, looking to bring in a new approach, but then equally on the vendor side, to jump on this train? Why is this starting to get some groundswell?

Benjamin Hodge:
So, it comes back to that idea of alert fatigue, that idea of just even being able to review the logs and review the monitoring systems, the time it takes, the amount of data that is feeding into that on even a per-second basis. It's just so hard to really pull out what matters, and it's just not plausible anymore. So, obviously one of the big tasks often thrown into that AIOps space is to pre-process and pre-filter those raw telemetry feeds, logs, traces, events, metrics and so on, to pull out key events that require attention, and really distill that down to usable insights that have meaning, and point the operators to where their attention and their energy needs to be.
So, just that kind of classification and filtering is one of the key things, and obviously that can be done by rule-based systems and expert systems that embed domain expertise. But then, particularly when you get into the security space, a lot of it's around anomaly detection. And again, Barry, you kind of mentioned that: is this normal, is this expected, or is this something I need to look at?
Security becomes very difficult from a rule-based perspective because so much of what malicious actors are trying to do is obfuscate their actions under the guise of normal activity. And so, you'll see this come up in the security space much more regularly when you're trying to do this. And again, that's one of the things where machine learning can bring a lot of benefit. But fundamentally, it's about trying to leverage AI to consume and process the raw data and distill that down to the key information and the key requirements for an operator to take action.
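
As a hedged illustration of the kind of unsupervised anomaly detection Ben describes, and not a description of any particular product, a model such as scikit-learn's IsolationForest can flag events that fall outside what it has learned as normal. The event features and values below are invented for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Made-up features per login event: [hour_of_day, bytes_transferred_mb, failed_attempts]
normal_activity = np.array([
    [9, 12, 0], [10, 8, 0], [11, 15, 1], [14, 20, 0],
    [15, 10, 0], [16, 18, 1], [9, 14, 0], [13, 9, 0],
])

# Train only on activity assumed to be normal.
model = IsolationForest(contamination=0.1, random_state=42).fit(normal_activity)

new_events = np.array([
    [10, 11, 0],    # looks like business as usual
    [3, 900, 12],   # 3 AM, huge transfer, many failed attempts
])

# predict() returns 1 for inliers and -1 for anomalies.
for event, label in zip(new_events, model.predict(new_events)):
    status = "anomalous" if label == -1 else "normal"
    print(f"event {event.tolist()} -> {status}")
```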

Barry Gleason:
Yeah. I think it can nearly be summarized as surfacing what's actually important, because like you mentioned Ben, if you think of our systems, the amount of data they're outputting has just skyrocketed over the last 10, 20 years. Basically, it's filtering that down to present information in a manner where the not-so-important doesn't need to be looked at, and the important stuff does. And I know that's a simplification of it, but it is massively complex. In groups I've worked in before, and I think it goes across all service providers, there's always this institutional knowledge that's maybe not written down or that can't be written down. And it could be as simple as, if something happens on node A, it's normal that site X and site Y are affected, but maybe it's not normal that site Z is affected. And with AIOps, if we can actually look at these relationships, there's massive value that can be surfaced.
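
Barry's "node A affects site X and Y, but not site Z" example can be thought of as a small piece of institutional knowledge made explicit. A minimal sketch of encoding that knowledge so a tool can separate expected impact from surprises might look like this; the node and site names are hypothetical.

```python
# Institutional knowledge: which downstream sites are *expected* to be
# affected when a given node has an incident. Names are illustrative.
EXPECTED_IMPACT = {
    "node-a": {"site-x", "site-y"},
    "node-b": {"site-z"},
}

def triage(failed_node, affected_sites):
    """Split observed impact into 'expected' and 'needs investigation'."""
    expected = EXPECTED_IMPACT.get(failed_node, set())
    surprising = set(affected_sites) - expected
    return {
        "expected": sorted(set(affected_sites) & expected),
        "investigate": sorted(surprising),
    }

# node-a went down; site-z being affected is the surprise worth surfacing.
print(triage("node-a", ["site-x", "site-y", "site-z"]))
# {'expected': ['site-x', 'site-y'], 'investigate': ['site-z']}
```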

Jason Dover:
Got it. Makes a lot of sense. So, we started out the conversation talking a bit about the fact that there's a lot more change that happens in networks and application ecosystems. They're not static. Gone are the days where you only do updates on Sundays at 2:00 AM, and organizations need to allow for more frequent change and metamorphosis in their environments just so that they can keep their businesses up to date and remain competitive. With that, how does the power of AIOps help organizations as compared to the traditional way that monitoring and troubleshooting was done? Which was far more reactive: oh, something went wrong, let's get in the war room and figure it out and look across all domains. How does the AIOps approach differ from that traditional model that all of us probably grew up in?

Benjamin Hodge:
I'm not sure it really differs in a foundational sense. I think it really augments the process though. There are a couple of things that you're trying to bring into it. You're starting from a higher level of situational awareness. So again, you're looking at the level of events and insights as opposed to raw data and telemetry, so that those kinds of conversations are much clearer and the decisions being made are more informed. One of the things we didn't touch on earlier, though, is the potential for predictive systems and being able to catch potential incidents in advance. And you see this with things like predictive maintenance.
There are various situations around physical systems where things are able to predict wear and tear and system failure. And I really don't see that out in the market at this time in any meaningful way. That, I think, is where you could see a real potential shift, where you go from that firefighting, reactionary stance to a much more proactive stance, with the system being able to guide you to take action in advance of any kind of user or business impact. That, I think, would really tilt the scales. But at this point, I think it's much more an augmentation of what we do than a replacement for anything.
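
The predictive idea Ben mentions can be sketched very simply: fit a trend to a resource metric and estimate when it will cross a limit. This is a toy illustration of the concept, assuming a disk-capacity use case, not a claim about how any vendor implements predictive maintenance.

```python
import numpy as np

def days_until_full(days, used_gb, capacity_gb):
    """Fit a linear trend to disk usage and extrapolate to capacity.

    Returns None if usage is flat or shrinking (no projected exhaustion).
    """
    slope, intercept = np.polyfit(days, used_gb, 1)  # GB per day
    if slope <= 0:
        return None
    return (capacity_gb - intercept) / slope - days[-1]

# Hypothetical daily measurements of a 500 GB volume growing ~6 GB/day.
days = np.arange(14)
used = 300 + 6.0 * days + np.random.default_rng(0).normal(0, 2, 14)

remaining = days_until_full(days, used, capacity_gb=500)
if remaining is not None and remaining < 30:
    print(f"Projected to fill in ~{remaining:.0f} days - raise a proactive ticket")
```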

Barry Gleason:
I think that raises massive potential, even from a vendor point of view. If you think of a typical customer, they only have experience of what happened in their network. If you think of the ways in which a vendor could utilize experience and patterns across multiple customers, multiple service providers, and be able to distill those patterns into, as you mentioned Ben, early detection of potential faults, that could be massively powerful across all customers.

Jason Dover:
How far away do you think we are from that? It sounds from what you're saying that we're still in a transition phase, the technology is still maturing a bit, and we're still finding exactly how to apply it to help teams in practical ways. How far away are we from that kind of ideal scenario that you're describing, would you say?

Barry Gleason:
I think the challenge in the past was data. There are incredible stats on the amount of data that's currently stored, whether that be within IT systems or whether that be cloud-based data. I don't think we're massively far from it, because this has been applied in other industries. And I can see this kind of thing moving fast.

Benjamin Hodge:
There are a couple of key things that need to change, I think, in the way that problem is addressed by the market for it to really get to the point it needs to be. It's not so much a technical constraint as that the current approach is maybe looking in the wrong direction. And I think the big thing comes down to this: when you see where this is applied, often when you have really regular, definable relationships it's much easier to get to some predictive model and solution. But there's the diversity in applications, the diversity in networks, and even, Barry, as you were saying, the diversity in what is or isn't normal depending on the customer: what a normal level of traffic is, what normal patterns look like, these kinds of things.
It's so diverse, and there are so many interdependencies in computer systems and networking systems that aren't as linear as some of the other use cases in industries where things like machine learning have been used for this predictive work. There's a lot of thinking and experimentation needed to crack that. The other big issue is around trust and explainability in this space, particularly when you start getting into critical infrastructure.
When you're surfacing something as a human insight and allowing the operator to assess it more clearly, and you're essentially doing very advanced classification and sorting tasks, that's okay. But once you start going into higher levels of autonomy and predictive maintenance and those kinds of things, where you can have really significant impact, I think there are still some gaps between where the systems are today and that kind of problem space.

Jason Dover:
And is that a technology problem to solve? Is it about vendors being more thoughtful in terms of how they implement these solutions? Or is it a matter of vendors working more closely with customers?

Benjamin Hodge:
I don't think we're talking about gaps in foundational research, let's put it that way. I think it is about vendors in our kinds of spaces becoming more savvy around the AI space, having more experience and knowledge of the various tools and capabilities available and what types of problems different techniques are most suited for. And then, there's just a huge amount of investment in embedding institutional and domain knowledge into these systems, and that itself just takes time. As an industry, there's a lot of domain expertise in vendors in IT operations. They know what good looks like. They know what needs to be done if asked. And then you have this whole AI problem space and people that are very adept at that. But the crossover between those areas is very light.
You don't have AI experts that are also infrastructure experts, and you don't have infrastructure experts that are also AI experts. So, there's just a gap there where you're always trying to do some kind of translation between those domains of expertise to get to a certain outcome. As this progresses, you will start to have that crossover in skill sets, and you'll have these multifunction individuals who are able to bridge that gap and really understand both sides of the puzzle. And that's where I think we'll start to see a lot more sophisticated solutions.

Jason Dover:
I definitely want to revisit that topic of domain expertise, Ben. That's another very important factor that can't be ignored. Before we get to that though, I want to shift gears slightly and talk about analytics for a second. We've certainly seen recently that this is a hot topic; it's at the top of customers' minds. Vendors are looking at how they can move from just providing traditional monitoring to providing actionable insights that customers can really make trustworthy decisions based on. Can you maybe give a bit of color on how analytics is starting to shape the way that operations teams work, and what is it that customers are really looking for from their vendors when they say, "I want to make sure that you're providing analytics for me."

Barry Gleason:
I suppose in the past, the challenge was actually having the data points that were needed. In a lot of cases, it was just difficult to actually retrieve data after the fact, whether that be investigating reasons for outages or whether that just be a general high-level review of how your systems are operating and so on. Recently, more and more data has become available. This is sometimes misnamed as analytics. It's a case of, hey, we have all these numbers, we're going to put [inaudible 00:26:29] on it, we've displayed analytics. That does the term a disservice, because with analytics, and what the user wants and what the operators need, they don't just need more data. They need the data they need at that time, if that makes sense?
If we think about an incident that may occur, having 12 different systems with hundreds of graphs isn't much better than having no data at all sometimes. But a successful analytics platform will be able to present the user with the data they need when they need it. That's the way I would see it.
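
A small sketch of "the data they need when they need it": rather than rendering every dashboard, an analytics layer might map the incident's context to the handful of views an operator actually needs. The services and panel names here are invented for illustration.

```python
# Map incident context to the handful of views an operator actually needs.
RELEVANT_PANELS = {
    ("checkout", "latency"): ["lb_response_time", "db_slow_queries", "recent_deploys"],
    ("checkout", "errors"):  ["http_5xx_rate", "recent_deploys", "upstream_health"],
    ("vpn", "capacity"):     ["concurrent_sessions", "tunnel_throughput"],
}

def panels_for(incident):
    key = (incident["service"], incident["symptom"])
    # Fall back to a generic overview rather than a wall of 12 dashboards.
    return RELEVANT_PANELS.get(key, ["service_overview"])

incident = {"service": "checkout", "symptom": "latency", "opened_by": "alerting"}
print(panels_for(incident))
# ['lb_response_time', 'db_slow_queries', 'recent_deploys']
```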

Benjamin Hodge:
Yeah. There are definitely a lot of things that are essentially data visualization systems under the label of analytics. There's no analysis whatsoever going on; they're purely splatting the data in a shiny way and still relying on a very skilled individual being able to interpret it. And I think, Barry, that's the key thing that definitely sticks with me in terms of what actually needs to be provided versus what we're often seeing provided. Showing the data in a relevant way is really key to communicating that insight, but it is very much about showing people the information they need, in the way they need it, at the time they need it. Being able to adapt to the context is a really key attribute that's ultimately missing.

Barry Gleason:
And I think it's just about being able to tie some of these key things together and present them to the operator at a critical time. And that can be massively valuable.

Jason Dover:
It sounds like you're basically describing true analytics as the methods in place to provide contextualization based on that particular situation and surfacing what needs to be looked at right at that given time, while providing a way to ignore the stuff that doesn't really matter, the noise. Would that be how you would characterize it?

Barry Gleason:
Yeah, exactly. If we can surface that information. And I'll even go simpler: sometimes the information that might be needed by an IT operator, even getting out of the techy stuff, could be things like the list of stakeholders for that particular app or that particular device, or who needs to be contacted. It's almost like pulling all this stuff together and making the IT operator's job easier.

Jason Dover:
So, this actually raises an interesting question, and something we touched on briefly earlier, which is that of alert fatigue. We're basically talking now about putting new systems, new models into customer environments. Customers already have broad, generic systems for doing some level of monitoring and observability. They have point solutions for specific workloads, for specific parts of the application ecosystem. How do you square this and keep the operator from just getting overloaded with dozens and dozens of alerts and insights that he or she doesn't have time to really go through? How is that challenge being addressed today?

Benjamin Hodge:
There have certainly been a lot of attempts to address this, but they haven't been effective. If you talk to people on the ground, there's still a big gap here. A lot of this is being addressed through simplistic rule-based systems, if-this-then-that kind of logic. The benefit of that is that it's very easy for customers and end users to create rules that are meaningful to them, and it allows them to adapt things to their own context, but it's too simple and it's not powerful enough to really embed domain expertise into a system. A lot of the systems that are capable of that tend to have domain-specific languages that are very hard to leverage, and they have other constraints as well, but there is the ability to have very good, complex, sophisticated domain expertise embedded into systems.
They are explainable, dependable, and they can have a really big impact there. But I think that's an area that just really hasn't been leveraged effectively: tying those kinds of systems together with the benefits of anomaly detection, correlation and other things that machine learning can bring to bear on those very dynamic, fuzzy areas. That's where we'll start to see a lot more capabilities being offered to customers.
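
One way to picture the combination Ben is pointing at is to let simple, explainable rules make the first pass and only fall back to a model's anomaly score where the rules are silent. The fields, rules and threshold in this sketch are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    message: str
    severity: str         # "info", "warning", "critical"
    anomaly_score: float  # 0..1, from some upstream model

# Explainable, customer-editable rules run first ("if this, then that").
def rule_based_decision(alert):
    if alert.severity == "critical":
        return "page"                     # always wake someone up
    if "planned-maintenance" in alert.message:
        return "suppress"                 # known change window
    return None                           # rules can't decide

def triage(alert, anomaly_threshold=0.8):
    decision = rule_based_decision(alert)
    if decision is not None:
        return decision
    # Fall back to the fuzzy signal only where the rules are silent.
    return "investigate" if alert.anomaly_score >= anomaly_threshold else "log-only"

alerts = [
    Alert("edge-fw", "link flap during planned-maintenance", "warning", 0.9),
    Alert("app-lb", "latency pattern unlike recent baseline", "warning", 0.85),
    Alert("core-db", "replication stopped", "critical", 0.2),
]
for a in alerts:
    print(a.source, "->", triage(a))
```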

Jason Dover:
Okay. So, there's still more work to be done in that regard. Here's another question for you: what types of considerations should teams building apps keep in mind so that those apps are easier to operate and troubleshooting in production is easier to automate? What are some of the principles that those groups should keep in mind?

Benjamin Hodge:
I think this is an area where you see a lot of helpful practices, all kinds of rules of thumb, coming out of the microservices space, because microservices applications are distributed systems. They are inherently more difficult to trace and troubleshoot, and it's harder to understand the interdependencies and relationships between parts of the application. There's a lot of focus there, and you'll hear the word observability a lot, again with a lot of focus on metrics, events, logs and traces. But a lot of it is about having the opportunity to embed those practices within the application itself.
So certainly, for ops teams, being close to your development teams, building on that DevOps culture mentality of cross-functional teams, and being there as part of the creation process so that you can ensure the types of feeds that matter to you are there and available. That's one proactive thing. Working with stakeholders is another. Particularly with a line-of-business application, someone else is going to be the key decision maker on it, so having some criteria, some policies, so that operational considerations are weighed when a line-of-business application is selected, is something else you can proactively do to ensure that's at least part of the selection criteria. Those would be two things I can think of.
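
To illustrate the idea of building observability into the application rather than bolting it on afterwards, here is a generic sketch using only the Python standard library: the application emits structured, machine-parseable events, with a correlation id and timing, around the operations an ops team will later need to trace. The event names and fields are illustrative, not a specific framework's API.

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("orders")

def handle_order(order_id):
    # A correlation id lets downstream tooling stitch related events into a trace.
    correlation_id = str(uuid.uuid4())
    start = time.perf_counter()
    log.info(json.dumps({"event": "order.received",
                         "order_id": order_id,
                         "correlation_id": correlation_id}))
    time.sleep(0.05)  # stand-in for the real work
    log.info(json.dumps({"event": "order.completed",
                         "order_id": order_id,
                         "correlation_id": correlation_id,
                         "duration_ms": round((time.perf_counter() - start) * 1000, 1)}))

handle_order("A-1001")
```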

Barry Gleason:
I almost see observability as like a mantra of how your applications are developed. It's not something you can do after the fact, you can't say, okay, let's turn on observability.

Benjamin Hodge:
Yeah. It's an attribute of the system, more than a feature. And actually, I really like that. I like that framing of it. It's an attribute that you need in your systems.

Jason Dover:
You mentioned domain expertise and working with stakeholders, et cetera. There's certainly some fear among some folks that the concepts we've been talking about, AIOps, ML, et cetera, are going to replace operations jobs. From what you're saying, I don't get the feeling that that's actually true. Maybe you can just give a little bit of context on that. Is it more about working with these systems and integrating them as part of your workflow, or is it a matter of businesses and IT practitioners being able to just sit back, put these systems in place and have them take care of everything?

Barry Gleason:
I wouldn't say it's just going to suddenly replace all these jobs. For me, I nearly think of it in the same way as, if we went back a hundred years ago, the driver of a car was also the car mechanic. It's almost the same, where perhaps you still have a driver of a car, but they're focused on other things, like driving, rather than maintaining the engine and things like that. So, I think what AIOps will provide is a way for some of these tasks to be automated, but then your IT operator will be more concerned about the delivery pipelines, about how to integrate new applications or new devices into your network, and how to manage all of that while using the system almost like a colleague.

Benjamin Hodge:
Yeah, I think it's that enhancement and augmentation more than anything. It's really about lifting the operator's focus to business outcomes and achieving business goals. Even if they're internal IT goals, even if your stakeholders are other IT teams, it's about focusing on intent and outcomes rather than the low-level plumbing. It definitely needs to be seen as an enhancement rather than a replacement for the operator.

Jason Dover:
So to that end, what are the types of skills that IT professionals should be focusing on developing to maintain their relevance and be useful in ecosystems that are augmented with the type of tooling that we discussed today?

Barry Gleason:
Communication. How you work with different teams, how you communicate to stakeholders. I would also say IT operators will need to understand DevOps principles, how we can continuously be changing the network and changing the application, and how those pipelines work. Even software itself: how software works, how software updates work. That's the way I would see it.

Benjamin Hodge:
I think when you start to see any kind of role become more sophisticated, as that industry or that job role evolves and the technologies around it become more sophisticated, it does become a lot more about stakeholder management. That's something that is always going to be within the human realm: knowing how to understand and interpret the goals and intent of the various stakeholders of that system, and being able to translate that back to the system itself. The system will become more and more sophisticated at being able to achieve that goal and understand intent- or policy-based inputs, so you're not having to plug in every little piece of the puzzle for the system, but someone still needs to be interpreting the real-world need and communicating that.
And also, obviously, selection of these systems: what is an appropriate tool for the type of problem that your business is trying to solve? Even within that AI space, you're going to have lots of different options, lots of different alternatives, some that are going to be fit for purpose for what your organization needs at this time and some that aren't. People don't need to understand the math behind it, but being able to understand what types of techniques are effective and why, and what types of systems are effective and why within different contexts, I definitely think that needs to be an awareness that grows in terms of hard skills as well.

Jason Dover:
Makes a lot of sense. There are certainly a lot of considerations that need to be taken into account to be successful in the modern operations world. Kind of wrapping this up a bit, what would be your recommendations to customers? What's the Monday morning playbook for an organization that is looking to make some changes to the way they do operations, adopting some more modern principles and frameworks, maybe incorporating analytics, incorporating AIOps? Where do they get started?

Barry Gleason:
Mindset and culture before technology. Unless the teams have the proper view and the proper culture, there's no point trying to drive these kinds of tools into an organization. Software has to be written from the start with observability in mind, and vendors and devices have to be chosen with integration with other systems in mind. It's only then that we can move towards this model.

Benjamin Hodge:
Be really clear about what problem or problems you're hoping it will solve. I would be looking for non-AI solutions to those problems first, particularly non-machine-learning solutions, and really making sure that there aren't already solid rule-based, algorithmic approaches to them, because you are taking on a lot of sophistication and complexity in AI-based systems, and that doesn't come for free. And to Barry's point, it's certainly not going to fix substandard infrastructure, poor practices, or poor observability attributes in your system. It's not going to cover those gaps, and it's not going to solve the same kinds of problems that traditional telemetry collection solves. You need to have some foundational stability and some foundational capabilities that AI can enhance, rather than looking at this as a replacement you expect to just drop in and see an instant win from.

Jason Dover:
That sounds like some great guidance and a great place to end. Guys, this was a great discussion, and yet we've barely scratched the surface. My key takeaways are as follows: a lot of transformation is happening in the world of operations, and this is being driven by the need to support greater change in network and application environments. Like you said earlier Barry, things just aren't static.
AIOps has some real potential, but in the area of analytics, customers should be critical of vendor claims and make sure it's actually doing what it says on the tin. That said, as these models mature and start to get adopted in the mainstream, they will help address areas such as explainability, shortening time to resolution and, by extension, minimizing business risk within organizations. IT practitioners should be looking at how they can start to adopt these models within their domains as well.
Ben, Barry, do you want to leave your final key takeaways with our audience as well?

Benjamin Hodge:
Yeah. Thanks Jason. Definitely for me, the key point to take away from this is to be critical, like you said, of claims. Definitely watch this space. It's an important space and it has a lot of opportunity, but there are still a lot of gaps there. And maybe start to get a broader awareness of some of the multiple techniques under that AI umbrella outside of machine learning, and become a little bit more aware of where machine learning has constraints and issues that could impact you, because that's certainly the one that gets waved around the most often outside of the scope of where it's really going to help you.

Barry Gleason:
My takeaways would be, first, that people should realize analytics isn't just about more and more data and more metrics. It's about the right data and the right context at the right time. And the other would be that observability needs to be true throughout an organization. It can't just be something that's within the networking team or within the development team. It's something that has to span the organization for any of this to be successful.

Jason Dover:
Thanks a lot guys. This has been a great discussion, really enjoyed it and can't wait for the next time.

Barry Gleason:
Thanks Jason.

Benjamin Hodge:
Thanks Jason. Thanks Barry.
