
Gartner Catalyst 2016 Conference – Day 2


Mark Zuckerberg is quoted as having once said, “The biggest risk is not taking any risk… in a world that is changing really quickly, the only strategy that is guaranteed to fail is not taking risks.” This is especially true for enterprises today that are adopting emerging technologies and architecting new approaches to enable greater agility and transition to an on-demand digital business model.

Cloud Risk and Preparing Networks for Containers

These concepts cannot be discussed without the topic of public cloud computing coming up. This is not hard to understand given that cloud most often serves as the integral technological enabler that makes them possible. However, moving from the safe hands of a meticulously architected, hand-crafted, triple-replicated data center architecture to an open, multi-tenant public cloud environment provided by a third party does come with risk. Elias Khnaser classified these risks in one of his sessions yesterday into six key areas:

  • Availability & resilience
  • Security & access
  • Data protection
  • Liability
  • Provider management
  • Legal

On the matter of availability, he highlighted the fact that a clear understanding of the verbiage used by different vendors is essential. For instance, Azure has availability sets while AWS has availability zones. Even though it would seem that the two terms could be interchangeable, they are very different: an Azure availability set essentially establishes high availability within a single data center, while an AWS availability zone delivers resiliency across data centers – a big difference.
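To make the AWS side of that distinction concrete, here is a minimal sketch using the boto3 SDK that lists a region's availability zones. It assumes boto3 is installed and AWS credentials are already configured, and the region name is just an illustrative choice. Each zone returned is an isolated facility within the region, which is why spreading instances across zones yields resiliency across data centers.

```python
import boto3

# Illustrative region; swap in whichever region you actually use.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Each availability zone is an isolated location within the region,
# so deployments spanning zones survive the loss of a data center.
for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])
```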

In regards to liability, it isn't viable for either the customer or the cloud provider to assume all of it. A shared model is needed, and customers do well to consider a multi-vendor approach to minimize exposure. While customers will likely have little success at making significant changes to the terms and conditions that a provider is willing to agree to, they should demand absolute transparency to understand exactly what level of liability a provider is accepting, as well as the insurance levels they carry in the event of catastrophic issues. In the event that this isn't or can't be clearly provided, run!

Elias also focused on the need for a solid exit strategy from the public cloud. While no one goes into an application migration to cloud with the intent to go backwards, stuff happens. When unacceptable service levels become the norm, or when an announcement is made that one of your providers is about to shut down, it is too late to start planning. The last few years give us several examples of the latter case, with Verizon and HP both giving customers just about two months to move their applications, and storage provider Nirvanix giving a mere two weeks when they decided to EoL their cloud services. Moving applications off a public cloud you've built on won't be as easy as flipping a switch, but a strong exit plan that includes a quality decision and event assessment framework, pre-evaluated sourcing decisions, and clearly tested and documented methods for migrating applications and underlying data will make it easier if it becomes necessary.
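As a purely illustrative sketch of what such a decision and event assessment framework could look like in code – the trigger names, weights, and threshold below are hypothetical assumptions, not anything Elias prescribed – the idea is to pre-define exit conditions and act on a pre-evaluated sourcing decision once enough of them fire:

```python
from dataclasses import dataclass

@dataclass
class ExitTrigger:
    name: str
    weight: int   # relative business impact, 1 (low) to 5 (critical)
    fired: bool   # has this condition actually been observed?

# Hypothetical triggers drawn from the scenarios discussed above.
triggers = [
    ExitTrigger("SLA breaches have become the norm", weight=4, fired=True),
    ExitTrigger("Provider announced end-of-life", weight=5, fired=False),
    ExitTrigger("Unacceptable liability terms on renewal", weight=3, fired=False),
]

# Sum the weights of fired triggers; past an (assumed) threshold,
# execute the pre-evaluated migration plan instead of starting from scratch.
score = sum(t.weight for t in triggers if t.fired)
if score >= 4:
    print(f"Exit score {score}: invoke the tested migration runbook")
else:
    print(f"Exit score {score}: continue monitoring")
```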

Resident Gartner expert on topics ranging from cloud and network automation to NFV and SDN, Simon Richard, gave a talk covering the depth and breadth of the networking challenges associated with delivering an environment that can efficiently and optimally support containers. Of key interest were the cultural changes that need to happen to make this possible, with I&O and development working more closely together to gain a complete picture of requirements at various stages of the application lifecycle, enabling each other for success.

Intelligent IoT – It’s All About the Analytics

A key value IoT brings is the ability to extract analytical insights that can drive business process decisions. IoT architecture is normally expressed in three parts: the edge, the platform, and the enterprise. As would be expected, the edge is where sensor devices exist. With all of the data available to ingest at the edge, Carlton Sapp recommended taking an approach of filtration at the edge to reduce the noise and get to quality, relevant data more quickly. As an example, you may not want to know about every car that passes through an intersection, just the blue ones. Or you may not want to know what the temperature in a conference room is every 5 minutes, but just when it has breached a threshold that is going to result in a decision – wear a jacket to the next meeting or not.

With today's technology maturity levels, the challenge that arises is that compute is normally limited on edge devices, meaning that real-time speeds likely need to be sacrificed in order to perform this type of computation. Depending on the type of data in the stream, however, this may be acceptable. Regardless of where and how this is done, the key takeaway for organizations driving IoT initiatives is to use a systems approach to harvest value from the data that is available and demonstrate this to the business as early as possible in an IoT project.
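To illustrate the conference room example, here is a minimal sketch of edge-side filtration in Python; the sensor values and the 24°C threshold are illustrative assumptions. Rather than forwarding a reading every 5 minutes, the device reports only threshold transitions:

```python
def threshold_events(readings, threshold=24.0):
    """Yield an event only when a reading crosses the threshold."""
    breached = False
    for value in readings:
        crossed = value > threshold
        if crossed != breached:      # report transitions, not every sample
            breached = crossed
            yield ("breach" if crossed else "recovered", value)

# Simulated conference room readings taken every 5 minutes.
room_temps = [21.5, 22.0, 23.8, 24.6, 25.1, 23.9, 22.7]
for event, value in threshold_events(room_temps):
    print(f"{event}: {value}°C")   # the signal that drives the jacket decision
```

Filtering this way trades raw data volume for decision-relevant events, which is exactly the kind of computation that constrained edge hardware can afford to perform.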
