Gartner Catalyst 2016 Conference – Day 1

One of the many parts of my role leading product at KEMP that I enjoy is getting a chance to engage with customers, partners and analysts at conferences throughout the year. It always serves as a good reset to get out of “heads down” mode and interact with peers that are following and making the trends in IT today. The fact that Gartner Catalyst is in sunny San Diego again this year doesn’t hurt either!

On-Demand Digital Business Transformation

Yesterday was a great start to the conference, with the tone expertly set in the opening keynote by Kyle Hilgendorf, Kirk Knoernschild and Drue Reeves. The big theme is how to architect and leverage technology for on-demand digital business transformation. Because of that, the week is packed with sessions on IoT, planning for the scale of billions of connected things, using cloud to help mitigate attacks against an expanding attack surface and, of course, containers. Kyle, Kirk and Drue highlighted that with the new ways technology is being applied, there is an intrinsic need for capabilities to sense and adapt in real time based on individual events, as well as in near real time based on aggregate data.

As an example, an autonomous car needs to brake in milliseconds without sending queries to a backend and waiting for a response, as we're used to in traditional system architectures. Aggregate data may include weather service inputs as well as telemetry from vehicles ahead in traffic that are engaging their traction control systems, indicating icy conditions and resulting in an action that has a meaningful positive impact on your vehicle. However, dealing with these types of workflows and the growing number of connected things at scale can be challenging with traditional infrastructure planning principles.
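
To make that split concrete, here is a minimal, purely illustrative Python sketch of the pattern: the safety-critical decision is made locally on the device with no backend round trip, while telemetry is batched and forwarded for aggregate, near-real-time analysis. Every name in it (EdgeController, flush_telemetry and so on) is hypothetical, not taken from any vendor's SDK.

```python
# A minimal, illustrative sketch only: safety-critical events are handled
# locally in milliseconds, while non-critical telemetry is batched and
# forwarded for aggregate, near-real-time analysis. All names are hypothetical.
import time
from collections import deque


class EdgeController:
    def __init__(self, batch_size: int = 100):
        self.telemetry = deque(maxlen=batch_size)

    def on_sensor_event(self, distance_m: float, speed_mps: float) -> None:
        # Local, millisecond-scale decision: no query to a backend, no waiting.
        if speed_mps > 0 and distance_m / speed_mps < 1.5:  # under 1.5 s to impact
            self.apply_brakes()
        # Non-critical data is queued for aggregate analysis instead.
        self.telemetry.append(
            {"ts": time.time(), "distance_m": distance_m, "speed_mps": speed_mps}
        )
        if len(self.telemetry) == self.telemetry.maxlen:
            self.flush_telemetry()

    def apply_brakes(self) -> None:
        print("braking")  # stand-in for the real actuator call

    def flush_telemetry(self) -> None:
        # In a real system this batch would go to an ingestion endpoint feeding
        # the aggregate models (e.g. detecting icy roads from traction-control data).
        batch = list(self.telemetry)
        self.telemetry.clear()
        print(f"forwarding {len(batch)} telemetry points for aggregate analysis")
```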

Those principles normally center on building for the maximum expected peak usage over a period of time, often three years. The problem is that people aren't very good at predicting the future, resulting in troughs, where usage sits well below the planned capacity and equates to waste, and peaks, where utilization exceeds what was planned and degrades the user experience. To have a reliable, elastic environment that can handle unplanned scale, you need to “build for the trough but design for the peak,” which allows you to expand beyond what may have originally been imagined in terms of utilization while minimizing waste.
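
As a rough back-of-envelope sketch of that principle, with made-up numbers, compare a fixed build sized to a three-year peak forecast against an elastic baseline sized to the trough that scales out only when demand requires it:

```python
# Back-of-envelope illustration of "build for the trough, design for the peak".
# All numbers are made up for the example.
hourly_demand = [40, 35, 30, 45, 60, 120, 300, 150, 90, 55, 42, 38]  # req/s over a sample day

peak_forecast = 250          # what a traditional multi-year plan might provision for
trough = min(hourly_demand)  # what an elastic design runs as its baseline
unit_capacity = 50           # req/s each instance can handle


def ceil_div(x: int, y: int) -> int:
    return -(-x // y)


fixed_instances = ceil_div(peak_forecast, unit_capacity)
print(f"fixed build: {fixed_instances} instances running all day, every day")

for demand in hourly_demand:
    elastic_instances = max(ceil_div(trough, unit_capacity), ceil_div(demand, unit_capacity))
    print(f"demand {demand:>3} req/s -> {elastic_instances} instances")
```

Note that the 300 req/s hour in the sample exceeds the original 250 req/s forecast: the elastic design simply scales past it, while the fixed build would hit the user experience problems described above.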

Orchestration & IoT

Related to scalability, another key theme that permeated most sessions was the importance of orchestration, which allows scale events to happen autonomously in response to real-time input feeds and analytics. Orchestration is only as strong as its weakest link, which means that quality end-to-end logic is needed. This is especially true in the case of IoT, where inconsistencies at low data rates can have exponential impacts at volume, as brought out by Paul DeBeasi in his session on planning and driving IoT programs.
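
To put rough numbers on that point (the figures below are illustrative assumptions, not data from the session), even a tiny per-device inconsistency turns into a large absolute problem once you multiply it by the message rate and device count:

```python
# Rough illustration only: assumed figures, not data from the session.
devices = 10_000_000                 # connected things
messages_per_device_per_hour = 60    # one reading per minute
error_rate = 0.001                   # 0.1% malformed or duplicated messages

bad_messages_per_hour = devices * messages_per_device_per_hour * error_rate
print(f"{bad_messages_per_hour:,.0f} bad messages per hour to detect and handle")
```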

Paul’s session had a ton of gems for organizations that are looking to get started with IoT. Key takeaways:

  • The importance of the IoT architect and strong systems engineering can’t be overstated
  • Establishing IoT governance priorities early on will help prevent big failures later
  • Prototyping to demonstrate business value early is critical
  • The best IoT initiatives start with a simple idea with clear business value

Paul and others also noted that for IoT initiatives to be successful, and for the transition to on-demand digital business to actually happen, cultural changes are needed by way of non-traditional collaboration. As an example, one case study focused on a utility company going through a modernization that succeeded because Operations started an ongoing conversation with IT, which now allows them to deliver smarter and more efficient services to their customers. The same goes for IT collaborating with sales and marketing in the typical enterprise.

The Cloud Landscape

From a Gartner perspective, AWS is seen as the best IaaS provider at the moment according to their scoring system. Still, it was highlighted that being the leader doesn’t automatically mean they are the best IaaS provider for you, and Azure also dominated a lot of conversations. Elias Khnaser noted that Azure is typically the best choice for customers that are already Microsoft-centric or have integration and authentication use cases that can be satisfied by Azure Active Directory, which has significantly matured in feature depth and breadth since its introduction. On the IoT front, Azure also has some advantages in terms of slick data visualization as well as support for protocols beyond MQTT, such as AMQP and CoAP. By and large, Google, VMware, IBM and others are seen as having specific value propositions for unique cases, but not for everyone. Oracle and Alibaba are early in their IaaS journeys but are expected to appear on the radar in the future. All of this notwithstanding, expectations are that a multi-vendor approach will almost always be the right one for enterprises.

Looking forward to day 2!
