Achieving excellence in the complex reality of today’s network

Organizations are using Continuous Network Verification to move from reactive to proactive operations

The network is complex. Anyone operating in today’s distributed, multi-vendor networks, full of virtual overlays, containers, SDN, and cloud migrations, knows this. Under the weight of all this complexity, it is easy to wonder whether excellence is even possible. And what exactly does excellence mean in this context? How do we achieve this lofty goal?

As a business executive and leader, I see excellence as the ability to bring out the best in our team in order to meet the objectives of our business. I continually ask myself, “How do I need the team to perform, and are we doing what is required?” Asking questions and continually reviewing the answers are just as necessary for those building, managing, and operating our networks. To get our networks on the path to excellence, we start with a deceptively simple set of questions: How exactly do we expect our network to perform? Is it indeed operating to our expectations and meeting the objectives of our business? How do we know? Among our customers and network professionals, these are the sorts of questions that can launch a thousand conversations. Seemingly simple, yes, but with an apparently unending array of answers, not to mention follow-on questions. Why is this?

The questions themselves are complex because the subject, the network, is complex. A lot has changed since networks took over the hearts and minds of society. We have seen unprecedented growth not only in the everyday demand to connect people and businesses, but also in the technology and tools that make all this connectivity possible. Often these two forces of hyper-growth and innovation have worked against each other. To meet demand and the required scale, we quickly adopt new technologies, which layers even “more stuff” into the network. All this “stuff” threatens the very network it was designed to support. To make matters worse, harried networking teams may fall into suboptimal practices over time as they “MacGyver” the network to reconcile these forces of hyper-growth and innovation. Teams are continually asked to do more, but with fewer resources and within ever-shorter change windows.

Gartner recently released a research note cautioning companies to Avoid these ‘Bottom 10’ Networking Worst Practices. In the report, the research firm reminded readers that “Suboptimal network practices result in downtime, reduce network agility, waste human and capital resources and lead to suboptimal network investment.” This behavior is not only expensive but cumulative. Years of technological debt, acquired by doing what is tactically right at the time without looking at the long-term effects, will eventually come due, and the financial losses incurred by short-term planning multiply with each additional change to the network.

What drives this pattern of suboptimal practices? We all know business today is conducted in a highly dynamic, 24/7, “always on,” globally distributed manner. Business requirements change early and often. Yet Gartner notes that many infrastructure teams still look to static, step-by-step approaches and frameworks to govern the IT lifecycle. They are often slow to adopt new technology out of fear of introducing excessive risk, a fear rooted in recent industry experience and in the persistent conflict between technology innovation and network availability.

Gartner calls this adverse reaction to risk among networking teams “changephobia,” the result of an “if it ain’t broke, don’t fix it” mentality. It is hard to imagine that the same group of people whose innovative spirit built the network, propelling years of progress, is now stifling it out of fear. But if you look for recent advancements in core networking technology, there haven’t been any that compare to those in other areas of IT.

It’s not all doom and gloom, however. There have been new and exciting developments in some areas of networking technology. For example, network verification built on advanced algorithms is allowing teams to verify that the network is operating as originally intended, and this capability is behind much of the hype around intent-based networking (IBN). We are now able to close the loop on our network designs and gain feedback that the network is doing what we originally wanted it to do. And legacy approaches to project and program management, often too slow to deliver, have given way to Agile and Lean approaches that focus on close, cross-enterprise team collaboration and quicker, more frequent delivery cycles.
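
To make that idea concrete, here is a minimal, illustrative sketch of what verifying intent can look like. It is not any particular vendor’s product or API; the zone names, data, and helper functions are hypothetical. Expected behavior is written down as declarative checks and evaluated against a reachability model computed from a snapshot of network state.

```python
# Illustrative sketch only: express network intent as declarative checks and
# verify them against a reachability model. All names and data are hypothetical.

# Hypothetical reachability model derived from a snapshot of configs, routing
# tables, and ACLs: maps (source_zone, dest_zone) -> reachable?
reachability = {
    ("branch", "datacenter"): True,
    ("guest_wifi", "datacenter"): True,   # example of drift: this should be blocked
    ("branch", "guest_wifi"): False,
}

# Intent: what the network is supposed to do, stated independently of how any
# individual device happens to be configured.
intents = [
    ("branch offices can reach the datacenter", ("branch", "datacenter"), True),
    ("guest wifi is isolated from the datacenter", ("guest_wifi", "datacenter"), False),
]

def verify(intents, reachability):
    """Return the intents whose expected behavior does not match the model."""
    return [
        (name, expected, reachability.get(pair))
        for name, pair, expected in intents
        if reachability.get(pair) != expected
    ]

for name, expected, actual in verify(intents, reachability):
    print(f"INTENT VIOLATED: {name} (expected {expected}, observed {actual})")
```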

Network verification is helping on two fronts. First, it allows teams to start “automating the easy” so they can more quickly embrace new technology. On this front, Continuous Network Verification provides low-impact automation that infers and verifies that the network is doing what it is supposed to do. Second, this leads to a behavioral shift in networking teams. By incrementally verifying the network, we remove the veil of complexity and clear out years of technological debt. Organizations, now armed with a new level of awareness of the actual behavior of their network, can confidently embrace innovation and change.
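
As a follow-on sketch of the “continuous” part, again offered as an illustration rather than a prescribed tool: the same checks can run against snapshots taken before and after every change, so drift is caught the moment it is introduced rather than during the next outage. This reuses the hypothetical verify() helper and intents list from the sketch above; take_snapshot() and apply_change stand in for whatever collects device state and pushes configuration in a given environment.

```python
def take_snapshot():
    # Hypothetical placeholder: in practice this would rebuild the reachability
    # model from live device configs, routes, and ACLs.
    return {
        ("branch", "datacenter"): True,
        ("guest_wifi", "datacenter"): False,
        ("branch", "guest_wifi"): False,
    }

def gate_change(intents, apply_change):
    """Run intent checks before and after a change; flag newly broken intents."""
    before = set(verify(intents, take_snapshot()))
    apply_change()  # push the configuration change
    newly_broken = [v for v in verify(intents, take_snapshot()) if v not in before]
    for name, expected, actual in newly_broken:
        print(f"Change broke intent: {name} (expected {expected}, observed {actual})")
        # here a team might roll back automatically or page the on-call engineer
    return not newly_broken
```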

To learn more, I encourage you to download our recent whitepaper, Achieving Operational Excellence with Continuous Network Verification, where we take an in-depth look at how legacy approaches are holding back the network. More importantly, we identify key ingredients to achieving excellence and explain how to do this continually throughout the next-generation IT lifecycle. I also suggest you read the Gartner report to identify any legacy behaviors that might be slowing down your teams and your network. The network isn’t going to get any less complex as we move forward, but we can leverage new tools and processes to achieve excellence even within the complex reality of today’s networks.