Driving Open Standards in a Fragmented Networking Landscape

This article originally appeared in SDxCentral on December 29, 2017.
Once upon a time, standards were our friends. They provided industry accepted blueprints for building homogeneous infrastructures that were reliably interoperable. Company A could confidently build an application and—because of standards—know that it would perform as expected on infrastructure run by Company B.
Standards have, unfortunately, somewhat fallen out of favor as the speed of digital innovation has increased. Today, innumerable software applications are created by innumerable developers at an accelerating pace. Standards—once critical for achieving interoperability—have failed to adapt to this brave new world.
And while the speed of innovation has accelerated, the speed of adoption has remained essentially unchanged. Many organizations adopt new technologies but don’t have the time to transition completely from their legacy infrastructure models to new ones. Over time this leads to a diverse array of technology silos. Some are different because of the programming languages on which they’re built: Java, Python, Ruby, Go, etc. Some differ on the cloud infrastructure management platforms they use: vSphere, OpenStack, AWS, Azure, Google, etc. And another diversity vector is the compute paradigm: containers, VMs, bare metal, etc. Worse yet, these silos complicate getting answers to simple operational questions, like how much it costs to run a certain application, or who is running which applications and where.
The number of permutations is overwhelming, and each has advantages and disadvantages for different use cases and business purposes. Unfortunately, using standards as a blanket approach to drive cross-platform compatibility and interoperability simply doesn’t work in an environment where change is happening so fast.
The telecom industry, for example, is very standards-driven. Over the years, multiple groups have been formed to develop standards for specific elements of the telco stack; the most notable are ETSI, MEF, and TM Forum. The challenge with this approach is fragmentation among these efforts. Silos and a lack of interoperability make it difficult to agree on an end-to-end standard that is consistent across all layers of the stack, even in an industry where the applications are relatively similar.
As application portfolios in telecom have become more diverse, operators have coped by buying turn-key, proprietary solutions from single vendors. This no longer works as well as it once did, because there simply isn’t one vendor or single solution that can cover all of the needed use cases. Thus, barriers to interoperability are creeping in.
Fortunately, over the past few years we have seen the emergence of many open source projects—OpenStack, Docker, Kubernetes, ONAP, and the various NoSQL databases—that are becoming an alternative to those standards bodies and are shaping the new telco networking stack. Open source provides a more agile alternative for driving de facto standards, where adoption becomes the main measure of success.
Let’s take a look at the enterprise experience as an example. Enterprises were standards-driven up until a decade ago (to wit: SQL, OMG, Java EE). Today this is no longer the case. Standards led by standards bodies have been replaced by open source projects that have become de facto standards by virtue of their widespread adoption.
Open source standards have numerous positive attributes. First, the process is more democratic, because every developer can participate and contribute. Second, the influence of political agendas is minimized. Finally, the process is significantly more agile and responsive to innovation. The code is king, and there is no need to arrive at a full consensus to make progress. Furthermore, whereas specifications can be interpreted in many different ways and are therefore never enough to ensure compatibility, code provides a single source of truth and, by definition, ensures interoperability.
But open source standards are not without their challenges. Often, there is very little interoperability among the different open source projects, which leads to the very silos that we’re trying to eliminate.
Jonathan Bryce, executive director of the OpenStack Foundation, recently called out this issue and made it the theme of his keynote address at the OpenStack Summit last month in Sydney, Australia. He said, “The biggest problem in open source today is not innovation; it’s integration.”
SDxCentral’s recent “2017 Open Source in Networking Report” provides fairly detailed coverage of the topic, and specifically of the relationship between open source and standards:
“The familiar waterfall model breaks down for software-centric solutions, especially as the update cycle is converging on continuous, and systems are designed to be tailored into distinct environments. A more iterative lifecycle is needed that blends specification with implementation and accelerates the overall process. While a radical change is necessary, the end goal remains the same: multi-vendor interoperability.”
So how do we find a happy medium and claim the benefits of both open source and standards in order to ensure integration and, ultimately, scalability?
First, open source should drive standards, not the other way around.

To illustrate this point, let’s compare two approaches: the first standards-driven, the second open source driven.
Standards-driven: ETSI played a very important role in the NFV industry. It defined a common architectural view of what an NFV system should look like, and it created a common taxonomy. However, the actual products that claimed support for this architecture were vastly different from one another, and there was no real compatibility or interoperability among them, even though they all claimed ETSI support.
Open source driven: ONAP is taking a different approach, using open source as a vehicle to drive common standards. ONAP first took an operator point of view, expressed in open source, to define the architecture. It is now taking the relevant parts from the different standards bodies and integrating them into that architecture. Indirectly, this forces stronger collaboration among the standards bodies, which are now aligning themselves with ONAP.
As a point of comparison, consider the difference in scope between the two: ONAP covers the end-to-end architecture, whereas ETSI is limited in scope.
Secondly, we should define “just enough standards.”
Standards should focus on interoperability among the various open source projects and cloud infrastructures, and less on implementation. We still need standards, but their scope needs to shift: away from defining the underlying architecture through low-level, detailed specifications, and toward “just enough standards” to ensure interoperability among projects that do not need to conform to the same standard or API.
We should also allow integration and interoperability among the standards and frameworks already in use, rather than continually searching for the next new standard.
A good analogy can be found in the manufacturing industry. Boeing was able to scale its 787 Dreamliner manufacturing pipeline by distributing the manufacturing of each sub-system to different plants across the globe. To do that, the plants had to agree on some degree of common modeling to describe the different parts. This allowed Boeing to assemble those parts as if they had all been manufactured in the same plant.
We need to do the same thing in the IT industry: move away from trying to define the implementation details of each part, and instead define a “just enough standard” that allows the industry to interoperate at the sub-system level. So, rather than dealing with how we spawn VMs or configure a specific network device, we need to focus on interoperability among systems and services. In this paradigm, the most important role of standards is no longer avoiding lock-in (as it has historically been) but providing a high enough degree of abstraction to enable the interoperability needed for automation at scale.
More specifically, with “just enough standards” we focus on:
1. Interoperability, rather than standardization of the implementation.
2. Abstracting the requirements while allowing flexibility in how they are met (conforming to the same API is not required).
3. Minimizing the differences and providing a structure to map them in a consistent way, rather than trying to hide them.
The TOSCA project provides several good examples of “just enough standards” in action. TOSCA offers a fairly loosely coupled modeling approach that can easily be extended to fit specific project needs, as the sketch below illustrates.
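As a minimal illustration of that extensibility, here is a sketch in the TOSCA Simple Profile (YAML) that derives a project-specific node type from the normative tosca.nodes.Compute type. The acme.* names and the region property are hypothetical, used only to make the pattern concrete:

    tosca_definitions_version: tosca_simple_yaml_1_0

    node_types:
      # Hypothetical project-specific extension of the normative Compute type.
      # Generic TOSCA tooling can still treat it as a Compute node, while the
      # extra property carries the project-specific detail.
      acme.nodes.EdgeCompute:
        derived_from: tosca.nodes.Compute
        properties:
          region:
            type: string
            default: eu-west

    topology_template:
      node_templates:
        edge_host:
          type: acme.nodes.EdgeCompute
          properties:
            region: us-east

Because the derived type is still a valid Compute node, any orchestrator that understands the normative type can place and manage it; only tooling that cares about the extension needs to know about it.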
- Example 1: Multi-cloud interoperability
TOSCA takes a brokerage approach that allows interoperability without being reduced to the least common denominator. In this case, we standardize the definition of the demand but keep the “supply” open to implementation. This provides a high degree of flexibility for interoperability between a given demand and the various resources that can meet it, without forcing those resources to conform to the same API as a prerequisite (see the first sketch after these examples).
- Example 2: TOSCA / YANG
TOSCA is a specification for handling the application lifecycle in a cloud environment. YANG is a specification commonly used to define the configuration of network devices. Rather than trying to extend TOSCA or YANG to cover the parts addressed by the other language, we can integrate the two and keep them independent of one another: we use TOSCA to create the application and manage its lifecycle, and YANG to configure the actual device. In this way, we get the best of both worlds (see the second sketch below).
- Example 3: Service chaining
TOSCA supports interoperability between network services that run on different environments (e.g., Azure and OpenStack) as well as on different orchestration engines (e.g., ONAP and Azure ARM). In a recent project, for example, we launched two instances of Fortigate—one on OpenStack and another on Azure. We then created a service chain between those two services through Cloudify by tying the two instances together under a common TOSCA model (see the third sketch below).
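To make Example 1 concrete, here is a minimal sketch of the brokerage idea using TOSCA substitution mappings, split across two templates. The acme.nodes.Database type is a hypothetical abstract type (its definition, including the storage_gb property, is omitted for brevity); everything prefixed tosca.* is normative:

    # consumer.yaml -- the "demand": an abstract node, no implementation details.
    tosca_definitions_version: tosca_simple_yaml_1_0
    topology_template:
      node_templates:
        db:
          type: acme.nodes.Database      # hypothetical abstract type
          properties:
            storage_gb: 50

    # provider.yaml -- one possible "supply". Any template declaring this
    # substitution mapping can stand in for the abstract node above, so an
    # OpenStack-backed supply and a managed-cloud one can coexist without
    # sharing an API.
    tosca_definitions_version: tosca_simple_yaml_1_0
    topology_template:
      substitution_mappings:
        node_type: acme.nodes.Database
      node_templates:
        db_host:
          type: tosca.nodes.Compute
        dbms:
          type: tosca.nodes.DBMS
          requirements:
            - host: db_host

The demand side never learns how the supply is implemented; matching happens at the level of the abstract type.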
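For Example 2, the sketch below shows one way to keep the two languages independent: TOSCA owns the lifecycle, and the configure step simply hands the device configuration (instance data for a YANG model, e.g. ietf-interfaces) to a script that pushes it over NETCONF. The acme.nodes.Router type and the script path are hypothetical:

    node_templates:
      edge_router:
        type: acme.nodes.Router                # hypothetical type, definition omitted
        artifacts:
          router_config:
            file: config/interfaces.xml        # instance data for a YANG model
            type: tosca.artifacts.File
        interfaces:
          Standard:                            # normative TOSCA lifecycle interface
            configure:
              # The lifecycle stays in TOSCA; the (hypothetical) script pushes
              # the YANG-modeled configuration to the device over NETCONF.
              implementation: scripts/push_netconf_config.sh
              inputs:
                device_config: { get_artifact: [ SELF, router_config ] }

Neither language had to be extended: TOSCA never models interface stanzas, and YANG never models application lifecycle.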
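And for Example 3, a sketch of the kind of common model that ties the two firewall instances together. The acme.* names are hypothetical stand-ins (in the actual project these were plugin-provided types); the chaining relationship derives from the normative ConnectsTo:

    relationship_types:
      acme.relationships.ForwardsTo:           # hypothetical chaining relationship
        derived_from: tosca.relationships.ConnectsTo

    topology_template:
      node_templates:
        fw_openstack:
          type: acme.nodes.Fortigate           # hypothetical type; placed on OpenStack
          properties:
            site: openstack
        fw_azure:
          type: acme.nodes.Fortigate           # same type, placed on Azure
          properties:
            site: azure
          requirements:
            - link:                            # hypothetical requirement on the type
                node: fw_openstack
                relationship: acme.relationships.ForwardsTo

Because both instances live in one model, the orchestrator can wire the chain across clouds as if it were a single deployment.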
The bottom line is that we need to accept that “the only constant is change.” Innovation in software can bring many good things, but we need to learn how we can eliminate the silos, guard against new ones forming, create better interoperability, and simplify operational complexity. The examples above show that by taking a programmatic approach to standards, this degree of interoperability can be achieved even today.