In Part 1 of this three-part series, I detailed why TOSCA is winning the open standards war for cloud application deployment and orchestration as well as the issues with extending TOSCA and how to understand which NFV specification, if any, is needed.
ETSI NFV cannot be the answer to everything NFV
Since ETSI NFV specifications are the most coherent set, why not simply embrace them, implement them, and deploy accordingly? The answer is quite simple: most of the ETSI NFV specifications were never developed with the intent of becoming normative specifications, but rather to provide an inside-out, NFV-centric view/study. While they may provide a jump-start toward a workable implementation for a greenfield (NFV) environment, it is difficult to argue that implementing them will result in an optimal implementation (even just for NFV).
This topic in itself deserves a separate blog, so for today it suffices to mention that self-imposed constraints (divorce from FCAPS, reluctance to decompose the VNFs, etc.) and a bottom-up approach have rendered the ETSI NFV information models and specifications less than optimal. Add to this the normative aspect of the specifications, and the result is an additional stovepipe (NFVO/VNFM/VIM) practically imposed on the management of networks, at a time when removing some of the existing stovepipes was needed for service agility and automation.
Even when looking solely within the new stovepipe, we can see the effect of the artificial constraints imposed. Here is one example: the modeling of Virtual Network Functions (VNFs) and Network Services (NSs). A (network) function is a (network) function is a (network) function. Both VNFs and NSs are network functions providing some network service. There is absolutely no logical reason for them to have completely different information models and, as a result, completely different descriptors, instead of one inheriting from the other.
Once one realizes that, as concepts, an NS and a VNF are similar, they can rely on the same information model. Once they rely on the same information model, they can share the same type of data modeling, the same type of lifecycle management model, and the same types of interfaces. The entire network function/service becomes a continuum, implementation is simpler, automation is more easily enabled, and service agility becomes a realistic endeavor. Perhaps it is not too late for ETSI NFV to reinvent itself, and if not, it is certainly perfectly fine for NFV-impacted open source communities to do a little better.
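The "one model" argument above can be sketched in TOSCA itself. The following is an illustrative fragment, not taken from any normative specification: the type names (`example.nodes.NetworkFunction`, `example.nodes.VNF`, `example.nodes.NS`) and properties are hypothetical, chosen only to show how both a VNF and an NS could derive from a single shared base type using TOSCA's standard `derived_from` mechanism, inheriting one set of properties, one lifecycle interface, and one descriptor shape.

```yaml
# Illustrative TOSCA node types (hypothetical names, not normative):
# both VNF and NS derive from a shared NetworkFunction base type,
# so properties, lifecycle, and interfaces are inherited, not duplicated.
node_types:

  example.nodes.NetworkFunction:
    derived_from: tosca.nodes.Root
    properties:
      vendor:
        type: string
      version:
        type: string
    interfaces:
      Standard:
        # one shared lifecycle management model for all network functions
        type: tosca.interfaces.node.lifecycle.Standard

  example.nodes.VNF:
    derived_from: example.nodes.NetworkFunction  # a VNF is a network function

  example.nodes.NS:
    derived_from: example.nodes.NetworkFunction  # an NS is also a network function
```

With this structure, an NS descriptor and a VNF descriptor differ only where they genuinely must, rather than living in two disjoint information models with parallel descriptors and interfaces.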
Another example of open source influencing standards: OSM is following, relatively closely, a good portion of the ETSI NFV specifications, and its work has already resulted in valuable feedback with regard to what needs to change, even under the same constraints mentioned above. To ETSI NFV’s credit, they are addressing that feedback. I do hope that ETSI NFV will at some point reinvent itself – by de-emphasizing the normative aspects of many of their specifications, in particular their information models, and shifting their approach from “NFV as the center of the Telco universe” to “NFV as a small, critical portion of the Telco universe.”
ONAP – the big Telco hope?
What about ONAP? Well, it is probably the “big Telco hope,” and only time (Release 2 and beyond) will tell if the hope will materialize. I say “probably” because from the Release 1 work in ONAP it is too early to tell how “the real ONAP” will look; the focus is on integrating existing open source code from different sources, driven by VoLTE and vCPE use cases, each taking a different path in order to “make it work.”
I say “hope” for several reasons, of which I will mention two: first, the fact that in its target architecture ONAP seems to take a top-down approach driven not by NFV but by operators’ requirements and vision (in particular AT&T’s); and second, the fact that ONAP is not religiously accepting any complete set of specifications, but rather pragmatically adopting and adapting individual specifications, and making long-term decisions along the way – e.g. that TOSCA will be the DSL for VNF and Network Service description.
ONAP has a unique opportunity in Release 2 to define a long-term strategy with requirements that drive a long-term architecture, which drives its information model, which drives its data model, which drives the TOSCA DSL dialect needed to meet ONAP requirements – while tactically it incrementally develops code using whatever TOSCA DSL dialect is handily available and closest to the TOSCA Simple Profile in YAML, so that the bottom-up and top-down approaches converge at some point.
But for that to happen, all contributors to ONAP must understand that ONAP is much more than NFV, and certainly much more than ETSI NFV constrains NFV to be.
The ONAP vision of automation, rapid service introduction, open standards, and an implementation that an operator can go to production with requires the bottom-up “code is king” approach to keep converging with a long-term strategy – and we cannot get there if we are solely focused on the 25% portion of the requirements and architecture that are directly NFV-impacted.
ONAP is not (ETSI) NFV: scope is much broader in ONAP
And because ONAP’s goals and scope are different than those of a standards community, it is ONAP that needs to rise to the challenge to drive open standards that serve the project, rather than being driven by standards that were developed with a specific domain-centric perspective.
In the third and final part of this series, I will discuss the dangers of TOSCA divergence, as well as where ONAP can step in to become the long-term standard. I will also give my take on where the future of TOSCA should go and whether we should even have an NFV specification.