Is Networking Becoming Cool Again? Predicting the Future of Networking in 2018
Somewhere in the vicinity of 20 years ago, there was a big wave of networking innovation, spurred by the internet age. It gave birth to huge networking companies like Cisco, Juniper, and Ericsson. Then the pace of innovation largely stagnated. But the recent acquisition of Viptela by Cisco for $610M, and the even more recent acquisition of VeloCloud by VMware at an estimated (whopping) value of $1.4B, have changed the game. They are a clear sign that virtualization is setting the networking market on fire again.
While data centers were undergoing their revolutionary transformation to the cloud, the networking industry largely skipped this evolution.
Which begs the question, why?
From the Internet Age to the Cloud
Starting around 2011, a second wave of major transformation began to take shape in the networking industry, kicked off by major acquisitions such as Cisco's purchase of Tail-f and VMware's acquisition of Nicira, and leading to a world where the service and configuration model started to take center stage.
Networking, unlike compute or storage which map to a single box, is much more complex: it is composed of many services (also known as functions), which begin with the iptables rules, wifi, and network cards in your operating system, and continue into corporate services such as firewalls, load balancers, routers, DPIs, VPNs, and many more.
Virtualizing the network therefore requires virtualizing all of those functions, which makes the task significantly more complex, and slower to happen.
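To make the notion of a network "function" a bit more concrete, here is a minimal, illustrative sketch (assuming a Linux host with root privileges, iproute2, and iptables) that stands up and tears down a tiny software-only firewall inside a network namespace. The namespace name and rules are placeholders, not a production VNF.

```python
import subprocess

def run(cmd):
    """Run a shell command and fail loudly if it errors."""
    subprocess.run(cmd, shell=True, check=True)

# Create an isolated network namespace to act as a tiny, software-only "firewall function".
# The namespace name and the rules below are illustrative placeholders.
NS = "demo-vnf-fw"

run(f"ip netns add {NS}")                     # isolated network stack
run(f"ip netns exec {NS} ip link set lo up")  # bring up loopback inside it

# Apply a couple of firewall rules *inside* the namespace only.
run(f"ip netns exec {NS} iptables -A INPUT -p tcp --dport 80 -j ACCEPT")
run(f"ip netns exec {NS} iptables -A INPUT -p tcp -j DROP")

print(subprocess.check_output(
    f"ip netns exec {NS} iptables -L INPUT -n", shell=True).decode())

# Tear the whole "function" down by deleting the namespace.
run(f"ip netns del {NS}")
```

Multiply this by every firewall, load balancer, router, and DPI in a carrier network, and the scale of the virtualization problem becomes apparent.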
On top of these complexities, network services are extremely sensitive to performance. Time was needed for virtualization technologies and networking hardware to evolve and deliver acceptable performance through acceleration techniques such as CPU pinning and SR-IOV. This kind of acceleration also had to be supported by cloud providers and their cloud platforms, which took still more time.
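As a tiny illustration of what CPU pinning means at the process level (the hypervisor-level equivalent, and SR-IOV, are configured lower in the stack and are not shown), the snippet below pins the current process to a fixed set of cores using Python's standard library; Linux only, and the core IDs are arbitrary.

```python
import os

# Pin the calling process (pid 0 = self) to cores 2 and 3 so the scheduler never
# migrates it; staying on fixed cores keeps caches warm and latency predictable.
# Assumes a Linux host with at least 4 cores; the core IDs are arbitrary examples.
os.sched_setaffinity(0, {2, 3})

print("allowed CPUs:", sorted(os.sched_getaffinity(0)))
```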
So…what’s changed?
As with many transformations, a number of factors had to converge to push the industry through a major evolution. In this case, these were the factors that accelerated network virtualization:
- Competition from web-scale companies—Google, Facebook, and Amazon Web Services built their own networking infrastructure based on commodity hardware and software-driven networking. This was a critical catalyst, proving that network virtualization can be delivered at massive scale while still providing the necessary performance. Clear evidence of this can be seen in the projects and case studies presented at the Networking @Scale 2017 event, which showcased many of the open networking projects developed by these web-scale companies.
- Cost pressure on carriers—It’s no secret that carriers are facing huge cost pressure. While demand to expand their network bandwidth keeps growing, they have no way to monetize that investment. They are therefore forced to reduce their cost of operations as an immediate mitigation.
- Business disruption—Netflix, WhatsApp, and Skype provide over-the-top network services at a fraction of the cost, disrupting the core of carriers’ former revenue sources.
- Technology maturity—Virtualization technologies, coupled with hardware accelerators, now make it possible to run network services with high performance and predictable latency, just as one would with dedicated hardware.
- Market demand—Manual network management worked in a centralized and relatively static network environment. The move to cloud and multiple data centers leads to a much more dynamic network environment, in which manual practices are becoming unmanageable and costly.
- Emerging startups—New entrants have played an important role in this transformation. Unlike incumbents like Cisco and Juniper, startups play the disruption game, bringing new, software-only approaches built for the post-cloud era.
The Network Virtualization and Orchestration Market
Various reports show that the network virtualization market is expected to hit its hockey-stick inflection point in 2020, with $12.4B in NFV software, while edge computing is expected to drive faster and bigger growth, reaching $19.4B by 2023.
The cloud orchestration market is expected to grow at an even faster rate, according to the Cloud Orchestration Forecast and Multi-Cloud Management Forecast from MarketsandMarkets.
The cloud orchestration market is estimated to grow from $4,950.5 million in 2016 to $14,172.5 million by 2021, at a CAGR of 23.4 percent. The key forces driving the cloud orchestration market include growing demand for optimal resource utilization, an increasing need for self-service provisioning, and flexibility, agility, and cost-efficiency.
The multi-cloud management market size is expected to grow from $1,169.5 million in 2017 to $4,492.7 million by 2022, at a CAGR of 30.9 percent.
It’s also interesting to see the strong correlation between the multi-cloud management market and the telecommunications business, which is expected to be the biggest vertical in that segment.
Telecommunications and ITES is one of the most significant verticals in the multi-cloud management market. Multi-cloud services and solutions are used in this vertical for various on-demand services, depending on the Call Detail Records (CDRs).
The worlds of DevOps and network virtualization are converging. The same concepts that brought about the DevOps transformation in your typical enterprise are now changing the networking industry, and the same forces are at work here: the need for a leaner and more agile rollout of services, automation and orchestration at the core, software-defined everything, self-service, open source, and provisioning of on-demand and dynamic services. This is the new networking reality. The stars are aligning to bring new DevOps concepts into the world of networking, at scale and more openly.
Let’s dive into the actual evolution the market has undergone to align these stars.
The Network Virtualization Future
So far, I have discussed how the network has become cool again, sitting at the center of an entire industry's transformation towards agility, self-service, automation, and web-scale. On top of this, these transitions are largely based on open source concepts, which have already had the power to change entire industries – from the operating system, through the mobile market, and now all the way down to the networking layer.
All this didn’t happen overnight, and there was a gradual transition to make this transformation a reality.
As 2018 dawns, it’s a good time to see how the network virtualization world is evolving. As I see it, this can be broken down into three “generations”:
First generation: Network virtualization
The first stage was led by ETSI, which defined the general architecture for enabling network virtualization. The architecture was adopted by many carriers and played a key role in shaping the market, creating a common taxonomy of the key layers.
This ETSI definition introduced orchestration as a key component of the network virtualization architecture, as outlined in ETSI MANO, which keeps the orchestrator separate from the infrastructure (VIM) and from the OSS/BSS.
The challenge with first-generation virtualization is that it forced a fairly big change on how carriers operate. This was a big undertaking not just technically but also culturally.
That made the adoption of NFV extremely slow, especially by second- and third-tier carriers who couldn’t afford the investment required to make the transition.
A bigger challenge was the fact that the standardization efforts behind ETSI NFV were, for the most part, led by the same vendors who didn’t really have an incentive to drive such a big transformation, since it would ultimately cannibalize much of their own core business.
The result was that many NFV players ended up using the ETSI model mostly as a way to sell the same thing dressed up in modern technology; the actual product – and more importantly, the business model – didn’t change much to fit the cloud world.
Second generation: The move from network appliance to a network service
Second generation network virtualization is led mostly by new startups such as Meraki, Viptela, VeloCloud and Versa. These startups brought a more narrowly scoped solution for specific and common use cases such as SD-WAN and vCPE, repackaging formerly complex problems into new and exciting products that deliver an excellent user experience and are easier to consume.
The key to their success was that rather than changing the entire networking backbone as a first step, these services were offered as new, over-the-top services.
vCPE and SD-WAN solutions first target the move from manual setup and configuration of the WAN to self-service management of all network configuration.
SD-WAN self-service management of branch office networks
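To illustrate what self-service means in practice, here is a hedged sketch of onboarding a branch site through a controller's REST API instead of configuring each box by hand. The controller URL, endpoint path, payload fields, and token are hypothetical and do not correspond to any specific vendor's API.

```python
import requests

CONTROLLER = "https://sdwan-controller.example.com/api/v1"  # hypothetical controller
HEADERS = {"Authorization": "Bearer <token>"}               # placeholder credentials

# Declare the desired state of a new branch; field names are illustrative only.
branch = {
    "name": "branch-london-01",
    "wan_links": [
        {"type": "mpls", "bandwidth_mbps": 100},
        {"type": "broadband", "bandwidth_mbps": 500},
    ],
    "policies": ["voice-priority", "guest-wifi-isolated"],
}

resp = requests.post(f"{CONTROLLER}/branches", json=branch, headers=HEADERS, timeout=10)
resp.raise_for_status()
print("branch provisioned:", resp.json().get("id"))
```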
The key challenge with this approach is that we’re trading one closed-source solution for another. Most of these solutions are still fairly proprietary, and come with their own set of network functions, their own flavor of protocols, and their own management systems.
The move of network services to a software-driven model makes them accessible to applications and developers. It is expected that control of the network infrastructure will follow a similar shift towards being application controlled, just as happened with compute and storage infrastructure.
Current SD-WAN/vCPE solutions were not designed for DevOps processes. Rather, they were targeted mostly for network operators. Today we are looking towards more ecosystem partners that will drive these processes and provide dynamic management layers to enable best-of-breed networking stacks.
Third generation: Cloud native and the move from Software-Defined Networking (SDN) to Application-Defined Networking (ADN)
The move to cloud-native networking will commoditize network devices, including CPE devices, and we’re already seeing open CPE alternatives, as covered by Nikos Andrikogiannopoulos in his piece on x86-based CPE devices such as pfSense. In addition, the Open Compute Project (OCP) is driving a proposal for an Open uCPE, which a number of network vendors have already agreed to support.
Quagga, Calico, and Metaswitch’s Project Clearwater are further examples of new, mostly open-source and cloud-native network services.
In this cloud native world, the control plane moves from the device to the cloud, thus transferring some of the heavy lifting from the device to the cloud, further reducing the cost of the device itself.
This shifts the value from the network device into the control plane itself, as pointed out by Andrikogiannopoulos:
“By transferring control plane functionality in the cloud one can lower CPU/RAM requirements and do the heavy lifting on the cloud side. This SDN/NFV approach will allow faster delivery of new functionality/services at lower cost. Services like NAS storage, parental control, VPN/cloud gateways, CDN functionality, etc. Firmware upgrades can become as easy as iPhone upgrades.”
In addition to commoditization of the network devices and the move of the center of gravity to the cloud, I expect that networking will be driven by the application, and not just by network operators.
This move has already happened with compute and storage infrastructure, and it is fair to say that today most of these resources are software-driven.
I see a number of reasons why the network should follow a similar path:
- Networks should be managed as part of the application’s lifecycle. Today, when networks are managed separately from the application, we see lots of firewall rules, open ports, and load balancer rules left in place even when the application that needed them has changed or no longer exists. This needs to change.
- Likewise, the move to multi-cloud and edge computing is forcing a more dynamic and ad-hoc network environment, where it isn’t even always possible to know beforehand which networks need to be created.
The Security Case for Application Defined Networking
Network security is applied today mostly as an afterthought. Many network security products were designed to identify malicious behaviour by tracking packet flows. In addition, a central firewall can’t be the sole gatekeeper, especially as we move to multi-cloud or even more distributed environments such as edge computing.
Instead, what is needed is the ability to create micro-firewalls per application, ensuring that only the right set of services is exposed to the outside world. Such definitions would follow the application as it moves from one environment to another, and be deleted when they are no longer needed.
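A minimal sketch of the idea, assuming a hypothetical security-group API: the micro-firewall is created when the application is deployed and removed when it goes away, so no stale rules are left behind. Real implementations would target a specific cloud or container-networking API.

```python
from contextlib import contextmanager

class SecurityGroupAPI:
    """Stand-in for a real cloud / CNI security-group API (hypothetical)."""
    def create(self, app, rules):
        print(f"[net] creating micro-firewall for {app}: {rules}")
        return f"sg-{app}"
    def delete(self, sg_id):
        print(f"[net] deleting {sg_id}")

@contextmanager
def micro_firewall(api, app, rules):
    """Tie the firewall's lifecycle to the application's lifecycle."""
    sg_id = api.create(app, rules)
    try:
        yield sg_id
    finally:
        api.delete(sg_id)  # the rules disappear together with the application

# Only the ports the application actually needs are ever exposed.
with micro_firewall(SecurityGroupAPI(), "billing-svc", [{"port": 443, "proto": "tcp"}]):
    print("application deployed; only 443/tcp is reachable")
# Leaving the block (application undeployed or moved) removes the rules automatically.
```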
Application Defined Networking
I expect the move to Application Defined Networking to have a similarly disruptive effect on network operations as DevOps has had on data center management. This shift will also move power from the network operator to the application owner.
The Move to Open Networking
These three phases in network virtualization have created an industry ripe and ready to embrace more open solutions with industry-wide adoption, creating a more standardized method of operation and best practices that will enable carriers to remain relevant and adopt new technologies more quickly, as clearly indicated in the 2017 Open Source in Networking Report.
The open networking promise is more real than ever, and I will dive into how the industry is converging around new and exciting projects to help deliver real world implementations that are applicable at the scale and distribution required today from multi-cloud to edge computing.
The Open Networking Promise
Networking sits at the heart of any internet service, and therefore the fear of being locked in on the networking layer is well justified.
Carriers have already identified that risk and, in an unprecedented move, joined forces to create an open source movement – currently representing more than 50% of the global subscriber base and growing – to provide a greater degree of openness in the control plane, i.e. the orchestrator.
Cloudify has been a pioneer in driving the open source disruption in the network orchestration domain, and we continue to demonstrate our open roots by joining ONAP as a founding platinum member. As ONAP brings the leading carriers, vendors, and system integrators together under one umbrella, a much greater force has been created that holds the promise of driving the adoption of open source at an even faster pace.
ONAP Adoption by membership
We are already seeing the effect of ONAP on standards bodies which are now aligning themselves with ONAP.
*To set expectations right: while the ONAP ecosystem is already in place and working, I expect that ONAP as a product will go through the usual maturity cycle we’ve seen with other open source projects of this size, so the first release should be treated as a first milestone rather than a finished product.
Real Life Examples
Open source has always faced skepticism about whether its projects can reach the production-grade maturity needed to support large-scale services.
The best answer to that skepticism is always real-life examples. During the last quarter we’ve seen public announcements from carriers that are using Cloudify to drive their next-generation vCPE and SD-WAN:
Redefining NFV
The emergence of pure-play orchestration players such as Cloudify, and of new open source initiatives such as ONAP, has brought a new way of thinking about how to address the transformation challenges created by the first generation of NFV (see above: Network Virtualization).
I refer to one of these approaches as the “orchestration-first” approach: focus first on automating the management and configuration of the current, existing network infrastructure, then gradually introduce new virtual network services and infrastructure components.
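As a rough illustration of the "automate what you already have" step, the sketch below pushes the same command list to a set of existing (physical or virtual) devices over SSH using the paramiko library. The hostnames, credentials, and command are placeholders, and a real deployment would drive this from an orchestrator's workflows rather than an ad-hoc script.

```python
import paramiko

DEVICES = ["10.0.0.1", "10.0.0.2"]  # existing routers/CPEs (placeholder addresses)
COMMANDS = ["show version"]         # read-only command, just for the sketch

def push(host, commands, username="admin", password="secret"):
    """Open an SSH session to an existing device and run a list of commands."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, password=password, timeout=10)
    try:
        for cmd in commands:
            _, stdout, _ = client.exec_command(cmd)
            print(f"{host} $ {cmd}\n{stdout.read().decode()}")
    finally:
        client.close()

for device in DEVICES:
    push(device, COMMANDS)
```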
I’m happy to say that this is now “battle proven” and running in production.
Open vCPE / SD-WAN in production
This approach has led to many new customer wins, and recently gained top analyst recognition – including the Network Transformation Awards: Best vCPE/uCPE Enterprise Service.
Analysys Mason also recently released a report on the approach we developed and implemented at Partner Communications, titled “Partner Communications: adoption of NFV based on an ‘orchestration-first’ vCPE and SD-WAN solution”.
The most important part of this development is that it lights the path to the true promise of NFV, without being limited to narrow use cases or accepting lock-in.
It also demonstrates how existing SD-WAN vendors can deliver an open solution based on open standards and open source, and thus provide the benefits of an end-to-end, pre-packaged solution without the lock-in. And it reduces the barriers, as well as the time to market, for new vendors and system integrators looking to deliver similar solutions.
More on how third generation Open vCPE compares with second generation, closed (proprietary) vCPE and SD-WAN in an upcoming post.
The Future Belongs to the Edge
Many edge network devices, such as CPEs, set-top boxes, and wifi routers, as well as your smart home, cars, trains, airplanes, and manufacturing sensors, are turning into miniature clouds that can run standard software packaged in containers.
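A small sketch of what "miniature cloud" means in practice: an edge box pulling and running a containerized workload like any other host, here via the Docker SDK for Python. It assumes a Docker engine is already running on the device, and the image is just a stand-in workload.

```python
import docker

# Assumes a Docker engine on the edge device; the image is a placeholder workload.
client = docker.from_env()

container = client.containers.run(
    "nginx:alpine",            # stand-in for a packaged network function or app
    detach=True,
    ports={"80/tcp": 8080},    # expose the service locally on the device
    restart_policy={"Name": "unless-stopped"},
)
print("running:", container.short_id)
```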
There are many indicators that edge computing is growing fast. Gartner predicts that 8.4 billion connected “things” will be in use in 2017. Forbes predicts that edge computing will redefine enterprise infrastructure. Andreessen Horowitz describes edge computing as the end of cloud computing as we know it. And, as we noted earlier, Market Research Future predicts that the edge computing market will reach $19.4B by 2023.
Challenges Orchestrating Next-Gen Edge Platforms with Current Cloud / NFV Platforms
Managing edge devices introduces a unique set of challenges, starting from the ability to manage millions, and potentially billions, of devices. The distributed nature of edge devices across the globe introduces latency, security, and network challenges.
In addition, many of the edge devices are constrained in terms of footprint. Mobile devices (cars, planes, ships, etc) are also faced with network connectivity and bandwidth issues.
That led many companies building edge platforms, such as GE, Tesla, and others, to build their own cloud orchestration and management layers, as it was clear that existing platforms were not built to overcome those challenges.
Having said that, many of the requirements I mentioned above are fairly generic. The process for delivering a new software update to a Tesla isn’t going to be vastly different from that of a ship, vCPE, or other IoT device.
This has led to a new generation of orchestration and management solutions built to fit that model: a generic orchestration and management platform for this kind of environment.
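As a minimal sketch of what such a generic layer might do for one common task, consider rolling a software update out to a large, intermittently connected fleet. Everything here (the device list, the update call, the retry policy) is hypothetical and greatly simplified; the point is that the control loop is the same whether the device is a car, a ship, or a vCPE.

```python
import random
import time

def try_update(device, version):
    """Stand-in for pushing an update to one device; edge links fail often."""
    return random.random() > 0.3  # ~30% simulated failure (flaky satellite/cellular link)

def rollout(devices, version, max_attempts=3, backoff_s=1.0):
    """Retry each device with backoff; report what is still pending."""
    pending = set(devices)
    for attempt in range(1, max_attempts + 1):
        for device in list(pending):
            if try_update(device, version):
                pending.discard(device)
        if not pending:
            break
        time.sleep(backoff_s * attempt)  # devices may simply be offline right now
    return pending

fleet = [f"edge-{i:04d}" for i in range(50)]  # cars, ships, vCPEs -- same loop
still_pending = rollout(fleet, version="2.4.1")
print(f"updated {len(fleet) - len(still_pending)}/{len(fleet)}; "
      f"retry later: {sorted(still_pending)[:5]}")
```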
In a previous post of ours titled “The Birth of an Edge Orchestrator,” we detailed our method for overcoming the challenges posed by a specific customer that manages a mobile fleet over a satellite network.
In a follow-up post, I will describe, in more generic terms, what I believe should be the next generation edge management and orchestration platform.
Stay tuned!