- July 26, 2017
- Posted by: Shay Naeh
- Category: Uncategorized
Many claim that it’s the end of cloud computing as we are familiar with it today (Peter Levine). That doesn’t mean that the cloud world will end tomorrow, but it does mean that cloud computing is going to change drastically – from a centralized cloud to many distributed, autonomous local clouds. In other words, it will move to the edge.
The motivation for such a move stems from the fact that requirements are evolving at a very fast pace – IoT, connected cars, smart cities, connected airplanes, ships, drones, etc. Lots of data is generated at the edge, and real-time decisions need to be made locally, not in one centralized place.
The central cloud still exists, but it serves as a “learning center” where data analytics is gathered from all the local edge clouds so that interesting events and applied actions can be shared among the local edge clouds.
Here is a talk that I gave on this topic at the OpenStack Day Israel 2017 event.
There are many other names for edge cloud such as cloud in a box, datacenter in a box, edge data center, uCPE as an edge cloud, cloudlets and many more to come.
Amazon (AWS) is coming out with their own version of an edge cloud enabler – Greengrass – which provides embedded Lambda serverless compute for connected local devices.
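To illustrate the idea of local compute on a connected device, here is a minimal sketch of a Lambda-style handler that makes a decision at the edge without a cloud round trip. The event shape, `TEMP_LIMIT`, and the handler logic are all illustrative assumptions, not Greengrass APIs:

```python
# Hypothetical edge handler: decide locally whether a sensor reading
# requires an action, with no call back to the central cloud.
TEMP_LIMIT = 75.0  # illustrative threshold, not a Greengrass setting

def handler(event, context=None):
    """Return an action for a sensor reading using only local logic."""
    action = "shutdown" if event["temperature"] > TEMP_LIMIT else "ok"
    return {"device": event["device"], "action": action}
```

In a real Greengrass deployment this function body would run on the local device and use the Greengrass SDK to publish its result; the sketch only shows the local-decision shape.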
More and more examples of cheap edge data centers built on ARM processors, or even Raspberry Pi, are emerging. Probably the biggest challenge now moves to the orchestration layer of the solution, where the edge clouds need to be deployed and managed.
Orchestration of an Edge Cloud raises a few challenges:
No network connectivity – autonomous operation
In many cases the edge cloud needs to operate without network connectivity to the centralized management center, whether because of long latency or connectivity problems. Decisions are made locally, and when connectivity exists the edge can talk to the centralized management center.
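This autonomous mode can be sketched as a store-and-forward buffer: the edge keeps recording events while offline and flushes them to the management center only when a link exists. The class and method names are illustrative, not a Cloudify API:

```python
from collections import deque

class StoreAndForward:
    """Illustrative edge-side buffer: accept events locally at all times,
    deliver them to the central management center only when connected."""

    def __init__(self):
        self.pending = deque()   # events recorded while (possibly) offline
        self.delivered = []      # events already sent to the center

    def record(self, event):
        # Always accept locally, even with no connectivity.
        self.pending.append(event)

    def flush(self, connected):
        # Deliver buffered events only when a link to the center exists.
        if not connected:
            return 0
        sent = len(self.pending)
        while self.pending:
            self.delivered.append(self.pending.popleft())
        return sent
```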
Mobility – a moving data center
We can’t necessarily count on a static data center. Since edge clouds may be located in moving vehicles such as airplanes, ships and cars, we need to take into account communication between a geographically moving edge cloud and one or more central data centers. Think of it as a mobile user moving between cell towers.
Scarce resources
Resources at the edge are scarce, and you can find yourself short of CPU, RAM and persistent storage. To overcome this you may want to run containers instead of VMs, with a lightweight orchestrator.
Security
Security is a crucial issue. You don’t want malware to penetrate one of your local clouds, taking it down or, worse, infecting additional local clouds or the master cloud. You need to define and enforce strict security policies for access control (who can access which resources) and for inter-cloud communication.
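A strict access-control policy can be as simple as an explicit allow-list that denies anything not granted. This is a minimal sketch; the principal and resource names are made up for illustration:

```python
# Illustrative allow-list policy: access is denied unless explicitly granted.
POLICY = {
    "edge-orchestrator":   {"local-vnf", "local-metrics"},
    "master-orchestrator": {"local-vnf", "local-metrics", "edge-config"},
}

def allowed(principal, resource):
    """Return True only if the policy explicitly grants this access."""
    return resource in POLICY.get(principal, set())
```

The design choice here is default-deny: an unknown principal, or an unlisted resource, is always refused, which is the safer posture for inter-cloud communication.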
Limited bandwidth
Bandwidth at the edge can be limited or very costly, and long RTTs (round trip times) add latency.
Data retention
Where do you keep the hundreds of millions of data points collected? Local clouds are scarce on resources. Do you apply aging techniques, roll-outs, etc.? A local edge cloud platform needs to take care of these requirements.
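One way to keep storage within a fixed budget is to bound the store both by point count and by age. This is a minimal sketch of such an aging store, assuming count-based eviction via a bounded deque plus an explicit age sweep; it is not part of any product:

```python
import time
from collections import deque

class AgingStore:
    """Illustrative bounded data store for a resource-scarce edge:
    old points are dropped both by count and by age."""

    def __init__(self, max_points, max_age_seconds):
        self.max_age = max_age_seconds
        self.points = deque(maxlen=max_points)  # count-based eviction

    def add(self, value, now=None):
        # Each point is stored with its timestamp.
        self.points.append((now if now is not None else time.time(), value))

    def evict_aged(self, now=None):
        # Drop points older than the age budget; return how many remain.
        now = now if now is not None else time.time()
        while self.points and now - self.points[0][0] > self.max_age:
            self.points.popleft()
        return len(self.points)
```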
Scale
The orchestration of many edge clouds at scale is challenging. How do you manage all the edge clouds? Moreover, how do you monitor and collect KPIs from millions of edge objects?
Self-healing
The local edge orchestrator should support self-healing, zero-human-intervention scenarios.
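The core of such a zero-touch loop is simple: probe every local service and restart the ones that fail, with no call home to the master. A minimal sketch, where `probe` and `restart` are assumed callbacks supplied by the local platform:

```python
# Illustrative self-healing pass for a local edge orchestrator.

def heal(services, probe, restart):
    """Probe every service; restart failed ones. Returns restarted names."""
    restarted = []
    for name in services:
        if not probe(name):   # health check supplied by the platform
            restart(name)     # recovery action, performed locally
            restarted.append(name)
    return restarted
```

In practice this pass would run periodically and report its actions to the master orchestrator whenever connectivity allows.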
Service composition
A service could be wholly contained in a local edge cloud, and this alone requires its own local orchestration. But what if a service spans a local cloud and a master cloud, where you deploy functions common to multiple local clouds at a central point? A service composition needs to be defined and executed, and service composition design patterns need to be created.
A Lightweight Edge Platform
A modular, lightweight, low-footprint edge platform needs to address all of the above challenges.
Figure 1 above presents a Master Orchestrator that “speaks” with a local edge orchestrator.
The local edge houses an autonomous lightweight orchestrator that utilizes the hardware resources. It can support a virtualization layer or run on a bare metal machine. It can work with a container orchestration system like Kubernetes or orchestrate the box directly.
The local edge orchestrator can support local scaling and self-healing scenarios.
Cloudify is an edge orchestrator based on ARIA and modeled according to the TOSCA standard (Topology and Orchestration Specification for Cloud Applications), which is well suited to describing topologies and relationships.
Cloudify’s edge orchestrator has a minimal footprint (memory, CPU utilization, network and storage) and doesn’t need cloud endpoint APIs the way a local OpenStack deployment would.
An Edge Orchestrator to Master Orchestrator Interaction Model
There are multiple models where an edge orchestrator can interact with a master orchestrator:
- A simple model, where the cloud edge runs a local controller but tasks are received from the master orchestrator, as shown in the upper right part of Figure 2 below.
- A delegation model, which supports autonomous execution by the Cloud Edge, as shown in the bottom right part.
While the first model is simple and can support relatively simple distributed tasks, the second model is the more interesting one and can support autonomous cloud edge operation as described earlier.
The actual communication between the master orchestrator and the edge orchestrator can utilize gRPC, which offers low latency (fast connectivity) and bi-directional communication.
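The difference between the two interaction models can be sketched with two small classes. These are illustrative stand-ins, not a real orchestrator API: in the simple model the edge only executes tasks as the master pushes them, while in the delegation model the master hands over a whole workflow that the edge can then run even if the master becomes unreachable:

```python
class SimpleEdge:
    """Model 1: the edge only executes tasks pushed one by one by the master."""

    def __init__(self):
        self.done = []

    def execute(self, task):
        self.done.append(task)

class DelegatedEdge(SimpleEdge):
    """Model 2: the master delegates an entire workflow; the edge runs it
    autonomously, with no further contact with the master required."""

    def __init__(self):
        super().__init__()
        self.workflow = []

    def delegate(self, workflow):
        # One hand-off from the master while connectivity exists.
        self.workflow = list(workflow)

    def run_autonomously(self):
        # Executes entirely locally, even if the master is now unreachable.
        for task in self.workflow:
            self.execute(task)
        return self.done
```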
A service, which is composed of multiple application modules, can be contained as a whole in a local edge cloud or can span an edge cloud and a master cloud. A good example is the vCPE/uCPE use case commonly deployed by service providers. The application modules are VNFs, such as a virtual firewall (vFW). Some VNFs are deployed at the edge (on a CPE device) and some are deployed in the SP cloud and shared among all CPEs. A service chain is created from the VNFs at the edge through the VNFs deployed in the central cloud.
From a TOSCA modeling standpoint, both scenarios – a wholly edge-contained service and a service that spans multiple clouds – are supported.
To support this, Cloudify provides a set of service design patterns. Among them is a blueprint composition pattern, where each blueprint describes a microservice and the master service blueprint defines a service composed of the individual microservice blueprints, as shown in Figure 3.
Moreover, a new microservice can be added, or an existing one modified, on the fly, in real time, and Cloudify’s TOSCA-based model and workflow execution engine will apply the changes to a running service blueprint. This is achieved thanks to Cloudify’s deployment update mechanism, which traverses an in-memory DAG (directed acyclic graph) of the nodes and node instances that make up the running deployment.
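The essential mechanics of such an update can be sketched as a dependency-ordered walk over the node graph. This is a simplified illustration of the idea, not Cloudify’s actual engine; the node names and change operations are invented:

```python
# Illustrative deployment update: walk a DAG of nodes in dependency order
# and apply each node's pending change, so dependencies are updated first.

def topo_order(dag):
    """dag: {node: [dependencies]}. Return nodes, dependencies first."""
    order, seen = [], set()

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in dag.get(node, []):
            visit(dep)       # ensure dependencies come earlier
        order.append(node)

    for node in dag:
        visit(node)
    return order

def apply_update(dag, changes):
    """Apply per-node changes in an order that respects dependencies."""
    return [(node, changes[node]) for node in topo_order(dag) if node in changes]
```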
In summary, with the proliferation of potential applications based on AR, wearable devices, IoT, connected cars, ships, drones, etc., there is an immediate need for local data processing and local real-time decisions. Local edge platforms are already emerging, and that shifts the challenge to the orchestration platform fabric, which includes the following:
- Lightweight and autonomous local edge orchestration
- A master orchestrator
- Edge to master communication
- An out-of-band monitoring fabric
- Distributed service modeling
This makes things more interesting and challenging from an orchestration perspective, and Cloudify is definitely ready to take this head on.