Model-Driven ONAP Operations Manager (OOM) On-boarding with TOSCA and Cloudify
Introduction
ONAP Operations Manager (OOM) is an ONAP tool to efficiently deploy, manage, and operate the ONAP platform and its components (e.g., MSO, DCAE, SDC).
The OOM project takes a cloud-native approach to managing the ONAP cluster and relies on Kubernetes as the main platform for managing ONAP services.
In this post, we will focus on a sub-project within OOM (OOM with TOSCA and Cloudify) that uses TOSCA to manage the K8s cluster as well as the ONAP services running on top of it.
ONAP Operations Manager Architecture
TOSCA in ONAP
ONAP already uses TOSCA to manage demanding, large-scale network services.
There are many commonalities between managing network services and managing the ONAP services themselves (both run as cloud services and have dependencies, scaling, and healing requirements). In addition, ONAP itself uses TOSCA (through Cloudify) to manage one of its core components (DCAE), so it only makes sense to combine the two and leverage TOSCA to manage ONAP itself. This provides a consistent operational model for managing both NFV and ONAP services.
Managing ONAP with TOSCA/Cloudify
ONAP itself is composed of a fairly complex set of microservices that run on the Kubernetes orchestration platform. Take the ONAP MSO application, for example, which requires MariaDB (MariaDB, as well as DCAE, can also run as external services outside of K8s). In TOSCA we can write a relationship that says MSO is connected to MariaDB and express this dependency both at run time and in the topology visualization, as in the sketch below.
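To make this concrete, here is a minimal sketch of such a relationship in Cloudify's TOSCA dialect. The node names, types, and import version are illustrative, not taken from the actual OOM blueprint:

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  # Built-in Cloudify node and relationship types (version illustrative).
  - http://www.getcloudify.org/spec/cloudify/4.2/types.yaml

node_templates:

  mariadb:
    # The database MSO depends on; properties (image, ports) omitted.
    type: cloudify.nodes.DBMS

  mso:
    # connected_to orders the lifecycle (MariaDB first, then MSO) and
    # is rendered as an edge between the two nodes in the topology view.
    type: cloudify.nodes.ApplicationModule
    relationships:
      - type: cloudify.relationships.connected_to
        target: mariadb
```

The same relationship also lets MSO's lifecycle operations read MariaDB's runtime properties, such as its address, at deploy time.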
Kubernetes is also built from a set of interdependent services that can be mapped through TOSCA in a similar fashion. In this context, it made sense to map the K8s services, as well as the ONAP services, through TOSCA and thereby get a consistent operational model for automating the deployment and provisioning of both the K8s and ONAP clusters.
Managing multiple ONAP instances through a common manager
Using TOSCA/Cloudify as an independent management layer allows users to manage multiple deployments of ONAP through the same Cloudify management model.
This is useful in the following use cases:
- Managing Dev, QA, Production
- Multi-Site deployment
- Multi-Cloud deployment
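In Cloudify, a deployment is a blueprint plus a set of inputs, so a single ONAP blueprint can back all of these use cases. A hypothetical inputs section (the input names are illustrative):

```yaml
inputs:

  # Bound per deployment: e.g. one node for Dev, three or more for
  # Production, or different values per site or per cloud.
  k8s_node_count:
    type: integer
    default: 3

  openstack_flavor:
    type: string
    default: m1.large
```

Each deployment created from the blueprint is then tracked and operated independently by the same Cloudify manager.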
ONAP deployment options using Cloudify
How it works
OOM uses containers and Kubernetes as the platform to run and manage an ONAP cluster.
- Cloudify can deploy Kubernetes on any cloud, e.g. OpenStack or bare-metal.
- Cloudify can deploy ONAP services on any running Kubernetes Cluster.
In this section, I will explain how Cloudify deploys a Kubernetes cluster first and then deploys ONAP services on top.
Cloudify ONAP Architecture
The steps to deploy ONAP are as follows:
- Cloudify deploys K8S on OpenStack by provisioning one VM for the K8S master and three VMs for K8S nodes. In this step, we use TOSCA to model a Kubernetes cluster consisting of the master and nodes (see the sketch after this list). In the next step, we provision workloads on top of the K8S cluster. Here is a link to the TOSCA blueprint.
- This step can be executed independently of the previous one. It runs an ONAP blueprint to deploy ONAP services on top of a K8S cluster (an existing one, or the one provisioned in the previous step). It uses TOSCA modeling of ONAP applications as native Kubernetes microservices, together with K8S templates, to deploy ONAP workloads.
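Here is a minimal sketch of the first step, assuming the Cloudify OpenStack plugin. The plugin version, node names, and properties are illustrative, and networks, security groups, and floating IPs are omitted for brevity:

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://www.getcloudify.org/spec/cloudify/4.2/types.yaml
  # OpenStack plugin; the exact version depends on your environment.
  - http://www.getcloudify.org/spec/openstack-plugin/2.2.0/plugin.yaml

node_templates:

  k8s_master_host:
    # VM that will run the K8S master (image/flavor properties omitted).
    type: cloudify.openstack.nodes.Server

  k8s_node_host:
    # Worker VMs: three instances, matching the three K8S nodes.
    type: cloudify.openstack.nodes.Server
    capabilities:
      scalable:
        properties:
          default_instances: 3
    relationships:
      # Workers are configured after the master, so its endpoint and
      # join credentials can be handed to them during installation.
      - type: cloudify.relationships.depends_on
        target: k8s_master_host
```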
Kubernetes deployment as shown in Cloudify UI
In the above figure, we see that Cloudify successfully deployed Kubernetes: master, nodes, security groups, networks, and public IPs for exposed services. It can also scale K8S nodes based on a policy, as sketched below.
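Scaling is expressed declaratively: the worker nodes are placed in a group, and a scaling policy is attached to that group. Continuing the hypothetical sketch above:

```yaml
groups:
  k8s_nodes_group:
    members: [k8s_node_host]

policies:
  k8s_nodes_scaling_policy:
    # Built-in scaling policy type; the scale workflow can then add or
    # remove whole instances of the group on demand.
    type: cloudify.policies.scaling
    properties:
      default_instances: 3
    targets: [k8s_nodes_group]
```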
See the video demo below showing Cloudify deploying a Kubernetes cluster on OpenStack.
https://www.youtube.com/watch?v=mKmyXFc7j14
The next step is to run ONAP services on top of the provisioned Kubernetes cluster. The next figure presents a schematic diagram of the ONAP application services, and the blueprint that describes it is here.
ONAP apps TOSCA blueprint
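Conceptually, each ONAP application becomes a TOSCA node wrapping a Kubernetes resource. A minimal sketch using the Cloudify Kubernetes plugin; the type and property names follow the plugin's general shape, but the import, configuration, and manifest details are illustrative:

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://www.getcloudify.org/spec/cloudify/4.2/types.yaml
  # Cloudify Kubernetes plugin (exact import depends on your setup).
  - plugin:cloudify-kubernetes-plugin

node_templates:

  kubernetes_master:
    # Points the plugin at the K8S API, e.g. the master provisioned in
    # the previous step; connection configuration omitted.
    type: cloudify.kubernetes.nodes.Master

  mso:
    # A plain Kubernetes Deployment manifest wrapped in a TOSCA node, so
    # it gets Cloudify's lifecycle, relationships, and topology view.
    type: cloudify.kubernetes.resources.Deployment
    properties:
      definition:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: mso
        spec:
          replicas: 1
          selector:
            matchLabels: {app: mso}
          template:
            metadata:
              labels: {app: mso}
            spec:
              containers:
                - name: mso
                  image: onap/mso   # image name illustrative
    relationships:
      - type: cloudify.kubernetes.relationships.managed_by_master
        target: kubernetes_master
```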
Managing the ONAP Cluster
- You can view the ONAP microservices topology in the Cloudify UI. Moreover, if you want to use kubectl, simply point it at the K8s master (the master's address is one of the TOSCA blueprint outputs; see the sketch after this list).
- To monitor K8S through Cloudify, you have two options:
  - Extend the Cloudify Kubernetes plugin to interface with K8s
  - Write a custom Cloudify Diamond plugin to collect the metrics
- Basic health checks are also performed by the ONAP Robot app, which checks the health of the other components. That said, using TOSCA, one can add broader end-to-end tests as part of the ONAP TOSCA provisioning blueprint.
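The master address mentioned above is exposed through the blueprint's outputs section, roughly like this (the node and output names are hypothetical):

```yaml
outputs:

  kubernetes_master_ip:
    description: Public address of the K8S master; point kubectl at it.
    # Reads the runtime property of a floating-IP node at deploy time.
    value: { get_attribute: [k8s_master_floating_ip, floating_ip_address] }
```

After installation, `cfy deployments outputs <deployment-id>` prints the resolved value.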
Advanced use cases
Stateful services such as MariaDB, as well as DCAE, are often better run outside of K8s, while the rest of the ONAP services run on K8s.
Running MariaDB cluster using Cloudify
Many of the ONAP services use MariaDB and other database services. Running stateful services such as databases is often better handled outside of K8s. This example deploys a MariaDB cluster with Galera using Cloudify and is supported on AWS, Azure, OpenStack, and GCP. A fragment of the hybrid wiring is sketched below.
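Because TOSCA is not bound to K8s, the database can be modeled as ordinary VMs in the same blueprint as the K8s-hosted services. A hypothetical fragment, extending the earlier sketches (types and node names are illustrative):

```yaml
node_templates:

  mariadb_host:
    # Plain VM outside of K8s (OpenStack here; the referenced example
    # also supports AWS, Azure, and GCP).
    type: cloudify.openstack.nodes.Server

  mariadb:
    type: cloudify.nodes.DBMS
    relationships:
      - type: cloudify.relationships.contained_in
        target: mariadb_host

  mso:
    # ONAP service inside K8s, wired to the external database through an
    # ordinary TOSCA relationship (kubernetes_master as defined earlier).
    type: cloudify.kubernetes.resources.Deployment
    relationships:
      - type: cloudify.kubernetes.relationships.managed_by_master
        target: kubernetes_master
      - type: cloudify.relationships.connected_to
        target: mariadb
```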
Running DCAE through Cloudify
DCAE is already managed by Cloudify, so managing DCAE in a production-grade environment can be done simply by running a DCAE-Cloudify blueprint and pointing it at the ONAP services that run through another blueprint, for example via inputs, as sketched below.
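One simple way to do this wiring is through blueprint inputs: the DCAE blueprint declares the ONAP endpoints it needs, and the values are taken from the other deployment's outputs. All names here are hypothetical:

```yaml
inputs:

  # Endpoints published as outputs by the ONAP services deployment.
  onap_message_router_endpoint:
    type: string

  onap_aai_endpoint:
    type: string
```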
See the video demo below showing Cloudify deploying ONAP on top of Kubernetes.
https://www.youtube.com/watch?v=HQbqsQfoMjg
Summary
In this blog post we demonstrated how ONAP services can run as a cloud-native cluster using a combination of TOSCA orchestration with Cloudify driving Kubernetes.
This demonstration shows that TOSCA can be used to manage both the K8s cluster itself and the services that run on top of it.
In addition, we demonstrated how Cloudify can leverage the fact that TOSCA is not bound to K8s, allowing greater flexibility and the ability to combine services that run on K8s alongside services that run outside of K8s under the same deployment automation flow. The most common use case requiring this level of support is managing stateful services such as databases, as well as DCAE, but the approach extends to many other cases where this combination is useful, such as managing existing network services, Docker Swarm services, legacy services, or more advanced serverless services.
In the context of ONAP specifically, which uses TOSCA as the model to run NFV services, there is an even more important benefit: the ability to use the same operational model to manage both ONAP services and NFV services.
We believe this extends the scope of ONAP's contribution beyond the NFV world and helps apply the lessons of running large-scale network services to any service.
Summary of the key benefits
- Offered as part of the ONAP OOM project
- Manage the K8s cluster from within the ONAP framework, including healing and scaling
- Allow the flexibility to connect to other existing K8s clusters (such as those managed by Rancher)
- Automate the deployment of the ONAP microservices through TOSCA
- Manage the lifecycle of ONAP services, as well as heal and scale them
- Use the same TOSCA operational model for running NFV and ONAP clusters
- Leverage TOSCA relationships, inputs, etc., to create more dynamic deployments and relationships
- Provide a single pane of glass for managing K8s and non-K8s services, such as stateful services
- Mature multi-cloud support (OpenStack, K8S, VMware, etc.)
NOTE: OOM Release 1 is coming soon, and the Cloudify integration is a work in progress.