Declarative, model-driven orchestration is an elegant and streamlined approach to solving the needs of NFV orchestration. Utilizing TOSCA as a declarative language to model network services and functions for the needs of NFV makes perfect sense, but we need to ensure old habits don’t creep in and cripple the whole NFV solution.
Hard-coding and scripting the manual step-by-step procedures of an on-boarding process is NOT orchestration but can be used as part of it. I’ve had to repeat this concept too many times recently. In this article, I will take a high-level look at why model-driven orchestration is a fundamentally more robust method than business process workflows by comparing their approaches to NFV orchestration, and offering an approach to leverage existing BPMN workflows with TOSCA declarative orchestration.
Why Telcos Need NFV Orchestration
Until very recently, communication service providers (CSPs) went through a manual and drawn-out process for on-boarding services that could take up to 18 months to make a new service available to the end user. The manual on-boarding process includes instantiating the service alongside all of the dependent components it requires, and then integrating the service with the other existing services in the CSP network.
This long and cumbersome process is a major challenge for CSPs to overcome. Network Functions Virtualization (NFV) proposes a more modern approach for offering services. NFV gives communication service providers much greater flexibility by following a software-based approach on off-the-shelf hardware. When on-boarding new services, the whole process is fully automated with the help of an orchestrator. The main advantages of NFV over the traditional approach are much greater flexibility and speed of service delivery, since the solution is software defined and, thanks to automation, much quicker to instantiate. Operators are looking to deploy new services quickly and require a much more flexible and adaptable network – one that can be rapidly and easily provisioned and activated.
To reach this degree of flexibility, the operator needs to be able to automate the entire end-to-end process, including instantiation of virtual resources, configuration of networking devices, and instantiation and configuration of software components, as well as have the ability to coordinate these with existing running processes. In practice, this means running a large set of coordinated tasks that instantiate the new service and chain it with existing operational services.
It is the role of the orchestrator to coordinate dependent and independent components when onboarding a new service. After the service is onboarded – a day zero operation – it is the orchestrator’s role to instantiate – day one operation – and then manage the day two operations of that service. Day two operations commonly include scaling the different tiers of the service, maintenance of the service, service upgrade, and others.
Declarative Model-Driven Orchestration
Model-driven orchestration helps to on-board services, instantiate them, and then run and manage day two operations with a simple yet very powerful approach. It allows creating the model of a service using TOSCA, a declarative, human-readable language.
The service model describes the service’s components, relationships between components of the service, dependencies, requirements, and capabilities of each component in the template. The model declares reusable types and uses them to construct the structure of the application. Types can represent Nodes and Relationships helping to define connections between nodes of the model.
Once the nodes of the model are defined, node types and relationship types can declare their lifecycle interfaces and bind them to implementations. The service template, in effect, represents the graph of the application, and typical application workflows are derived from that service graph.
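To make this concrete, here is a minimal sketch of a TOSCA service template in the style of the TOSCA Simple Profile in YAML. The custom type name, script paths, and node template names are illustrative, not part of any particular product:

```yaml
tosca_definitions_version: tosca_simple_yaml_1_3

node_types:
  # Hypothetical reusable type deriving from a normative TOSCA base type
  example.nodes.WebServer:
    derived_from: tosca.nodes.SoftwareComponent
    interfaces:
      Standard:                # the normative lifecycle interface
        create:
          implementation: scripts/install_web.sh   # assumed artifact
        start:
          implementation: scripts/start_web.sh     # assumed artifact

topology_template:
  node_templates:
    host:
      type: tosca.nodes.Compute
    web:
      type: example.nodes.WebServer
      requirements:
        # The 'host' requirement yields a HostedOn relationship,
        # an edge in the service graph from 'web' to 'host'
        - host: host
```

The template declares what the service consists of (a web server hosted on a compute node); the orchestrator derives the Install workflow by walking the resulting graph and invoking each node's lifecycle operations in dependency order.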
Common workflows, such as Install/Uninstall, traverse the service graph and initiate the lifecycle operations of each node while interacting with the service state. Other operational workflows, such as Scale, Heal, Upgrade, and Maintenance, query the graph described by the template and the service state, then execute a specific set of operations based on the graph and the context in which the workflow is executed. Interaction with the graph can both read information and modify the service state.
The service state enables interacting with the service graph and discovery of relationships and state of components and services. Additionally, the service state allows passing contextual properties between components, allowing transactionality and idempotency of operations.
For example, when executing the Install workflow on a service template, an OpenStack Instance is provisioned, the service state keeps information about the instance, including its credentials and connection details. When the next node of the graph is executed, it can pull contextual information from the service state to perform the next needed operation, for example, connect to the VM instance and install software.
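The attribute-passing described above can be sketched in TOSCA's intrinsic functions. In this hedged example, `get_attribute` pulls the provisioned VM's address out of the service state at run time; the `target_ip` property and node names are hypothetical:

```yaml
topology_template:
  node_templates:
    vm:
      type: tosca.nodes.Compute   # e.g. provisioned as an OpenStack instance
    app:
      type: tosca.nodes.SoftwareComponent
      properties:
        # Resolved from the service state only after 'vm' has been
        # provisioned and its runtime attributes recorded
        target_ip: { get_attribute: [ vm, private_address ] }
      requirements:
        - host: vm
```

Because the orchestrator keeps the instance's runtime attributes in the service state, the `app` node's install operation can connect to the VM without the template author hard-coding any address.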
This is in contrast to standalone workflows, such as those in BPMN, which are one-off implementations of specific, task-oriented cases. Once execution of a BPMN workflow completes, the contextual information is lost, and the workflow engine retains no knowledge of the application graph, let alone the ability to introduce changes to it. Thus, performing operations on BPMN-instantiated services requires developing new custom workflows that match only the latest state of the service.
Business Process Workflows
Business process workflows are an imperative, scripted organization of tasks performed in a specific order for implementing a use-case. A workflow allows defining a repeatable, step-by-step procedure which can represent business cases such as purchase orders, shipment fulfilment, or a process for onboarding a new service. Workflows can define repetitions of tasks, and include if statements for conditional execution.
As such, each workflow is a tailor-made implementation of a certain flow or use-case. This approach may be suitable for immutable business processes, which are static and predefined by nature, but it introduces difficulties and unnecessary risk in software-defined environments, where changes are desirable and occur frequently.
What happens when a change, such as replacing a component, is required in the process? What happens when there is a need to implement a different use-case using similar components? Either change would be cumbersome, requiring re-implementation of the entire use-case while posing fundamental challenges in integrating with existing running components.
The Object Modeling vs Printer Instruction-Sets Analogy
Other domains employ a very similar approach. TOSCA allows defining a service model that describes "WHAT" we want to achieve, instead of scripting an immutable "HOW-TO" workflow. In the manufacturing domain, CAD models are used to design physical objects. Imagine what it would take for hardware engineers to write direct instruction sets for CNC manufacturing machines, or to script printer instructions to print this article, instead of writing the text in a word processor and letting the printer driver translate the text (the model) into printer instruction sets (the lifecycle implementations of TOSCA node and relationship types) that tell the printer how to move and spread ink on a piece of paper.
Utilizing BPMN with TOSCA
TOSCA allows defining the service graph and then executing a set of operations against the declarative, model-based graph. The operations of each node rely on an external implementation. This mechanism allows choosing the implementation of each node type, which can itself be a BPMN workflow, enabling model-driven orchestration while reusing existing BPMN artifacts.
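As a hedged sketch of this hybrid approach: a node's lifecycle operation can reference a BPMN file as its implementation artifact, assuming the orchestrator (or a plugin for it) knows how to submit that artifact to a BPMN engine. The file name, process id, and input names below are purely illustrative:

```yaml
topology_template:
  node_templates:
    legacy_vnf:
      type: tosca.nodes.SoftwareComponent
      interfaces:
        Standard:
          configure:
            # Assumed: the orchestrator delegates this artifact to a
            # BPMN engine rather than executing it directly
            implementation: bpmn/configure_vnf.bpmn
            inputs:
              process_id: ConfigureVnfProcess   # hypothetical process name
```

The TOSCA model still owns the service graph and state; the BPMN workflow is demoted to what it does well, a scripted step, invoked only when the graph traversal reaches this node's configure operation.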
The model-driven approach allows a much simpler implementation for constructing and orchestrating new services. It lets users focus on the services themselves instead of the tasks that need to be performed. It also makes it easier to modify and interact with existing services: because we have the service graph and the contextual state of each node in the graph, there is no need to reprogram every single use case; we can instead interact with the service directly.