Cloudify recently announced its 5.1 release, marking a major milestone in our strategy to position Cloudify up the stack and value chain as a service orchestrator. As described in the announcement:
“Cloudify 5.1 marks the rise of a new generation of the Cloudify platform, which aims to further help migrating companies solve challenges faced when adopting cloud practices for on premise infrastructure – namely by creating silo-free public cloud/cloud native environments – via out-of-the-box integrations …”
This post aims to expand on what stands behind this move and what is actually meant by ‘service-orchestration’.
The Automation Silos
The explosion of DevOps and automation tooling has caused the number of tools used by DevOps teams to skyrocket, as can be seen in the following catalyst investors report:
A separate report from DXC Technology provides an interesting explanation:
“a fairly large percentage — 66% — say their mission-critical systems are so complex they are wary of changing them. Further, 62% responded they lack a common set of tools and platforms across the organization. This creates digital islands: units working with the right technologies but independently of each other.”
This drives the need for an Orchestrator of Orchestrators: a single tool that enables interoperability and end-to-end workflow automation between independent tools. We refer to this high-level orchestrator as the ‘Orchestrator of Orchestrators’ or ‘Service Orchestration’.
The Current Approach
Most organizations recognize this challenge and depend on a dedicated team to manage all orchestration and automation platforms. Most of this work is done through custom integration scripts and frameworks. This approach is often very limited and doesn’t provide a generic solution, addressing only a small subset of mostly simple use cases.
This is where a platform approach would be more suitable, as described in the following Gartner diagram.
Using Cloudify as a Decentralized Orchestrator of Orchestrators
One of the key design concepts behind the Cloudify approach to service orchestration is known as ‘decentralized orchestration’. The goal of decentralized orchestration is to have zero impact on the way the underlying orchestration is being used.
In this context, the top-level orchestrator is used as an overlay that pulls the relevant template artifact from each orchestrator on demand. Usually this is done through Git or a URL to a zip file, as described in more detail here: Using Cloudify as a decentralised orchestrator
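As a rough sketch of this pattern, a parent blueprint can reference a child template archive by URL, so the child template keeps living in its own repository untouched. The repository URL, deployment id, and file names below are illustrative assumptions, not an exact recipe:

```yaml
# Hypothetical parent blueprint: pulls a child template on demand.
# The archive URL and ids are made up for illustration.
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://cloudify.co/spec/cloudify/5.1.0/types.yaml

node_templates:
  network_layer:
    type: cloudify.nodes.Component
    properties:
      resource_config:
        blueprint:
          # Archive fetched from Git/zip at deploy time -- the overlay
          # never changes how the underlying template is authored.
          blueprint_archive: https://github.com/example-org/network-blueprint/archive/master.zip
          main_file_name: blueprint.yaml
        deployment:
          id: network-layer-deployment
```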
In the context of Cloudify, the integration with the other orchestration domains is done using plugins and currently includes Kubernetes as well as Ansible, AWS CloudFormation, Azure ARM, and Terraform:
Cloudify Orchestrator of Orchestrators feature mapping
Now that we understand conceptually that we need an ‘Orchestrator of Orchestrators’ to manage all our automation tools, we can look into its specific requirements. Below is a list of some of the key features and their respective Cloudify implementations:
- Generic Service Component allows you to plug in any orchestrated element (regardless of the language or orchestration behind it) as yet another infrastructure resource.
- Out-of-the-box service component implementations – achieved through the out-of-the-box support for Kubernetes (OpenShift, GKE, EKS, AKS, KubeSpray), Ansible, AWS CloudFormation, Azure ARM, and Terraform. Users can also add their own custom orchestrator through either the generic REST plugin or a custom-made plugin built with the Custom Plugin framework.
- Dependencies and relationships between service components.
- Interoperability between orchestration services – achieved through capabilities, inputs, outputs, custom-plugins and context information.
- Workflow and cascading workflow – allowing users to execute nested workflow between services based on their dependency graph.
- Shared service – handles the case of a single service that multiple services depend on.
- Zero touch provisioning – achieved through shared-service relationships and shared-service workflows.
- Service Composition DSL (Domain Specific Language) – defines the syntax definition for putting together all of those services. It also includes IDE and code completion support using the JSON Schema project as outlined here.
- Managed as code through CI/CD and API – achieved through the built-in CI/CD integration plugins.
- Consistent management – achieved through the cloudify console with specific nested service topology view support that allows simple navigation between interdependent services.
- Service catalog – Self service portal for sharing and deploying pre-templated services.
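To make the composition features above concrete, here is a hedged sketch of two service components with a dependency between them, where one consumes a capability exposed by the other. The blueprint ids, deployment ids, and the `endpoint` capability name are invented for illustration:

```yaml
# Illustrative composition: an app component that depends on a shared
# database service and consumes one of its capabilities.
node_templates:
  database_service:
    type: cloudify.nodes.ServiceComponent
    properties:
      resource_config:
        blueprint:
          id: shared-db-blueprint        # assumed, pre-uploaded blueprint
        deployment:
          id: shared-db

  app_service:
    type: cloudify.nodes.ServiceComponent
    properties:
      resource_config:
        blueprint:
          id: app-blueprint              # assumed, pre-uploaded blueprint
        deployment:
          id: app
          inputs:
            # Interoperability via a capability exposed by the db deployment
            db_endpoint: { get_capability: [ shared-db, endpoint ] }
    relationships:
      # Ordering: the app is installed only after the database service,
      # and cascading workflows follow this dependency graph.
      - type: cloudify.relationships.depends_on
        target: database_service
```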
Want to really know how to remove automation tool overload? Sign up for our webinar here.
Cloudify 5.1 Key Features
Cloudify 5.1 comes with many new features, including support for Python 3, non-privileged containers, and IDE support. Below are four key areas in the release that were designed to simplify multi-cloud management and the transformation to cloud native and public cloud:
- Modernizing on-premise environments into fully automated environments, managed as code, using the same DevOps tools and practices as the public cloud.
- Supporting multiple Kubernetes clusters across on-premise and public cloud.
- Environment as a Service – out-of-the-box integration with built-in CI/CD tools and infrastructure orchestrations.
- Self-service portal and catalog services allow access to certified environments, and a new workflow widget provides better visibility into the current execution.
Below is a more detailed description of those features in 5.1. For the full list of features, refer to the release notes.
Modernizing On-Prem Environments
Many of the existing on-prem environments run on VMware. These environments are complex and often rely on VMware management tools such as vRA, which are tightly coupled with the VMware stack. Many organizations have already built homegrown frameworks to control and govern access to these environments.
However, to build a successful hybrid cloud strategy, we first need to modernize this on-prem environment to the level where it can also be fully automated and managed as code.
Cloudify 5.1 includes significant enhancements to our VMware stack, including support for vSphere 7, NSX-T, and vCloud. You can read more about our VMware support in this post.
Many of these environments run on highly secured networks that don’t have internet access and therefore cannot rely on any public cloud services. Cloudify was designed to run in such offline environments. Cloudify 5.1 includes revamped clustering with a built-in DBaaS based on Patroni, a message broker based on RabbitMQ, and a distributed file system designed to support such secured environments. At the same time, this release makes those services interchangeable when running on the public cloud. This allows users to employ the same clustering architecture in both offline environments and the public cloud, while leveraging built-in cloud services such as RDS and managed Kubernetes on the public cloud.
Multi Kubernetes Cluster Management
The use cases for multi Kubernetes clusters can be driven by different needs:
- Separating clusters between applications and teams
- Separating development from production
- Multi-cloud – supporting Kubernetes clusters on multiple clouds
- Avoiding lock-in – allowing portability between providers
- Edge / IoT – managing deployment across many distributed Kubernetes clusters on edge devices
Cloudify manages multiple Kubernetes clusters with the same approach it uses for multi-cloud, as you can see in the diagram below:
In Cloudify 5.1 we added out-of-the-box support for OpenShift, KubeSpray, GKE, EKS, and AKS, as well as support for Helm 3, alongside the existing built-in Kubernetes cluster support.
Environment as a Service
Creating an end-to-end environment requires integration with both infrastructure resources and CI/CD. Cloudify 5.1 includes built-in integration with leading orchestration and CI/CD tools, as described in the diagram below.
This integration allows users to continually update their environment just by updating the Git repository. The deployment and deployment-update operations are triggered implicitly through the integration with GitHub Actions, as described in detail in this post.
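As a hedged illustration of this Git-driven flow, a workflow along the following lines could trigger a deployment update on every push. The secret names, deployment id, and the assumption that the cfy CLI is available on the runner are all illustrative, not the exact integration shipped with 5.1:

```yaml
# Hypothetical GitHub Actions workflow: a push to the repo triggers a
# Cloudify deployment update via the cfy CLI. All names are illustrative,
# and the runner is assumed to already have the cfy CLI installed.
name: update-environment
on:
  push:
    branches: [ master ]

jobs:
  deployment-update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Point cfy at the Cloudify Manager
        run: |
          cfy profiles use ${{ secrets.CLOUDIFY_HOST }} \
            -u ${{ secrets.CLOUDIFY_USER }} -p ${{ secrets.CLOUDIFY_PASSWORD }}
      - name: Update the deployment from the repo's blueprint
        run: |
          cfy deployments update my-environment \
            --blueprint-path blueprint.yaml
```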
Self Service Portal and Many UX Enhancements
Cloudify 5.1 includes many UX enhancements. The service catalog was extended to support native service templates from other orchestrators, a Terraform topology view, and a new workflow widget.
You can read the full details in the 5.1 release notes.
Getting started with Cloudify’s ‘Orchestrator of Orchestrators’ and EaaS
Using Cloudify as an ‘Orchestrator of Orchestrators’ to run the same application (NodeJS) across multi-cloud and orchestration tools:
The following example demonstrates how we can use the Orchestrator of Orchestrators concept described earlier to deploy the same application (a Node.js web server in this specific case) on a selection of infrastructure environments.
The infrastructure can be one of the following:
- Amazon Web Services (Cloudify)
- AWS – (Terraform)
- AWS – (CloudFormation)
- Google Cloud Platform (Cloudify)
- Azure (Cloudify)
- Azure – (ARM)
- Openstack (Cloudify)
- VMware (Cloudify)
The infrastructure deployment consists of all of the essential peripherals in each infrastructure (IP address, NIC, etc.).
The second deployment consists of the chosen infrastructure, Node.js, Node.js http-server module, and a sample page.
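The two-layer structure above could be modeled roughly as follows: the application blueprint selects the infrastructure blueprint by an input and installs Node.js on top of it. The input name, blueprint ids, and the `cloudify.nodes.WebServer` placeholder type are hypothetical choices for this sketch:

```yaml
# Illustrative application layer: picks the infrastructure blueprint via
# an input, then installs the Node.js web server on top of it.
inputs:
  infra_name:
    description: Which infrastructure blueprint to deploy on
    default: aws-terraform   # e.g. aws, aws-terraform, azure-arm, gcp ...

node_templates:
  infrastructure:
    type: cloudify.nodes.Component
    properties:
      resource_config:
        blueprint:
          id: { get_input: infra_name }   # assumed, pre-uploaded blueprints
        deployment:
          id: { get_input: infra_name }

  nodejs_app:
    # Placeholder node that would install Node.js, the http-server
    # module, and the sample page on the provisioned infrastructure.
    type: cloudify.nodes.WebServer
    relationships:
      - type: cloudify.relationships.depends_on
        target: infrastructure
```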
For more specific instructions on how to run this example see the Cloudify getting started guide.
In Cloudify 5.0 and 5.0.5 we announced our first Service Composition DSL, which created the foundation for modeling a system that consists of multiple orchestration domains. In Cloudify 5.1 we deliver many out-of-the-box integrations with the most popular orchestration frameworks in each domain. In addition, we continue to improve the foundation of Cloudify itself, making it more scalable and simpler to manage by moving our own infrastructure to a cloud-native, microservices architecture.
Cloudify 5.2 will expand our EaaS vision and add a new placement policy. This will allow us to deploy services on multiple Kubernetes and non-Kubernetes clusters across multiple clouds and edge devices using a single command. The placement policy will absorb much of the complexity of dynamically matching services to target environments. This will allow users to manage complex deployment workflows that span multiple Kubernetes clusters, handle partial failures, and provide consistent visibility across all environments.
Cloudify 5.2 is also designed to be SaaS-ready. In this context, users can use a SaaS-based control panel to manage their own private environments. We will package all of this with Helm so that users can choose to run Cloudify as a managed or self-managed service. The self-service portal and Composer framework will also be enhanced significantly to meet this use case.
Your Cloudify senses should be tingling: we have started a secret project that we believe will turn the entire way we think of automation upside down. Stay tuned for updates…