In this post I’d like to share some thoughts on hybrid cloud orchestration, specifically on orchestrating a hybrid cloud with an open source stack running on top. Rather than overwhelm you with detail, I’ll focus on the general concept, based on our vision and perspective on how to manage hybrid cloud orchestration with an open source technology stack.
Orchestration is about automating and intelligently synchronizing multiple dependent processes. Our industry has seen a major shift over the past decade: basic services have become available through standard interfaces, which makes it possible to automate every component and service. Where we ultimately want to get, though, is to the point where orchestration tools are self-aware, aware of the clouds they run on, and able to consume those clouds in the most efficient way.
For example, if you’re running on OpenStack, you want to consume OpenStack resources as efficiently as possible. If you’re running on Azure, or on OpenStack and Azure together in hybrid cloud mode, you want to consume resources in the best manner each platform offers. Our goal is for the application to be self-aware and to react based on where it runs in the cloud. A typical application is multi-tier: it has multiple components, multiple virtual IP addresses, and so on, and these components form the basis of the application’s architecture. Some parts use specific networking features, for example provider networks or GRE tunnels in OpenStack.
All of these parts use various types of components and technologies which, together, construct your application. The idea is that we have multiple technologies and multiple APIs that we can leverage to build it.
Now, what if we could use all those resources, model our application once, and then apply that same model across multiple technologies? For example, if you deploy your application on OpenStack today and tomorrow want to use containers within that same application, you shouldn’t have to model the application again or migrate it manually. You should be able to consume it through the same model you already created.
Hybrid Cloud Orchestration Tools
Orchestration is the automation of high level operational activities through a series of steps, either defined explicitly (via scripting/workflow), or implicitly (via a model) to achieve an outcome.
Hybrid cloud orchestration adds yet another layer of abstraction above that provided by cloud platforms (which already abstract servers, storage, and networking). Since one of the biggest reasons for choosing a hybrid cloud architecture in the first place is the ability to use best-of-breed features from various cloud platforms, a pluggable platform is almost always preferable to a high-level abstraction that hides cloud differences and forces you to settle for the lowest common denominator of features (one of the core benefits of hybrid cloud orchestration).
There are different types of hybrid cloud orchestration tools. Infrastructure-specific orchestration focuses on the resources of a single infrastructure; OpenStack Heat is a good example, as it only orchestrates OpenStack resources. In the container world, Kubernetes is an example of a separate tool used to orchestrate container-based workloads.
Cloudify’s hybrid cloud orchestration tools can consume all of those resource types and model the application once, using various plugins for other orchestrators, containers, and any other tool in your stack. We use TOSCA to model the application, and since TOSCA is vendor agnostic, you simply describe your application and consume the various technologies underneath.
How TOSCA Works
TOSCA stands for Topology and Orchestration Specification for Cloud Applications. It works by describing every component in the architecture of your hybrid cloud application: microservices, virtual machines, applications, databases, and so on, along with their dependencies, requirements, and relationships.
Take, for example, a database contained in a virtual machine and an application that reads data from that database. In this case, you describe the topology of the application as part of the modeling process, and then you describe the workflows to deploy and undeploy it.
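To make this concrete, here is a minimal sketch of what that topology might look like in a TOSCA-style blueprint. The node and relationship type names follow the Cloudify DSL conventions but are simplified for illustration; the exact types depend on the plugins you use.

```yaml
# Illustrative topology: a database hosted on a VM, and an
# application that connects to that database.
tosca_definitions_version: cloudify_dsl_1_3

node_templates:
  host_vm:
    type: cloudify.nodes.Compute        # the virtual machine

  database:
    type: cloudify.nodes.Database
    relationships:
      # the database is contained in (hosted on) the VM
      - type: cloudify.relationships.contained_in
        target: host_vm

  web_app:
    type: cloudify.nodes.ApplicationModule
    relationships:
      # the application reads data from the database
      - type: cloudify.relationships.connected_to
        target: database
```

The `contained_in` and `connected_to` relationships are what let the orchestrator derive the correct ordering: the VM comes up first, then the database, then the application.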
These workflows can be vendor specific, or they can be generalized using the generic workflow mechanism. As part of the workflow definition, you describe how to install or uninstall the specific components in the topology. Finally, you define policies for what should happen when, for example, your database goes down or your web application needs to scale out.
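A policy of that kind might be sketched roughly as follows. This is a hypothetical fragment: the group/policy structure mirrors the TOSCA and Cloudify style, but the policy type, metric, and trigger names are assumptions, not an exact API.

```yaml
# Hypothetical scale-out policy: when the web tier's load metric
# crosses a threshold, trigger the scale workflow on that group.
# Type and property names here are illustrative only.
groups:
  web_tier:
    members: [web_app]
    policies:
      high_load:
        type: threshold_policy          # illustrative policy type
        properties:
          metric: requests_per_second
          threshold: 1000
          upper_bound: true             # fire when metric exceeds threshold
```

The point is that scaling and healing behavior is declared alongside the topology, rather than being bolted on with external monitoring scripts.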
Deploying The Application
Once the TOSCA blueprint is ready, Cloudify takes it and, using various plugins, deploys it onto the clouds of your choice: it provisions the VMs on your cloud and configures all the components. For example, if parts of your application use Ansible playbooks or Puppet modules to deploy a specific application server, you model the whole thing in the TOSCA blueprint and then use all those components to provision, configure, and manage the application.
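As a sketch of how a configuration-management step can be wired into a node's lifecycle, consider the fragment below. The interface structure follows the Cloudify DSL, but the plugin task path and the playbook path are hypothetical placeholders, not exact plugin identifiers.

```yaml
# Hypothetical sketch: an application-server node whose "create"
# lifecycle step runs an Ansible playbook through a plugin.
# The implementation path and playbook name are assumptions.
node_templates:
  app_server:
    type: cloudify.nodes.ApplicationServer
    interfaces:
      cloudify.interfaces.lifecycle:
        create:
          implementation: ansible_plugin.tasks.run   # illustrative task path
          inputs:
            playbook_path: playbooks/install_app_server.yml
```

The same pattern applies to Puppet or plain scripts: the blueprint stays the single model, and the plugin decides how each lifecycle operation is actually carried out.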
Cloudify lets you integrate all the hybrid cloud orchestration tools you already use and extend them, whether you’re working with a hybrid cloud model or with containerized and non-containerized components. If you’re working with Kubernetes, for example, you can use our Kubernetes plugin, which adds a new node type to the application. You don’t have to change the whole application model, only specific components.
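Adding a containerized component to an existing blueprint might look something like the fragment below. The node type name is illustrative of what a Kubernetes plugin could expose and should not be read as the exact plugin API; the embedded resource, however, is a standard Kubernetes Deployment manifest.

```yaml
# Illustrative sketch: a containerized frontend added as one more
# node in the same blueprint, next to VM-based nodes.
# The node type name is an assumption for illustration.
node_templates:
  frontend_container:
    type: cloudify.kubernetes.resources.Deployment   # illustrative type
    properties:
      definition:
        # a plain Kubernetes Deployment spec, embedded in the blueprint
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: frontend
        spec:
          replicas: 2
```

Only this node is new; the rest of the model, including the VM-based tiers, stays exactly as it was.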
With Cloudify and TOSCA, you can define multiple clouds in the same blueprint, for the same application, using the same engine, and yes, multiple technologies within that application, and deploy them repeatedly without changing your core code.
Further Reading and Demos
Here are some other resources, including video demos and sample blueprints that might interest you.