In this post I’d like to share some thoughts on managing hybrid clouds, and more specifically on running an open source stack on top of a hybrid cloud. I don’t want to overwhelm you with too much detail, so I’m going to focus on the general concept, based on our vision and perspective, of how to manage and orchestrate applications on a hybrid cloud with an open source technology stack.
Orchestration is about automation and the intelligent synchronization of multiple dependent processes. We’ve witnessed a major shift in our industry over the past decade, in which basic services have become available via standard interfaces. This change makes it possible to automate all of those components and services. But where we ultimately want to get is to the point where applications are self-aware: aware of the clouds they run on and able to consume those clouds in the most efficient way.
For example, if you’re running on OpenStack, you want to consume OpenStack resources as efficiently as possible. If you’re running on Azure, or on both OpenStack and Azure in a hybrid cloud mode, you want to consume resources in the best manner each platform offers. Our goal is for the application to be self-aware and to react based on its location in the cloud. A typical application is multi-tier: it has multiple components, multiple virtual IP addresses, and so on, all of which reside on different components that form the basis of the application’s architecture. Some parts use specific networking components, for example to enable provider networks or GRE tunnels in OpenStack.
All of these use various types of components and technologies which, together, construct your application. The idea is that we have multiple technologies and multiple APIs that we can leverage to construct the application.
Now, what if we could use all those resources, model our application once, and then use that same model with multiple technologies? For example, if you’re deploying your application on OpenStack and tomorrow you would like to use containers within that same application, you shouldn’t have to model the application again or do any manual transitioning of that application. You should be able to consume that application within the same model that you already created.
Different Types of Orchestration
There are different types of orchestration. Infrastructure-specific orchestration focuses on orchestrating the resources of a single infrastructure; OpenStack Heat is a good example, as it only orchestrates OpenStack resources. In the container world, Kubernetes is an example of a separate tool used to orchestrate container-based workloads.
Cloudify is a pure-play orchestrator, able to consume all of those types of resources and model the application once, using various plugins for other orchestrators, containers, and any other tool in your stack. We use TOSCA to model the application, and since TOSCA is vendor agnostic, you simply describe your application and consume the various technologies underneath.
How TOSCA Works
TOSCA stands for Topology and Orchestration Specification for Cloud Applications. It works by describing all the components in your cloud application’s architecture. You can describe microservices, virtual machines, applications, databases, and more, defining their dependencies, requirements, and relationships.
For example, consider a database that is contained in a virtual machine and an application that reads data from that database. You describe the topology of that application as part of the modeling process, and then you describe the workflows to deploy and undeploy it.
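As a sketch of what such a model can look like, here is a minimal TOSCA-style blueprint for that example. The node names and property values are illustrative, and a real blueprint would use the exact node types and requirements defined by the TOSCA profile or orchestrator you are targeting:

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

topology_template:
  node_templates:
    db_host:                      # the virtual machine
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB

    database:                     # contained in the VM
      type: tosca.nodes.Database
      properties:
        name: app_db
      requirements:
        - host: db_host

    web_app:                      # reads data from the database
      type: tosca.nodes.WebApplication
      requirements:
        - database_endpoint:
            node: database
            relationship: tosca.relationships.ConnectsTo
```

The key point is that the topology captures the containment (`host`) and connection (`ConnectsTo`) relationships declaratively, so the orchestrator can work out the deployment order on its own.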
These workflows can be vendor specific, or they can be generalized using the built-in workflow mechanisms. As part of the workflow definition you describe how to install or uninstall the specific components that make up the topology. Lastly, you define policies that describe what should happen when, for example, your database goes down or your web application needs to scale out.
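In Cloudify’s TOSCA-based DSL, for instance, a scale-out policy can be attached to a group of nodes along the lines of the following sketch (the type and property names here follow Cloudify’s conventions, but treat them as illustrative and check your orchestrator’s documentation for the exact schema):

```yaml
groups:
  web_tier:
    members: [web_app]

policies:
  web_scaling:
    type: cloudify.policies.scaling
    properties:
      default_instances: 2    # start with two web instances
      min_instances: 1
      max_instances: 5        # allow scaling out to five under load
    targets: [web_tier]
```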
Deploying the Application
Once you have the TOSCA blueprint ready, Cloudify takes it and, using various plugins, deploys it onto the clouds of your choice. Cloudify provisions the VMs on your cloud through those plugins and configures all of the components. For example, if parts of your application use Ansible playbooks or Puppet modules to deploy a specific application server, you model the whole thing in the TOSCA application blueprint and then use all of those components to provision, configure, and manage the application.
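As an illustration, a node in the blueprint can bind its lifecycle operations to your existing tooling. In this hypothetical sketch the lifecycle interface name follows Cloudify’s conventions, while the script paths are placeholders for whatever shell scripts or playbook wrappers you bundle with the blueprint:

```yaml
node_templates:
  app_server:
    type: cloudify.nodes.SoftwareComponent
    interfaces:
      cloudify.interfaces.lifecycle:
        create:
          # install the server with a script packaged in the blueprint
          implementation: scripts/install_app_server.sh
        configure:
          # or hand configuration off to an Ansible playbook
          implementation: scripts/run_playbook.sh
```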
Cloudify lets you integrate all the various tools that you already use and enables you to extend them, whether you are working with a hybrid cloud model or containerized and non-containerized components. If you’re working with Kubernetes, for example, you can use our Kubernetes plugin which will create a new node type as part of that application. You won’t have to change the whole model of the application, only specific components.
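For example, moving one tier onto Kubernetes might only mean changing that node’s type. In this sketch the Kubernetes node type and its `definition` property are illustrative of the plugin’s style rather than its exact schema, and the image name is a placeholder:

```yaml
node_templates:
  web_app:
    # formerly a VM-hosted node; now a Kubernetes-managed resource
    type: cloudify.kubernetes.resources.Deployment
    properties:
      definition:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: web-app
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: web
          template:
            metadata:
              labels:
                app: web
            spec:
              containers:
                - name: web
                  image: nginx:stable   # placeholder image
```

The rest of the topology, and the relationships pointing at this node, stay as they were.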
With Cloudify and TOSCA, you can define multiple clouds in the same blueprint, for the same application, using the same orchestration engine, and yes, multiple technologies within that application – and deploy them repeatedly without changing your core code.
Further Reading and Demos
Here are some other resources, including video demos and sample blueprints that might interest you.