Deployment and Orchestration of a Kubernetes Cluster With Auto-Scaling and Auto-Healing in a Hybrid Environment Using Cloudify

At OpenStack Israel 2016, I took part in a presentation comparing several cloud orchestrators, among them Kubernetes and Cloudify. In short, I presented Kubernetes as a container-focused orchestrator and Cloudify as a more general-purpose orchestrator. The division isn't exact: Kubernetes integrates with infrastructure in many ways, and Cloudify has been dabbling in containers for several years. I was talking about core focus.





At OpenStack Austin in April, we presented the first Cloudify Kubernetes Plugin and a hybrid cloud example. It deploys a Kubernetes cluster and allows you to package Kubernetes configurations with Cloudify. The example we provided included the deployment of a regular MongoDB on a VM and a NodeJS app managed by Kubernetes.
This was a good start. It showed that it was possible to manage Kubernetes from Cloudify. However, managing Kubernetes with Cloudify is generally not a typical use case for developers. Kubernetes is successful because it makes managing containers easy.
Cloudify is good at what it does because it makes integrating various systems easy. We want Kubernetes to manage containers, and Cloudify to tie it in with other environments, including non-virtualized, stateful services or legacy apps; in other words, what we call a hybrid stack.

See the Kubernetes cluster deployment and orchestration video

The best place to start, I thought, was to have Cloudify manage the underlying infrastructure. I recently adapted the Cloudify Kubernetes Plugin to fit what I've described: a Kubernetes cluster running in an environment managed by Cloudify. It is a Cloudify Blueprint that defines the following processes:

  1. Build OpenStack Infrastructure: VMs, Security Groups, etc.
  2. Deploy a Kubernetes Cluster (one master, two or more nodes).
  3. Auto-scaling of the VMs that the nodes run on, plus the nodes themselves.
  4. Auto-healing of the VMs that the nodes run on, plus the nodes themselves.
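The topology behind steps 1 and 2 can be sketched as a blueprint fragment. This is an illustrative sketch only: it assumes the OpenStack plugin's `Server` and `SecurityGroup` node types, and the Kubernetes master/node templates below are hypothetical stand-ins for whatever types the actual plugin defines.

```yaml
# Hypothetical blueprint topology sketch -- names and types are illustrative.
node_templates:

  kubernetes_security_group:
    type: cloudify.openstack.nodes.SecurityGroup
    properties:
      rules:
        - port: 8080                 # API server port, illustrative only
          remote_ip_prefix: 0.0.0.0/0

  kubernetes_master_host:
    type: cloudify.openstack.nodes.Server
    relationships:
      - type: cloudify.relationships.connected_to
        target: kubernetes_security_group

  kubernetes_master:
    type: cloudify.nodes.SoftwareComponent    # stand-in for the plugin's master type
    relationships:
      - type: cloudify.relationships.contained_in
        target: kubernetes_master_host

  kubernetes_node_host:
    type: cloudify.openstack.nodes.Server
    instances:
      deploy: 2                               # "two or more nodes", per step 2

  kubernetes_node:
    type: cloudify.nodes.SoftwareComponent    # stand-in for the plugin's node type
    relationships:
      - type: cloudify.relationships.contained_in
        target: kubernetes_node_host
      - type: cloudify.relationships.connected_to
        target: kubernetes_master
```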

After deploying this, you have a Kubernetes cluster that can accept any configuration you send to Kubernetes. When Cloudify senses that the nodes are not sufficient for the workload, it adds VMs to OpenStack, installs the Kubernetes node software on them, and registers the new nodes with the Kubernetes master.
Example auto-scaling code:
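The snippet embedded in the original post isn't reproduced here; a hedged sketch of what such a policy looks like in Cloudify's `groups`/`policies` DSL follows. Member names, the `scale_policy_type`, and the `service_selector` metric are assumptions chosen for illustration:

```yaml
# Sketch of an auto-scaling policy -- thresholds and names are illustrative.
groups:
  scale_up_group:
    members: [kubernetes_node_host]
    policies:
      auto_scale_up:
        type: scale_policy_type              # assumed custom policy type
        properties:
          policy_operates_on_group: true
          scale_limit: 6                     # never grow past six node VMs
          scale_direction: '<'
          scale_threshold: 30                # trigger when the metric falls below 30
          service_selector: .*kubernetes_node_host.*cpu.total.user
          cooldown_time: 60                  # seconds to wait between scale events
        triggers:
          execute_scale_workflow:
            type: cloudify.policies.triggers.execute_workflow
            parameters:
              workflow: scale
              workflow_parameters:
                delta: 1                     # add one node VM at a time
                scalable_entity_name: kubernetes_node_host
```

The trigger simply hands off to the built-in `scale` workflow, so the policy itself only has to decide *when* to act.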

Example auto-healing code:
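Again, the embedded snippet isn't preserved; below is a hedged sketch of an auto-healing group using Cloudify's `host_failure` policy type, which fires the `heal` workflow when a host's metrics go silent. The member name and the monitored metric are assumptions:

```yaml
# Sketch of an auto-healing policy -- member and metric names are illustrative.
groups:
  heal_group:
    members: [kubernetes_node_host]
    policies:
      simple_autoheal_policy:
        type: cloudify.policies.types.host_failure
        properties:
          service:
            # a metric whose disappearance indicates a dead host
            - .*kubernetes_node_host.*.cpu.total.system
          interval_between_workflows: 60     # seconds before re-triggering
        triggers:
          on_failure:
            type: cloudify.policies.triggers.execute_workflow
            parameters:
              workflow: heal
              workflow_parameters:
                node_instance_id: { get_property: [SELF, node_id] }
                diagnose_value: { get_property: [SELF, diagnose] }
```

The `heal` workflow tears down the failed VM and rebuilds it along with the Kubernetes node software contained in it, which is what makes the cluster self-repairing at the infrastructure layer.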

This is a starting point for my ideal multi-cloud orchestration model: multiple orchestration technologies working side by side, across multiple environments, each focused on what it does best.
The next step in this project is to make Cloudify more aware of Kubernetes' needs. This will require new modules in Kubernetes as well as new features in the existing Cloudify infrastructure plugins.
Cloudify will react to environmental needs with feedback from technologies like Kubernetes, resulting in highly intelligent self-running code.
