Cloudify Meets Kubernetes – Container Management & Orchestration on Bare Metal

 


Cloudify lives at the extreme end of the “unopinionated” spectrum of application orchestration tools. Kubernetes (http://www.kubernetes.io), on the other hand, is a container orchestration system that is very opinionated. For those committed to a container-based deployment architecture, it’s a great choice, especially for supporting microservices; a good reference on that topic is http://martinfowler.com/articles/microservices.html.

For those with a heterogeneous environment, possibly including Kubernetes, Cloudify provides a solution that can orchestrate everything under one umbrella. (You can also check out the Cloudify team’s excellent talk from OpenStack Vancouver on just this topic.)





This post reviews some recent work I’ve done to provide a means for Cloudify to manage Kubernetes clusters at a high level as part of a heterogeneous environment.

Integration Overview

Perhaps the most fundamental integration with Kubernetes would be to assume an already existing cluster, and simply connect to it and issue commands.

While this is a valid use case, it was a little too basic for my taste. So the initial ambition was to use Cloudify to install a Kubernetes cluster on bare metal (or bare VMs, in my case).

The approach was to create a Cloudify plugin that defined a couple of types representing the basic components of a Kubernetes cluster: a master node and a minion node. The master host in Kubernetes is roughly equivalent to the manager host in Cloudify. Minions manage the container lifecycle on hosts across the cluster. Since Google provides handy Docker images for the various component services of Kubernetes, I used these and automated Google’s instructions for setting up a multi-node cluster.

So, in a kind of odd twist, Cloudify is orchestrating Docker containers to enable a system that orchestrates Docker containers. The main difference between Cloudify’s “normal” container orchestration and the approach described here is that each individual container isn’t a blueprint node. The blueprint nodes represent the hosts only. Once Kubernetes is up and running, it’s on its own, at least in this initial bare metal version.

Running it

This initial version only supports Ubuntu (and has only been tested on Ubuntu 14), and assumes that Docker is preinstalled and running. It also assumes that Python and apt-get are installed, and that each host has internet access and passwordless SSH and sudo set up. Grab the source at: (http://github.com/dfilppi/cfy3/kub).

This post was written for release 1.4. Edit the “barevm-blueprint” and fill in your IP addresses and SSH info.
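For orientation, the inputs you edit might look something like the fragment below. The key names here are illustrative guesses, not the plugin’s actual schema; check barevm-blueprint.yaml in the repo for the exact keys:

```yaml
# Hypothetical excerpt from barevm-blueprint.yaml -- key names are
# illustrative, not taken from the actual blueprint.
inputs:
  master_ip:
    description: IP address of the host that will run the Kubernetes master
    default: 10.0.0.10
  minion_ip:
    description: IP address of a host that will run a minion
    default: 10.0.0.11
  ssh_user:
    description: User with passwordless sudo on all hosts
    default: ubuntu
  ssh_keyfile:
    description: Private key the Fabric plugin uses to connect
    default: ~/.ssh/id_rsa
```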

Install a Cloudify CLI, and run:

cfy local install-plugins -p barevm-blueprint.yaml
cfy local init -p barevm-blueprint.yaml
cfy local execute -w install

Enjoy your new Kubernetes cluster.

Plugin Design

The initial plugin design is quite simple: it defines only two node types, master and minion, and a relationship between them. Currently, since the initial support is for bare metal, IP addresses and ports are encoded directly into the blueprint. Since the Kubernetes cluster is being treated as a separate entity, agentless orchestration is used via the Fabric plugin. This was deemed appropriate because Kubernetes provides its own container management and scaling capabilities. A nice side effect is that the blueprint can be run easily in local mode. It currently only stands up a Kubernetes cluster; it does not yet have logic to tear the cluster down or run related workflows.
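To make the design concrete, a blueprint built on this kind of plugin could look roughly like the sketch below. The type and relationship names are assumptions for illustration only; the actual plugin may name them differently:

```yaml
# Illustrative sketch -- type and relationship names are assumed,
# not taken from the actual plugin.
node_templates:
  kubernetes_master:
    type: kubernetes.Master
    properties:
      ip: { get_input: master_ip }

  kubernetes_minion:
    type: kubernetes.Minion
    properties:
      ip: { get_input: minion_ip }
    relationships:
      # The relationship hands the master's IP and port to the
      # minion's setup task.
      - type: kubernetes.minion_connected_to_master
        target: kubernetes_master
```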

Implementation Details

 

The current implementation utilizes the Cloudify Fabric (SSH) plugin, and essentially boils down to automating the steps outlined in Google’s documentation for a multi-node, Docker-based installation. The example blueprint is very concise and can be executed directly in Cloudify local mode (without starting a Cloudify manager).

IP addresses are specified in the blueprint, since the cluster is being constructed on existing running hosts (running Ubuntu 14.04). Each of the node types has a Fabric task script that is run to set up its particular kind of host (master or minion/node). The custom relationship merely passes the IP address and port from the master to the minion for use in its setup. The ssh_username and key are passed along to Fabric so it can connect, run commands, and move files.
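The wiring between a node and its Fabric task script could be expressed along these lines. The script path shown is hypothetical, and while run_script and the fabric_env keys match the Cloudify 3.x Fabric plugin’s general shape, treat the exact mapping as an assumption rather than the blueprint’s actual contents:

```yaml
# Illustrative lifecycle-operation mapping -- the script name is
# invented, and the mapping is a sketch of the Fabric plugin style,
# not the real blueprint.
  kubernetes_master:
    type: kubernetes.Master
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: fabric.fabric_plugin.tasks.run_script
          inputs:
            script_path: scripts/start_master.sh
            fabric_env:
              host_string: { get_input: master_ip }
              user: { get_input: ssh_user }
              key_filename: { get_input: ssh_keyfile }
```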

Limitations and next steps

 

This is a simple first step. Over time more features and use cases will be added.

  • More OS support (CoreOS particularly)
  • Custom workflows for performing lifecycle operations on the Kubernetes cluster (basically kubectl commands wrapped as workflows)
  • Full lifecycle support (uninstall)
  • Cloud blueprint with agents to take advantage of Cloudify’s VM level auto-healing.

Kubernetes is a great system for managing containers and, to some degree, the applications in those containers. Cloudify can be used to manage Kubernetes in a blended environment of containers, virtualization, cloud platforms, and bare metal.

All of the code is available at: (https://github.com/dfilppi/cfy3/tree/master/kub), currently at release 1.4.

Comments always welcome.

 


