Migrating Pods With Containerized Applications Between Nodes In The Same Kubernetes Cluster Using Cloudify

Kubernetes is decidedly the DevOps tool of 2018 for those companies already deep in the cloud-native game. More and more we’re hearing about companies moving to a container- and Kubernetes-driven architecture. We definitely see the draw for utilizing Kubernetes in a larger role – we recently updated our Kubernetes plugin and unleashed a new Kubernetes Provider – but it’s always important for large organizations to see the bigger picture and understand that K8s is not a replacement for all other technologies.

That being said, we still have many customers that request various cloud-native use cases to see how Cloudify orchestrates them (see what I did there?) and we are more than happy to oblige. In this instance, a large service provider asked us to demonstrate how Cloudify can orchestrate the transfer of an application instance from one Kubernetes node to another in the same cluster.

Use Case

In order to show how to migrate containerized applications between nodes, our team decided to use Kubernetes running on OpenStack, since it allows users to pin an application's deployment to a specific node. The idea is to move multiple pods, each running a containerized application, from one node to a second node within the Kubernetes cluster, without disrupting the service.
The user provides:

  • Node1 label
  • Node2 label
  • External IP address to be exposed

Cloudify deploys two pods on Node1, each with an application in a container:

  • App container in Pod1
  • DB container in Pod2

The containers are grouped and exposed as a Kubernetes Service at http://<ip-address>:8080.
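To make the setup concrete, the resources rendered for Node1 look roughly like the sketch below. This is a hand-written approximation, not the demo's actual templates: the names app-pod, demo-app, and demo-service, and the container image, are placeholders, and the DB pod (Pod2) is analogous.

    # Rough equivalent of what gets created on Node1 (placeholder names and image).
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: app-pod                  # Pod1; the DB pod (Pod2) looks the same
      labels:
        app: demo-app
    spec:
      nodeSelector:
        demo: node1                  # pins the pod to the node labeled demo=node1 (set up below)
      containers:
        - name: app
          image: example/app:latest  # placeholder application image
          ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: demo-service
    spec:
      selector:
        app: demo-app
      ports:
        - port: 8080
          targetPort: 8080
      externalIPs:
        - <ip-address>               # replace with the user-provided external IP
    EOF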

Using a custom Cloudify workflow operation, the pods, along with their containers, are then moved to Node2.

After the migration, the application is still accessible via the same Kubernetes Service at http://<ip-address>:8080.

How Cloudify does it

First, Cloudify will be used to deploy the containerized application and database pods in Node1. The steps to do this are as follows:

    1. Log into Kubernetes Master and check state before blueprint installation: At this stage, the Kubernetes cluster does not have any deployments or pods configured.

    2. Configure K8s node labels on the two cluster nodes (in this case named k-m-h and k-n-h):

        a. Node1: kubectl label nodes <node1-name> demo=node1

        b. Node2: kubectl label nodes <node2-name> demo=node2

 

    3. Configure blueprint inputs (a combined sketch of these inputs and the install command follows this list):

        a. Kubernetes configuration: Can be retrieved with the command kubectl config view --raw

        b. External IP: Used for client access; it should be one of the K8s cluster's interfaces.

    4. Install blueprint with configured Node1 Kubernetes Deployment: cfy install blueprint.yaml -b demo

        a. App and DB are deployed on Node1.

        b. Kubernetes Service exposes application port.

    5. The resulting deployment shows two pods running on Node1 (k-m-h) of the K8s cluster, and the service is available at http://<ip-address>:8080.
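Taken together, steps 3 and 4 look roughly like the following sketch. The input names kubernetes_configuration and external_ip are illustrative, not necessarily the blueprint's real input names, which are defined in its inputs section.

    # Sketch of steps 3-4: gather the blueprint inputs, then install.
    kubectl config view --raw > kube_config.yaml      # Kubernetes configuration input
    cat > inputs.yaml <<'EOF'
    kubernetes_configuration: kube_config.yaml        # illustrative input names
    external_ip: <ip-address>
    EOF
    cfy install blueprint.yaml -b demo -i inputs.yaml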


Next, we use a workflow we call configuration_update to move the application and database pods to Node2 of the cluster. Here is how it's done:

    1. Run the following two commands. The first renders new YAML templates describing the pods with node labels pointing to Node2. The second runs the update operation from the Kubernetes plugin using the YAML templates rendered by the first call, which applies the changes to the cluster and moves the pods. Here are the calls:

        a. cfy executions start configuration_update -d demo -p '{"node_types_to_update": ["MovableKubernetesResourceTemplate"], "params": {"node_label": {"demo": "node2"}}, "configuration_node_type": "configuration_loader"}'

        b. cfy executions start configuration_update -d demo -p '{"node_types_to_update": ["MovablePod"], "params": {"node_label": {"demo": "node2"}}, "configuration_node_type": "configuration_loader"}'

    2. The resulting deployment still has two pods in the K8s cluster, but the containerized app and database pods are now on Node2 (k-n-h); Node1 has no deployments left.

    3. The Kubernetes Service is still exposed and the application remains accessible at http://<ip-address>:8080 (a quick command-line check is sketched below).
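For a quick sanity check of steps 2 and 3 from the Kubernetes Master, something like the following works (using the same placeholders as above):

    kubectl get pods -o wide          # the NODE column should now show k-n-h (Node2)
    curl http://<ip-address>:8080     # the service still answers on the same address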

A little more detail

The Cloudify Kubernetes plugin used in this demo works through the Kubernetes API. This means the update operation is actually carried out entirely by the Kubernetes scheduler: Cloudify only sends the request to the API and is never aware of the deletion of the old resources on Node1 or the creation of the new resources on Node2. As a result, service downtime is minimal, because the highly optimized Kubernetes scheduler keeps services up as much as possible, even during migration.
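To make that concrete: if the pods were managed by a Kubernetes Deployment (an assumption for illustration, with a hypothetical resource name), the plugin's update request would be roughly equivalent to patching the pod template's node selector yourself and letting Kubernetes reschedule the replicas:

    # Hypothetical manual equivalent of the plugin's update operation,
    # assuming a Deployment named "app-deployment" (illustrative name only).
    kubectl patch deployment app-deployment \
      -p '{"spec":{"template":{"spec":{"nodeSelector":{"demo":"node2"}}}}}'
    # Kubernetes terminates the old replica on Node1 and schedules a new one
    # on Node2 by itself; Cloudify only sent the API request.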

Demo time

If you would like to see this fantastic demo in action, check out the video below:

More Resources


Large Scale Microservices Whitepaper

See how large scale systems based on Microservices can be designed using TOSCA, and extended with Cloudify.

Read It >>

Hybrid Container Orchestration with Cloudify

Hybrid VNF Container Orchestration With Kubernetes and Docker Swarm Using Cloudify

Watch >>

Multi-Cloud Orchestration for Kubernetes

Look at how Cloudify enables hybrid cloud and hybrid stack orchestration together with Kubernetes.

Watch >>

