Migrating Pods with Containerized Applications Between Nodes in the Same Kubernetes Cluster Using Cloudify
For those well-versed in the cloud-native game, Kubernetes is the DevOps tool. It’s common practice these days for companies to adopt a container- and Kubernetes-driven architecture, an approach we can really get behind. In fact, in a recent episode of the Cloudify Cloud Talk podcast, we chatted about how Kubernetes meets the Cloudify world.
That said, use cases in cloud-native environments can vary widely, and we are always happy to demonstrate how Cloudify meets organizational needs. In the following use case, we were approached by a large service provider that wanted to see how Cloudify would orchestrate the transfer of an application instance, moving its pods from one Kubernetes node to another within the same cluster.
Read on to learn more about Kubernetes live migration in a real-world scenario.
Pods and Nodes in Kubernetes
Let’s start with a little terminology, to ensure a common understanding before we get into the technical examples.
What are pods in Kubernetes?
Pods are the smallest deployable units you can create and manage in Kubernetes. A pod is a group of one (or more) containers, utilizing shared network and storage resources. By nature, the contents of a pod are always co-scheduled and co-located, and they are run in a shared context.
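To make this concrete, here is a minimal sketch of a two-container pod manifest; the names and images are illustrative placeholders, not part of the use case below:

```yaml
# A minimal pod sketch: both containers are co-scheduled onto the same node
# and share the pod's network namespace, so they can talk over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # illustrative name
spec:
  containers:
    - name: app
      image: nginx:stable
    - name: helper
      image: busybox:stable
      command: ["sh", "-c", "sleep 3600"]   # keeps the helper container alive
```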
What are nodes in Kubernetes?
Workloads in Kubernetes are run by placing containers into pods, and then running them on nodes. A node can be either a physical or virtual machine, depending on the cluster. Each node contains the services necessary to run pods, and is managed by the control plane.
Typically, a cluster contains several nodes. Kubernetes node components include the kubelet, a container runtime, and the kube-proxy.
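For example, running kubectl get nodes -o wide lists each node in the cluster along with its status, internal IP, and container runtime.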
Now let’s look at a scenario for Kubernetes pods migration.
Use Case
Our team decided to use Kubernetes running on OpenStack to demonstrate how to migrate containerized applications between nodes. This solution allows users to define a specific node for deployment of an application.
The idea is to migrate pods, each running a containerized application, from one node to another within the Kubernetes cluster without disrupting the service.
Here’s how that looks:
The user provides:
- Node1 label
- Node2 label
- External IP address to be exposed
On Node1, Cloudify deploys two pods, each running a containerized application:
- App container in Pod1
- DB container in Pod2
The containers are grouped and exposed as a Kubernetes Service at http://<ip-address>:8080.
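As an illustration, a Service exposing such a group on port 8080 through a user-provided external IP could look roughly like the sketch below; the names are placeholders, and the actual blueprint may expose the service differently:

```yaml
# Sketch of a Service exposing the app pods on port 8080 via an external IP.
apiVersion: v1
kind: Service
metadata:
  name: demo-service         # illustrative name
spec:
  selector:
    app: demo-app            # matches a label on the app pod
  ports:
    - port: 8080             # reachable at http://<ip-address>:8080
      targetPort: 8080
  externalIPs:
    - <ip-address>           # the external IP supplied by the user
```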
The Kubernetes pods (with their containers) are then moved to Node2 using a custom Cloudify workflow operation.
The Cloudify Approach
To start, we use Cloudify to deploy the application and database pods on Node1:
1. Log in to the Kubernetes master and check the state prior to blueprint installation. At this stage, the Kubernetes cluster has no configured pods or deployments.
2. Configure the K8s node labels (in this case, one node is named k-m-h and the other k-n-h); the pod spec sketch after these steps shows how pods reference these labels:
   a. Node1: kubectl label nodes <node1-name> demo=node1
   b. Node2: kubectl label nodes <node2-name> demo=node2
3. Configure the blueprint inputs:
   - Kubernetes configuration: can be retrieved with the command kubectl config view --raw
   - External IP: used for client access; should be one of the K8s cluster interfaces.
4. Install the blueprint with the Kubernetes Deployment configured for Node1:
   cfy install blueprint.yaml -b demo
5. The app and DB are deployed on Node1.
6. The Kubernetes Service exposes the application port.
7. The resulting deployment shows two pods present on the K8s cluster on Node1 (k-m-h), and the service is available at http://<ip-address>:8080.
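For context, pinning a pod to a labeled node is done with a nodeSelector in the pod spec. Here is a minimal sketch of what a rendered template might look like; the name and image are illustrative, and the real specs are rendered from the blueprint's YAML templates:

```yaml
# Sketch of a pod pinned to Node1 via the demo=node1 label applied in step 2.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod              # illustrative name
  labels:
    app: demo-app
spec:
  nodeSelector:
    demo: node1              # schedules the pod onto the node carrying this label
  containers:
    - name: app
      image: nginx:stable    # illustrative image
      ports:
        - containerPort: 8080
```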
Next, we use a workflow we call configuration_update to move the application and database pods to Node2 in the cluster. Here is how it’s done:
1. Run the following two commands. The first renders new YAML templates describing pods whose node labels point to Node2 (see the sketch after this list). The second runs the update operation from the Kubernetes plugin using the YAML templates rendered by the first call, which applies the changes to the cluster and moves the pods. Here are the calls:
   a. cfy executions start configuration_update -d demo -p '{"node_types_to_update": ["MovableKubernetesResourceTemplate"], "params": {"node_label": {"demo": "node2"}}, "configuration_node_type": "configuration_loader"}'
   b. cfy executions start configuration_update -d demo -p '{"node_types_to_update": ["MovablePod"], "params": {"node_label": {"demo": "node2"}}, "configuration_node_type": "configuration_loader"}'
2. The resulting deployment leaves us with two pods present in the K8s cluster, and the containerized app and database pods are now on Node2 (k-n-h). Node1 has no deployments.
3. The Kubernetes Service is still exposed, and the application remains accessible at http://<ip-address>:8080.
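In effect, the workflow re-renders the pod templates so that the nodeSelector points at Node2's label, and the update operation applies the re-rendered templates so the pods come up on Node2. A sketch of the relevant fragment (the real templates are rendered by the blueprint):

```yaml
# The only change that matters for placement: the nodeSelector label value.
spec:
  nodeSelector:
    demo: node2              # was demo: node1, so the pods land on k-n-h
```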
Kubernetes live migration can indeed be mastered using Cloudify. For a full demo, contact us.