Kubernetes is at the top of the container food chain these days, and we are seeing organizations run – or rather sprint – to migrate their services to the ubiquitous platform. Cloudify is a service orchestrator that glues together legacy, network, container, serverless, and all your other services, ensuring full control over your environments without requiring the organization to swap out its existing infrastructure.
Our goal in integrating with Kubernetes is to enable users to connect all of their workloads – whether VM-based, database-backed, serverless, or bare metal – while also making everything accessible natively as Kubernetes services. In other words, it enables integration of your cloud, legacy, and other services without the hassle and cost of migrating them.
Try the Free Multi-Cloud Kubernetes Lab!
In this post, I will dive into our various integrations with Kubernetes and discuss how they can all be tied together to extend K8s to work seamlessly with any other service as if those external services were just another Kubernetes service.
In order to support this effort, we hit four different integration points with Kubernetes:
- Cloudify Kubernetes Plugin
- Cloudify Kubernetes Provider
- Cloudify Helm Integration
- Cloudify Kubernetes Service Broker
First things first – spin up a cluster with Cloudify
Before we get into the details of our integration, it’s worth noting that we also have a simple Cloudify blueprint that spawns a K8s cluster with our Kubernetes Provider already installed. You can install the blueprint on Azure, AWS, GCP, OpenStack, and vSphere.
NOTE: This deployment is an example only. In order for this to work properly for you, you must either run the Cloudify Environment Setup blueprint first on a clean machine or ensure your environment has the minimum resources provisioned required for this example.
Cloudify Kubernetes Plugin
The original integration point with Kubernetes is the plugin. It enables users to create and delete resources on a running Kubernetes cluster – adding new nodes, orchestrating hybrid cloud scenarios that mix regular services and microservices in a single deployment, and associating pods with particular nodes in your cluster.
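To give a feel for how this looks in practice, here is a minimal blueprint sketch that deploys a pod through the plugin. This is illustrative only – the node type and property names (`cloudify.kubernetes.nodes.Master`, `cloudify.kubernetes.resources.Pod`, the `connected_to_master` relationship) follow the plugin's conventions but may vary between plugin versions, so check the plugin documentation for your release:

```yaml
# Illustrative sketch – type/property names are assumptions; verify against
# your installed version of the Cloudify Kubernetes plugin.
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - plugin:cloudify-kubernetes-plugin

node_templates:
  kubernetes_master:
    type: cloudify.kubernetes.nodes.Master
    properties:
      configuration:
        # kubeconfig content stored as a Cloudify secret (hypothetical secret name)
        file_content: { get_secret: kubernetes_config }

  nginx_pod:
    type: cloudify.kubernetes.resources.Pod
    properties:
      definition:
        # Standard Kubernetes pod manifest, embedded in the blueprint
        apiVersion: v1
        kind: Pod
        metadata:
          name: nginx
        spec:
          containers:
            - name: nginx
              image: nginx:stable
    relationships:
      - type: cloudify.kubernetes.relationships.connected_to_master
        target: kubernetes_master
```

Because the pod is just another node template, it can sit in the same deployment as VM-based or other non-Kubernetes services and be wired to them with relationships.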
Cloudify Kubernetes Provider
In December 2017 we launched the Cloudify Kubernetes Provider, a cloud provider for Kubernetes. Written in Go, the provider sits between the infrastructure and Kubernetes and lets users create the infrastructure resources Kubernetes needs, such as cluster nodes, load balancers, and volumes. Combined with TOSCA, these resources can be deployed on multiple clouds through a single provider, avoiding the need to create a new cluster per cloud.
For example, when a new Kubernetes node is created, Kubernetes asks Cloudify to provision the necessary compute, network, and storage resources. Cloudify effectively becomes your cloud provider – or multi-cloud provider (AWS, Azure, OpenStack, GCP, vSphere, bare metal, etc.). Once Cloudify provisions the resources, it hands them back to Kubernetes for consumption.
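Mechanically, this follows Kubernetes' out-of-tree cloud provider model: the kubelet is started with the standard `--cloud-provider=external` flag, and the external provider process (here, the Cloudify provider) handles node, load-balancer, and volume requests. A hedged sketch of the kubelet side – the flag is standard Kubernetes, while the drop-in file name is an illustrative choice:

```ini
# Illustrative systemd drop-in for the kubelet, e.g.
# /etc/systemd/system/kubelet.service.d/20-cloudify.conf
# --cloud-provider=external is a standard Kubernetes flag; it tells the
# kubelet to defer cloud operations to an external provider process.
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"
```

With this in place, node lifecycle calls flow to the Cloudify provider rather than to an in-tree cloud integration.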
Recently, we have added the ability to migrate an existing Kubernetes cluster to use the provider and simplified Kubernetes deployment support considerably on Azure and other clouds.
Cloudify Helm integration
About one month ago we announced our Helm integration. Helm is the Kubernetes package manager responsible for installing and managing Kubernetes applications. The Helm plugin is based on our Fabric plugin and can install any Helm chart on any given Kubernetes cluster, as well as connect it with resources or services that are already deployed.
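Since the integration is built on the Fabric plugin, the blueprint essentially runs Helm commands over SSH on a cluster node. A minimal sketch, assuming the Fabric plugin's `run_commands` task and Helm 2-era CLI syntax; the chart, release name, and inputs are placeholders:

```yaml
# Illustrative sketch – assumes a reachable cluster node with helm installed;
# chart, release name, and inputs below are hypothetical.
node_templates:
  mysql_chart:
    type: cloudify.nodes.Root
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: fabric.fabric_plugin.tasks.run_commands
          inputs:
            commands:
              - helm repo update
              - helm install stable/mysql --name my-release
            fabric_env:
              host_string: { get_input: kubernetes_master_ip }
              user: { get_input: agent_user }
              key_filename: { get_input: private_key_path }
```

Because the chart install is modeled as a lifecycle operation, it participates in the same deployment graph as any other Cloudify-managed service.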
Cloudify Kubernetes Service Broker
The final integration point with Kubernetes is the Cloudify Service Broker implementation. Built on the Open Service Broker API, it offers a catalog of external services that users can access natively from within Kubernetes. In this manner, users can list their Cloudify resources, i.e. blueprints, in the catalog and provision them with standard Kubernetes tooling (kubectl) against Cloudify Manager.
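Concretely, with the Kubernetes Service Catalog installed and the Cloudify broker registered, a Cloudify blueprint surfaces as a service class that can be provisioned with an ordinary manifest. A hedged sketch – the `servicecatalog.k8s.io/v1beta1` API is the standard Service Catalog API, while the class and plan names below are placeholders for whatever the broker exposes:

```yaml
# Illustrative sketch – assumes the Kubernetes Service Catalog is installed
# and the Cloudify Service Broker is registered; class/plan names are
# hypothetical stand-ins for a blueprint published in the catalog.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: legacy-db-instance
spec:
  clusterServiceClassExternalName: cloudify-legacy-db   # a Cloudify blueprint exposed by the broker
  clusterServicePlanExternalName: default
```

Applying this with `kubectl apply -f` asks the broker to drive the corresponding Cloudify deployment, so an external or legacy service is provisioned through exactly the same workflow as a native Kubernetes service.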
Putting it all together
Cloudify’s tight Kubernetes integration enables organizations to seamlessly connect their legacy services with their cloud native workloads and orchestrate the full lifecycle of services, all while using Kubernetes natively.
Each element plays an integral role in this integration: the blueprint automates cluster deployment, the plugin adds nodes and associates pods, the provider creates infrastructure resources, the Helm integration handles package deployment, and the service broker enables native Kubernetes provisioning of external services.
Organizations are constantly looking to keep up with the latest technologies, but are often slow to adopt them for fear of losing their investment in legacy applications and physical infrastructure. With Cloudify’s service orchestration capabilities and holistic integration with the Kubernetes ecosystem, organizations can keep their current architecture, incorporate cloud native workloads, and stagger their migration to pure cloud.
Test it yourself
You can see all of this in action yourself in a free multi-cloud lab environment, where you will be able to test drive:
- Deploying a Kubernetes cluster
- Deploying and chaining services together in a multi-cloud environment (AWS and OpenStack)
- Multi-tenancy capabilities
- Lifecycle management such as auto-healing and scaling, and more
Multi-Cloud Kubernetes Lab Environment
Real life use cases
Check out these powerful Cloudify use cases on deploying Kubernetes clusters, orchestrating complex applications, and service chaining them in multi-cloud environments: