Why Do I Need TOSCA If I’m Using Kubernetes? Part II of II
- December 11, 2017
- Posted by: Nati Shalom
- Category: Kubernetes, TOSCA
In the first part of this series, titled “Why Do I Need * If I’m Using Kubernetes?”, I discussed why I believe the general sentiment that everything should move to Kubernetes is shortsighted and misguided.
I wanted to follow up that post with an article specifically on how we followed our own advice and took an automation/orchestration-first approach with our new Kubernetes support. TOSCA is, of course, the standard topology modeling specification used by Cloudify, and specifically ARIA, to model blueprints for orchestration. For background on that topic, you can do some more reading on TOSCA and Cloudify.
I’ll start by covering the Cloudify integration with Kubernetes.
Extending Kubernetes to the rest of the world
The integration of Cloudify and Kubernetes consists of two main elements which are completely independent of one another. Each element allows users to connect Kubernetes with a different part of the outside world.
Cloudify Integration with Kubernetes
- Infrastructure Orchestration – with the new Kubernetes multi-cloud provider, Cloudify is responsible for instantiating and managing the infrastructure and resources for Kubernetes which include the network, compute, and security resources among others.
- Service Orchestration – this plugin is responsible for orchestrating Kubernetes services alongside non-Kubernetes services under the same automation scheme. This layer is also responsible for managing multiple clusters of Kubernetes.
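To make the service orchestration layer concrete, here is a minimal blueprint sketch showing a Kubernetes workload and a plain VM-based service managed under one deployment. This is an illustrative sketch only: the node type names, the plugin import, and the `legacy_vm`/`nginx_deployment` templates are assumptions for the example, not exact plugin APIs.

```yaml
# Illustrative sketch of a Cloudify (TOSCA) blueprint that orchestrates a
# Kubernetes resource alongside a non-Kubernetes VM in one deployment.
# Type and import names are assumptions, not exact plugin definitions.
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://www.getcloudify.org/spec/cloudify/4.2/types.yaml
  # The Kubernetes plugin (and a cloud plugin, e.g. AWS) would be imported here.

node_templates:

  # A classic VM-based service, orchestrated directly by Cloudify.
  legacy_vm:
    type: cloudify.nodes.Compute

  # A containerized service, delegated to an existing Kubernetes cluster.
  nginx_deployment:
    type: cloudify.kubernetes.resources.Deployment   # assumed type name
    properties:
      definition:
        apiVersion: apps/v1beta1
        kind: Deployment
        metadata:
          name: nginx
        spec:
          replicas: 2
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
                - name: nginx
                  image: nginx:stable
    relationships:
      # Start the Kubernetes workload only after the VM-based service is up.
      - type: cloudify.relationships.depends_on
        target: legacy_vm
```

The point of the sketch is the relationship: because both services are nodes in the same TOSCA topology, ordering and dependencies between Kubernetes and non-Kubernetes services fall out of the standard blueprint mechanics.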
How does TOSCA (Cloudify) fit into the Kubernetes ecosystem?
The Linux Foundation tracks the container and cloud native ecosystem under the CNCF Cloud Native Ecosystem report, and has positioned Cloudify under the provisioning category, as you can see in the diagram below.
This obviously covers only a slice of the pie, and indeed our focus in 2018 will be to use the same approach that we’ve been using with the ONAP community, namely, contributing Apache ARIA TOSCA as the open orchestration layer that will allow developers to automate Kubernetes as well as non-Kubernetes services under the same automation scheme.
Using TOSCA (Cloudify) to plug in your choice of the K8s stack
The CNCF diagram above demonstrates just how big the Kubernetes ecosystem is today and how fast it’s evolving. Many of the platforms built around Kubernetes, such as OpenShift and Cloud Foundry, ship with an opinionated stack and therefore lock you out of options such as adopting a new monitoring project; many other similar examples exist.
The benefit of the Cloudify approach is that it focuses mostly on the automation/orchestration aspect, keeping fairly close to the source and un-opinionated. This makes it easy to plug in your choice of Kubernetes UI framework, logging, networking, or any other element of the stack, and add it to the Kubernetes blueprint.
In a similar vein, one of the things that we’re currently working on is a Helm plugin. The general idea with the Helm plugin is that Cloudify will not just talk to the Kubernetes API, but will also integrate with the long list of available service charts from Helm, as described in the diagram below.
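Since the Helm plugin is still a work in progress, here is a purely hypothetical sketch of what a Helm-backed node template might look like in a blueprint. The node type (`cloudify.nodes.helm.Release`) and its properties are invented for illustration; the eventual plugin may model this differently.

```yaml
# Hypothetical sketch: a Helm chart as a node in a Cloudify blueprint.
# The node type and property names below are illustrative only.
node_templates:
  redis_release:
    type: cloudify.nodes.helm.Release   # hypothetical node type
    properties:
      chart: stable/redis               # chart pulled from a Helm repository
      values:                           # overrides for the chart's values.yaml
        usePassword: false
```

The attraction of this shape is that anything already packaged as a Helm chart becomes one more node type in the topology, composable with non-Kubernetes nodes through ordinary TOSCA relationships.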
Another integration made possible by this approach will be our integration with the native Kubernetes management UI.
Cloudify and Kubernetes in the real world
Let’s talk about real-world scenarios. Here I will cover three key use cases for using Cloudify and Kubernetes.
The first use case represents a scenario in which we’re using an external Kubernetes cluster – GKE in this specific example – to automate microservices on Kubernetes as well as services on Amazon that are packaged using an AWS AMI, through a single Cloudify manager.
Use Case: Continuous deployment across GCP, GKE, and AWS
In an effort to move quickly and with greater flexibility, a service provider customer wants to manage an existing, traditional cluster of virtual machines on AWS EC2. The services and apps running currently on AWS virtual machines are stable and production-grade. The customer does not want to containerize, re-factor or re-architect the AWS VM cluster at this time. They may do so in the future, but for now they see this EC2 VM cluster in a fairly steady state.
However, they are launching a new service soon which takes full advantage of containerization, Kubernetes, Google Kubernetes Engine (GKE), Google Container Registry (GCR), Google Container Builder (GCB) and several other GCP services.
Naturally, the customer wants the best of all worlds: the ability to manage these environments from a single, simple dashboard. So they decide to run Cloudify on Google Compute Engine and use the Cloudify AWS plugin to orchestrate their EC2 VM cluster, while simultaneously using the Cloudify GCP and Kubernetes plugins to manage their Kubernetes workloads, replica sets, load balancers, and other GKE items on Google Cloud Platform.
Additionally, the service provider customer has requirements to take runtime properties and deployment outputs from the AWS EC2 VM cluster and supply them as inputs into the GKE workloads and services. The reverse is also true: runtime properties and deployment outputs from the GKE environment need to be injected into the older AWS EC2 VM cluster. Cloudify is instrumental in bridging this gap between the clouds, acting as a “cloud broker” that allows services and functions to run seamlessly in a multi-cloud environment.
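A rough sketch of how that wiring could look at the blueprint level: the AWS deployment exposes a runtime property as an output, and the GKE deployment declares a matching input. The names (`legacy_vm`, `ec2_endpoint`, `legacy_endpoint`) are illustrative, not taken from an actual customer blueprint.

```yaml
# --- AWS-side blueprint (sketch): expose a runtime property as an output ---
outputs:
  ec2_endpoint:
    description: Address of the legacy EC2 service
    value: { get_attribute: [ legacy_vm, ip ] }

# --- GKE-side blueprint (sketch): accept that value as a deployment input ---
inputs:
  legacy_endpoint:
    description: >
      Endpoint of the EC2 service, supplied from the AWS
      deployment's outputs at deploy time
```

The orchestrator (or a small automation step around it) carries the output of one deployment into the input of the other, which is the “cloud broker” behavior described above.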
You can see this in action in this great Kubernetes Multi-Cloud (GCP and AWS) Deployment video demo:
Use Case: Cloudify extends ONAP Operations Manager services
The ONAP Operations Manager (OOM) project, which essentially consists of Kubernetes and Helm, serves as a good example of the orchestration challenge of a complex telco system: a dozen microservices interlinked to deliver network services automation.
In this use case, Cloudify extends Kubernetes and Helm, adding both the infrastructure provider that manages the resources for Kubernetes and the top-level service orchestration that allows users to connect services running on Kubernetes with non-Kubernetes services.
Use Case: Orchestrating Kubernetes at the Edge
Kubernetes is a relatively lightweight container orchestrator. The fact that it can fit onto bare-metal devices makes it suitable for handling edge devices. In this specific customer case, we used Cloudify as a central orchestrator responsible for managing many remote Kubernetes clusters that are either close to the edge or fit onto the edge device itself. This is described in more detail in an earlier post titled “The Birth of the Edge Orchestrator.”
Summary of the benefits of Cloudify for Kubernetes users
The Cloudify integration with Kubernetes is a good example of how one can use an automation-first approach to fit the multi-stack/multi-cloud reality.
The benefits of this approach are:
- Multi-Stack, Multi-Cloud – allows users to extend Kubernetes to the outside world
- End-to-End Service Automation – automate first and migrate later for faster time to market, reducing the risk of migration by moving one service at a time
- Integration with the Kubernetes ecosystem – users can choose and build their own Kubernetes stack with tools such as Helm, a management UI, etc.
- Single Pane of Glass – Providing common management UI (Cloudify) across clouds
- Security and Application Defined RBAC – Users can control who gets the rights to deploy, view, or create blueprints per tenant/user