Kubernetes From The Inside Out
This post is part 1 of a 3-part series. You can find part 2 and part 3 linked here.
Kubernetes is famously opinionated in its architecture: most notably its container orientation, networking idioms, and declarative orchestration. As Kubernetes development has progressed, an increasing number of features have been added to accommodate and integrate with the very unopinionated (or at least diversely opinionated) world that surrounds it.
Back in late 2015, when I wrote the initial Cloudify integration for Kubernetes 1.0, Kubernetes was narrowly focused on container orchestration. In those days, efforts were devoted to instantiating and operating Kubernetes itself; in other words, orchestration "from the outside in". Those pursuits are still valid, and Cloudify continues to refine related features. However, much has changed. While container orchestration of course remains the primary focus, many integration and extension features have been added that greatly expand the possibilities for automation. Here I'm focusing on extensions that facilitate the consumption and coordination of services external to Kubernetes, from the inside out. This is the first of a series of blog posts exploring a few of the more promising integration pathways from the Cloudify perspective.
Introduction To The Service Catalog
An intriguing extension, and a natural one for Cloudify, is the service catalog. The service catalog extension gives applications running in Kubernetes access to arbitrary external services via a "service broker". A service broker is a service, running in or out of Kubernetes, that exposes a REST API to Kubernetes as defined by the Open Service Broker API.
The Open Service Broker API defines a generic means of exposing a service catalog or marketplace to Kubernetes. The service catalog concept as defined by the API supports describing service capabilities, bindings, and service variations called "plans". Plans define a particular service configuration and an optional associated cost. Once Kubernetes is linked to the broker, service and plan identifiers such as clusterServicePlanExternalName are provided to Kubernetes via the broker API, and services can be requested and inspected by an operator via kubectl commands. Architecturally, the picture is as follows:
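To make this concrete, here is a sketch of what requesting a service instance might look like with the service catalog extension installed. The field names come from the servicecatalog.k8s.io/v1beta1 API; the class and plan names shown are hypothetical examples of what a Cloudify-backed broker might advertise:

```yaml
# Request an instance of a broker-advertised service by the external
# names of its class and plan (hypothetical values shown).
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-db-instance
  namespace: default
spec:
  clusterServiceClassExternalName: cloudify-mariadb
  clusterServicePlanExternalName: default
```

Applying this manifest with kubectl create -f triggers a provision call against the broker, and kubectl get clusterserviceclasses lists what the broker has advertised.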
To implement this architecture for Cloudify, the service broker must implement the prescribed Open Service Broker northbound interface, ultimately mapping to the Cloudify REST API on the southbound side.
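The heart of that mapping is translating Cloudify blueprint listings into the catalog document the Open Service Broker API expects from its /v2/catalog endpoint. The following is a minimal sketch of that translation; the function name and input shapes are my own assumptions, while the output fields (name, id, description, bindable, tags, plans) come from the broker API's catalog schema:

```python
import uuid

def osb_catalog(blueprints, metadata):
    """Translate Cloudify blueprint listings into an Open Service Broker
    /v2/catalog response body (a sketch; input shapes are hypothetical).

    blueprints: list of dicts as returned by a blueprint listing,
                each with at least an "id" key.
    metadata:   supplementary per-blueprint info the broker keeps
                locally (tags, bindable flag, a stable service id).
    """
    services = []
    for bp in blueprints:
        extra = metadata.get(bp["id"], {})
        services.append({
            "name": bp["id"],
            # The broker API wants a stable GUID per service; derive one
            # deterministically from the blueprint id if none was stored.
            "id": extra.get("service_id",
                            str(uuid.uuid5(uuid.NAMESPACE_DNS, bp["id"]))),
            "description": bp.get("description")
                           or "Cloudify blueprint %s" % bp["id"],
            "bindable": extra.get("bindable", True),
            "tags": extra.get("tags", []),
            "plans": [{
                "name": "default",
                "id": str(uuid.uuid5(uuid.NAMESPACE_DNS,
                                     bp["id"] + "/default")),
                "description": "Default deployment of %s" % bp["id"],
            }],
        })
    return {"services": services}
```

A real broker would serve this document over HTTP; here the catalog is kept as plain data so the translation can be exercised on its own.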
Part of the intent of this blog series is not just to talk about cool stuff, but to apply it in a real-world use case. To implement a service broker for Cloudify, we need to address the "impedance mismatch" between the broker API and the Cloudify API. Ultimately, this means reconciling the different ideas each of these systems has of what a "service provider" is. To the Open Service Broker API, the service provider is a marketplace with certain semantics.
Cloudify, on the other hand, is a service orchestrator that addresses automation but is not a marketplace. While Cloudify could certainly be used as an orchestration backend for a full-featured SaaS billing and usage-tracking system, for simplicity's sake we'll limit the project to providing service instantiation and binding to Kubernetes-hosted apps. Such a use case still has value for Kubernetes application developers who want to consume external services natively, and Cloudify can still provide auto-healing, scaling, and multi-cloud goodness to those external services.
Abandoning the marketplace/commerce-related features in the implementation does not completely close the impedance mismatch discussed earlier. Services/blueprints in Cloudify are stored with less metadata than the broker API requires or desires: for example tags, permissions, and specific binding information. Since this is an exercise that doesn't involve modifying the Cloudify architecture or database, we'll need some local broker state to associate the additional info with what Cloudify provides.
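That local state can be very modest. A minimal sketch, assuming a simple JSON file keyed by blueprint id is enough for this exercise (the class and method names here are my own, not part of Cloudify or the broker API):

```python
import json
import os

class BrokerState(object):
    """Tiny sidecar store for metadata the broker API wants but Cloudify
    does not keep: tags, a bindable flag, a stable service id, etc."""

    def __init__(self, path):
        self.path = path
        self.data = {}
        if os.path.exists(path):
            with open(path) as f:
                self.data = json.load(f)

    def annotate(self, blueprint_id, **extra):
        """Attach or update supplementary metadata for a blueprint."""
        entry = self.data.setdefault(blueprint_id, {})
        entry.update(extra)
        self._save()

    def get(self, blueprint_id):
        """Return the stored metadata for a blueprint, or an empty dict."""
        return self.data.get(blueprint_id, {})

    def _save(self):
        with open(self.path, "w") as f:
            json.dump(self.data, f, indent=2)
```

The CLI utility mentioned below would simply call annotate(); the catalog-building code then merges these entries with what the Cloudify REST API reports.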
We'll write the broker in Python 2.7, which fits with Cloudify's own implementation. Ultimately it could make sense to deploy the broker as a microservice in Kubernetes, but to reduce ceremony during this effort it will be a plain Python module. The broker's service catalog will be built from REST interactions with an associated Cloudify server, and adorned with supplementary metadata via a CLI utility (for simplicity's sake). Likewise, service instantiations will be delegated to the backing Cloudify server.
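Delegating an instantiation boils down to two Cloudify REST calls: create a deployment from the blueprint, then run its install workflow. The sketch below just builds the requests rather than sending them, so the mapping is easy to see and test; the function name is my own, and the paths follow the Cloudify v3 REST API, so adjust them to your server version:

```python
import json

def provision_calls(instance_id, service_name, parameters):
    """Map a broker 'provision' request onto the two Cloudify REST calls
    that create a deployment and launch its install workflow.

    Returns (method, path, body) tuples; a real broker would send these
    to the Cloudify server with an HTTP client. In this sketch the
    service name doubles as the blueprint id.
    """
    deployment = {
        "blueprint_id": service_name,
        "inputs": parameters or {},
    }
    execution = {
        "deployment_id": instance_id,
        "workflow_id": "install",
    }
    return [
        ("PUT", "/api/v3/deployments/%s" % instance_id,
         json.dumps(deployment)),
        ("POST", "/api/v3/executions", json.dumps(execution)),
    ]
```

Deprovisioning would mirror this with an uninstall workflow followed by deployment deletion.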
Recent innovations in Kubernetes have opened the door to some interesting integrations with services beyond the cluster boundary. The Service Catalog extension is a natural fit for Cloudify, providing the ability for Kubernetes-resident apps to consume external services in a native way. Stay tuned for the next episode, where I'll describe the ins and outs of the initial implementation, along with code.