Cloudify Service Binding For Kubernetes

This is the third post in a series about exploring the integration of Cloudify with Kubernetes via the Service Catalog feature. The first post explored the foundational concepts and set the stage for a project to develop a service broker for Cloudify. The second post described the architecture of the project, and provided a basic capability to list and provision services from and to a Cloudify server. The last major missing feature in the project was service binding. Service binding is the process of providing connection information for services that require it, which most do. Starting and connecting to a database service is a common example. This post describes the details of service binding and the implementation of it in the service broker project.


Service Binding Concepts

The concept is simple enough: create a resource file in Kubernetes, and it generates a request to the service broker. The service broker gets the credentials for the underlying service and returns them. After that, the binding requester on Kubernetes can use the credentials to connect. Note that the binding doesn’t actually create a connection; it just returns whatever information is necessary (e.g. credentials, a connection URL, etc.) back to Kubernetes. The binding request itself can be quite simple:

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: maria-binding
  namespace: test-ns
spec:
  instanceRef:
    name: mariadb-instance
  secretName: mariadb-credentials

The section of interest here is the spec section. There, the service instance target for the binding is specified by instanceRef/name. Also here is the secretName field: secretName names an arbitrary secret where the service catalog controller will stuff the result coming back from the service broker. Once the credentials are in the secret, they can be passed by familiar methods to containers that need them.
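Once the controller has injected the bind result into that secret, a pod can consume it through the usual mechanisms, for example via environment variables. A sketch (the image, container name, environment variable names, and secret keys are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db-client
  namespace: test-ns
spec:
  containers:
  - name: app
    image: mysql:5.7
    command: ["sleep", "infinity"]
    env:
    - name: DB_USER
      valueFrom:
        secretKeyRef:
          name: mariadb-credentials   # the secret named in the binding
          key: username
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mariadb-credentials
          key: password
```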
Now this process is indeed a simple handshake to the service broker, and a simplistic demonstration would probably just hard-code a response to avoid the pain of what the rest of this post is about. In the interest of a more useful exploration, I’ve gone a bit further and developed a configurable method of binding that is at least a step toward a real-world application. To do this real binding, I was going to need some help, and for that help I turned to HashiCorp’s Vault.

A Brief Intro To Vault

Vault is a secret store. It’s built for storing and retrieving secrets securely, managing the lifespan of stored information (lease and renewal), revoking secrets, and serving as an in-memory encryption engine. But since Cloudify already has a secret store, why Vault? Because Vault also provides the ability to host secret engines that can generate and manage secrets for services. An elegant solution for the service broker is to have Vault generate credentials when binding is requested. For example, Vault can generate credentials/users for an underlying database according to a policy that limits the lifespan of the credentials, and then delete the credentials upon lease expiration. Just what the doctor ordered.
Vault stores secrets using a path concept, similar to a conventional file system. Access to each path is controlled according to Vault credentials. If I want to store a secret in Vault, I can simply write it to a path like secrets/my-secret. Besides reading and writing secrets, Vault also uses the secret store for storing configuration for things like secret generation. To actually perform secret management functions, Vault includes the concept of a secret engine. Secret engines implement different schemes for different secret management capabilities. For example, the kv (i.e. key value) secret engine responds to read/write/delete operations by encrypting/decrypting and storing or deleting the desired key/value. On the other hand, the database secret engine generates secrets dynamically by creating them on the underlying database. When you write to a database secrets engine, you only write to configure it. When you read, you get generated credentials back.
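The distinction between a storage engine and a generating engine can be sketched with a toy model. This is not Vault’s API, just an illustration: the kv-style engine returns exactly what was written, while the database-style engine treats writes as configuration and fabricates fresh credentials on every read.

```python
import secrets

class KVEngine:
    """Toy kv-style engine: stores and returns values verbatim."""
    def __init__(self):
        self._store = {}

    def write(self, path, value):
        self._store[path] = value

    def read(self, path):
        return self._store.get(path)

class DatabaseEngine:
    """Toy database-style engine: writes only configure the engine;
    reads generate fresh, short-lived credentials."""
    def __init__(self):
        self._config = {}

    def write(self, path, value):
        # e.g. role definitions, connection configuration
        self._config[path] = value

    def read(self, path):
        # a new credential pair is minted on every read
        return {"username": "v-" + secrets.token_hex(4),
                "password": secrets.token_urlsafe(16)}
```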


In an effort not to have the project completely married to Vault, the Binder concept was born. The idea is to enable pluggable secret stores/generators. A binder is a class that implements an interface/ABC:

from abc import ABCMeta, abstractmethod

class Binder():
  __metaclass__ = ABCMeta

  @abstractmethod
  def connect(self, creds):
    """Connect to a secret engine.
       Implementation logic is optional if a lazy approach is preferred.
       Implementations should store connection information.
       No return value.
    """

  @abstractmethod
  def configure(self, config, outputs):
    """Configure a binding.
       If required by the engine, configure the target service.
       "config" is the secret engine specific configuration for the service.
       "outputs" is a dictionary of values from the Cloudify deployment
       that can be used to enrich the configuration.
       No return value.
    """

  @abstractmethod
  def get_creds(self):
    """Retrieve credentials from the engine.
       Returns a dictionary of credentials for return to K8S.
    """

In the service broker, the Vault binder is the default (and only) implementation. The reason for explaining this detail is that understanding the binder interface makes the process of setting up a service for consumption by the service broker make sense.
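To make the interface concrete, here is a toy binder together with the rough shape of how a broker would drive it during a bind request. StaticBinder and handle_bind are illustrative, not part of the project:

```python
class StaticBinder:
    """Toy binder: returns fixed credentials enriched with
    Cloudify deployment outputs, per the interface above."""
    def connect(self, creds):
        # lazy approach: just remember the connection info
        self._creds = creds

    def configure(self, config, outputs):
        self._outputs = outputs

    def get_creds(self):
        result = {"username": "demo", "password": "demo-pass"}
        # enrich the credentials with deployment outputs (e.g. the DB IP)
        result.update(self._outputs)
        return result

def handle_bind(binder, binder_creds, service_config, deployment_outputs):
    """Roughly how a bind request exercises a binder."""
    binder.connect(binder_creds)
    binder.configure(service_config, deployment_outputs)
    return binder.get_creds()
```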

Preparing A Service For Binding

In order for binding to actually work, some configuration is necessary both in the binder (a.k.a. Vault) and in the service broker itself. I’ll describe the process of configuring a MariaDB database for consumption. The work here is OpenStack oriented, but the concepts apply universally.

Step 1: OpenStack Network

The first step is to run the install workflow on the OpenStack network blueprint that is included in the examples/mariadb directory. To make your life easier with these instructions, name the deployment ‘openstack-network’.

Step 2: MariaDB Blueprint

Now that the base network has been created, we can prepare a blueprint for service broker provisioning and binding. For binding in particular, outputs need to be configured that will expose whatever deployment specific information is needed for binding. In this case, exposing the public IP of the server is sufficient. Included in the project examples is a MariaDB blueprint that exposes a single output: mysql_ip.

outputs:
  mysql_ip:
    description: Cluster Addresses
    value: { get_attribute: [ mysql_ip, floating_ip_address ] }

That output will be consumed later by the binder to provide credentials to the Kubernetes resident consumer. Name the blueprint ‘mariadb’ for the sake of this guide.

Step 3: Start and configure Vault

If you have a Vault instance running, you can use that. It is very easy to download Vault and run it yourself, especially in dev mode. If you don’t want to do that, the examples directory has a Vault blueprint that installs Vault on OpenStack and runs it in dev mode. You can install it simply on a Cloudify manager with the command cfy install -b vault -d vault vault/openstack.yaml.
To configure Vault, some utility scripts have been provided. The first, 1-enabledb, enables the database secret engine on Vault.
Next, 2-config-secret writes a special secret needed by the Vault binder to configure secret generation once the database is up and running. The value of the secret is a JSON-encoded object that includes the Vault-specific role configuration (beyond the scope of this article; see the MariaDB secret engine docs for details), plus some extras. The first extra is a special token delimited by ‘{{}}’. The binder treats any string enclosed in ‘{{}}’ as a reference into the dictionary of Cloudify outputs. The other items of interest are the special keys ‘path‘, ‘credpath‘, and ‘credoutputs‘:

  • path tells the binder where to write this configuration in Vault.
  • credpath tells the binder the secret path to read from to generate the credentials.
  • credoutputs tells the binder to add the named Cloudify outputs to the credentials.
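The ‘{{}}’ substitution described above can be sketched as follows (resolve_outputs is a hypothetical helper written for illustration, not the project’s actual code):

```python
import re

def resolve_outputs(config, outputs):
    """Replace '{{name}}' tokens in string config values with the
    named Cloudify deployment output (illustrative sketch)."""
    resolved = {}
    for key, value in config.items():
        if isinstance(value, str):
            value = re.sub(r"\{\{(\w+)\}\}",
                           lambda m: str(outputs[m.group(1)]),
                           value)
        resolved[key] = value
    return resolved
```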

Finally, 3-setrole creates the role entry in Vault. Vault roles determine the permissions on the database for the purposes of credential generation. In this example, receivers of credentials are given access to a specific test database that was created when the MariaDB blueprint was installed.

Step 4: Start the broker

Since the last post, a few additional command line arguments have been added:

  • --binder the binder to use. The default (and only) is ‘vault’
  • --binder-creds the credentials for connecting to the binder. These will be binder specific. In the case of the vault binder, the format is token@url. If using the example Vault blueprint, the token is just “root” and the url is of the form “http://<vault-ip>:8200”.
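Parsing the token@url format is straightforward; only the first ‘@’ separates token from url, since the url itself contains no ‘@’. A sketch (the function name is illustrative):

```python
def parse_binder_creds(creds):
    """Split a 'token@url' binder credential string into its parts."""
    token, url = creds.split("@", 1)
    return token, url
```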

Broker command line: python --host <cloudify_manager_ip> --port <default 80> --tenant <cloudify tenant, default 'default_tenant'> --user <cloudify user> --password <cloudify password> --binder <default 'vault'> --binder-creds <vault creds (described above)>

Step 5: Update the broker service config

When the broker starts, it creates a SQLite file, cfy.db. The blueprints table in this file is synchronized asynchronously with the Cloudify blueprint catalog. If all is well, you should see the mariadb blueprint appear in the database after a few seconds. To verify, you can run sqlite3 cfy.db 'select * from blueprints'. A log file, sbroker.log, is created and actively updated; look for errors there if mariadb can’t be found.
The only update needed for the service config database is the associated binder and binder config for the service. The example has a script to assist with this update. Run configdb cfy.db cloudify/binder/mariadb mariadb to update the database.

Deploy MariaDB and Bind to it

If you haven’t already, install the service catalog extension into your Kubernetes cluster. Once it is installed, you can point it at the Cloudify service broker. In the example, broker.yaml must first be customized with the IP address of the service broker; then run kubectl create -f broker.yaml.
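A broker registration resource is roughly of this shape (the kind and apiVersion come from the service catalog; the name and the port in the URL are illustrative placeholders for your broker’s actual address):

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
  name: cloudify-broker
spec:
  url: http://<broker-ip>:<broker-port>
```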
To deploy MariaDB, use mariadb.yaml to create the instance: run kubectl create -f mariadb.yaml. A new deployment will pop up in the Cloudify deployments view.

You can watch Cloudify for completion, or watch Kubernetes via the kubectl get -f mariadb.yaml -o yaml command, looking for the following status:

status:
  asyncOpInProgress: false
  conditions:
  - lastTransitionTime: 2018-06-29T05:53:03Z
    message: The instance was provisioned successfully
    reason: ProvisionedSuccessfully
    status: "True"
    type: Ready

Once the service is available, you can bind to it by using the binding resource in the example like so: kubectl create -f mariabinding.yaml. Completion should only take a few seconds, watch the resource in Kubernetes via kubectl get -f mariabinding.yaml -o yaml:

status:
  asyncOpInProgress: false
  conditions:
  - lastTransitionTime: 2018-06-29T06:27:09Z
    message: Injected bind result
    reason: InjectedBindResult
    status: "True"
    type: Ready

Recall that the binding defined the target secret name as mariadb-credentials. Grabbing the secret from Kubernetes shows the binding was created, both in Kubernetes and in the database itself. Run kubectl get secret mariadb-credentials -n test-ns -o yaml:

data:
  mysql_ip: MTAuMjM5LjAuODQ=
  password: QTFhLXpRVXVmVU55Qkpua01iY3E=
  username: di10b2tlbi1teS1yb2xlLWFJY0xWR2ZsRXRPNnU0T3c=

Pipe the values to base64 -d for the actual values. The binding is now available for use in Kubernetes.
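The same decoding can be done for the whole data map at once in Python (the value here is the mysql_ip from the secret above):

```python
import base64

def decode_secret_data(data):
    """Decode the base64-encoded values of a Kubernetes secret's data map."""
    return {k: base64.b64decode(v).decode("utf-8") for k, v in data.items()}

print(decode_secret_data({"mysql_ip": "MTAuMjM5LjAuODQ="}))
# → {'mysql_ip': '10.239.0.84'}
```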


This post reviewed the implementation of service binding for a Cloudify-targeted service broker, using HashiCorp Vault’s secret store and secret generation capabilities. This was one of the major remaining features for the service broker, and a necessary feature for any reasonable production implementation. Hopefully the journey has been as educational for you as it has been for me. The source is available on GitHub. Comments welcome.
