VM-less Multi-node Coordination With Cloudify


Not infrequently, a blueprint needs to execute logic that coordinates the activities of other nodes, acts essentially as a thread gate, or both. In practice, this means running some code that, beyond the install workflow, has no bearing on the runtime operation of the system. This post discusses some simple techniques that are useful for such scenarios.


VM-free Plugin Execution

It helps to get right to a concrete example to understand the problem domain. Imagine a MongoDB database blueprint. I want a blueprint deployment to run once on my network, and have other blueprints make use of it independently. A previous post of mine discusses a component that can make this connection <ref previous post>, but here I’m concerned with installing and starting a multi-node database and, when done, exposing the information needed to connect to it in the blueprint outputs.

The most natural way to achieve this in Cloudify is to have a blueprint node with a “depends_on” or “connected_to” relationship to the MongoDB VMs. In this “coordinator” node, I want to gather the IP addresses of the MongoDB servers and place them in “runtime_properties” so they can be publicized by the blueprint outputs. The problem is, I don’t want to actually start (and potentially pay for) a VM just to perform what amounts to a simple scripting task.

The solution is to create a node of type “cloudify.nodes.Compute”, with its “install_agent” property set to false. This will create a scripting environment on the Cloudify manager, but not start a VM, which is just what we’re looking for. It is important to note that if you use the script plugin, or any plugin, you must ensure that the executor is set to “central_deployment_agent”.

An example:
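A minimal sketch of such a node definition follows. The node names, script path, and relationship target are illustrative; the essential details are the “install_agent: false” property and the “central_deployment_agent” executor:

```yaml
  coordinator:
    type: cloudify.nodes.Compute
    properties:
      install_agent: false          # no VM is created and no agent installed
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          # With no host agent, the script plugin must run on the manager.
          implementation: scripts/coordinator_start.py
          executor: central_deployment_agent
    relationships:
      - type: cloudify.relationships.depends_on
        target: mongod_host
```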

Multi-Instance Information Harvesting

When relationships are executed in Cloudify, they are always between two nodes. In our MongoDB example, we might have 10 or more node instances with a relationship to our coordinator node. Recall that one of the goals of the coordinator is to create lists of IP addresses that can be published via blueprint outputs. Since relationships only operate on pairs, my relationship code will be executed once for every pair; if there are 10 “mongod” nodes, there will be 10 separate calls. This makes assembling a list somewhat annoying.

The approach I’ve taken is to have the relationship code store each connected node’s IP in a runtime property whose name combines a special prefix (e.g. “mongod_host_”) with the instance id. So, after all the relationships have executed, the “coordinator” might have runtime properties that look like:

mongod_host_14711, mongod_host_14712, mongod_host_14713, ….

Each of these would have a value equal to the IP address. Example of a relationship script setting these:
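A sketch of that relationship logic, factored as a plain function so the naming scheme is clear. The `store_target_ip` helper name and the `ip` runtime property are assumptions; in a real relationship script you would pass in values taken from the Cloudify context:

```python
def store_target_ip(source_runtime_properties, target_instance_id, target_ip):
    """Record one mongod instance's IP on the coordinator (the relationship
    source) under a key that embeds the target's unique instance id."""
    key = 'mongod_host_{0}'.format(target_instance_id)
    source_runtime_properties[key] = target_ip
    return key

# In an actual relationship script run by the script plugin, the values
# would come from the context object, along these lines:
#   from cloudify import ctx
#   store_target_ip(ctx.source.instance.runtime_properties,
#                   ctx.target.instance.id,
#                   ctx.target.instance.runtime_properties['ip'])
```

Because the instance id is unique per mongod instance, ten relationship executions produce ten distinct keys instead of overwriting one another.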

Now that our coordinator node has these runtime properties set, I need to iterate through them to create a list for publication. This is simple using the Cloudify context supplied to the “start” lifecycle method of the coordinator node:
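A sketch of that harvesting step, again as a plain function. The `collect_mongod_hosts` helper and the `mongod_hosts` output key are illustrative names:

```python
def collect_mongod_hosts(runtime_properties, prefix='mongod_host_'):
    """Gather every IP stored under a prefixed key into a single list,
    in a stable (sorted-by-key) order."""
    return [value for key, value in sorted(runtime_properties.items())
            if key.startswith(prefix)]

# In the coordinator's "start" operation, the list could be published so
# blueprint outputs can reference it, roughly:
#   from cloudify import ctx
#   ctx.instance.runtime_properties['mongod_hosts'] = \
#       collect_mongod_hosts(ctx.instance.runtime_properties)
```

A blueprint output can then expose the `mongod_hosts` runtime property via `get_attribute`, giving consumers the full list of database addresses.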


Coordinating the activities and startup sequence of multiple nodes is a fundamental part of application orchestration. Hopefully these examples will help with some of the specific issues that arise when creating complex blueprints.

