Deployment Composition In Cloudify

In Cloudify, a “deployment” defines an isolated namespace that contains a collection of nodes and relationships. These nodes and relationships are typically visualized as a complete “stack” of technologies that delivers a complete computing platform; a classic example is a load balancer, web servers, app servers, and database stack. In some cases, it is desirable to have these islands represent *not* a complete stack, but a portion of one (a tier, for example).
In this model, a database deployment (for example) can be instantiated independently of the other tiers, and the other tiers can come and go independently of the database. Cloudify has no built-in capability to express such a model, but the flexible plugin architecture makes it rather simple to do so.



Quick Walkthrough

The DeploymentProxy node lets you set up a startup dependency between deployments. A DeploymentProxy node is inserted in the dependent blueprint and is configured to refer to the outputs of the independent blueprint, or more precisely, the independent deployment. The source for the plugin is on GitHub and includes an example. The example demonstrates a NodeJS blueprint that depends on a MongoDB blueprint. The details of the dependency are somewhat contrived, but good enough for a demonstration.
The DeploymentProxy uses the blueprint “outputs” feature as the integration point. So in this example, the first step is to establish meaningful outputs in the MongoDB blueprint.
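For example, the MongoDB blueprint might expose the database host and port as outputs. The sketch below is illustrative only; the node names (mongod_vm, mongod) and output names are assumptions, not necessarily what the example repository uses.

    # MongoDB blueprint -- hypothetical outputs section
    outputs:
      mongo_ip:
        description: Host address of the MongoDB server
        value: { get_attribute: [ mongod_vm, ip ] }
      mongo_port:
        description: Port that MongoDB listens on
        value: { get_property: [ mongod, port ] }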

Once the outputs are established, all work moves to the dependent blueprint (NodeJS), which contains the DeploymentProxy node. To begin with, the NodeJS blueprint includes the plugin definition and the TOSCA node definition for DeploymentProxy.
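In outline, those declarations look something like the sketch below. This is a hedged approximation: the plugin source, executor, and module path (deployment_proxy.tasks.wait) are placeholders, and the exact syntax depends on the DSL version; the real definitions are in the example on GitHub.

    # NodeJS blueprint -- sketch of the plugin and node type declarations
    plugins:
      deployment_proxy:
        executor: central_deployment_agent
        source: <URL or name of the deployment-proxy plugin>   # placeholder

    node_types:
      cloudify.nodes.DeploymentProxy:
        derived_from: cloudify.nodes.Root
        properties:
          deployment_id:
            description: the deployment to depend on
          wait_for:
            description: either "exists" or "expr"
            default: exists
          test:
            description: name of an output, or a Python boolean expression
          timeout:
            default: 30          # seconds
        interfaces:
          cloudify.interfaces.lifecycle:
            start: deployment_proxy.tasks.wait   # module path is an assumption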

Next, the DeploymentProxy node itself is added. It represents the independent blueprint (MongoDB) inside the NodeJS blueprint. Its only function is, during the built-in install workflow, to wait (if necessary) for the referenced deployment and to provide information about it.
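A sketch of what that node template might look like; the deployment id, output name, and timeout are illustrative values rather than the ones used in the example repository.

    node_templates:
      mongo_proxy:
        type: cloudify.nodes.DeploymentProxy
        properties:
          deployment_id: mongodb                  # id of the independent MongoDB deployment
          wait_for: expr
          test: "outputs['mongo_port'] > 0"       # Python expression over the target's outputs
          timeout: 60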

This particular node demonstrates a Python boolean expression being used to determine when the proxy returns successfully during the install workflow. In other words, the NodeJS install will wait for that condition to become true, or time out. The expression is evaluated against the “outputs” dict of the target deployment. The other kind of condition is “exists”, which returns successfully if the named output exists.
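For comparison, an “exists”-style proxy simply names the output to wait for (again, the names are illustrative):

      mongo_proxy:
        type: cloudify.nodes.DeploymentProxy
        properties:
          deployment_id: mongodb
          wait_for: exists        # succeed as soon as the named output appears
          test: mongo_port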
The last step is to connect, via a relationship, the NodeCellar application to the MongoDB database represented by the proxy. Beyond simply waiting for MongoDB to be available, the example also demonstrates accessing the outputs in order to connect to the database. The DeploymentProxy node exposes the outputs of its target deployment in its runtime properties.
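In the NodeJS blueprint, the application node therefore targets the proxy rather than a local MongoDB node. The node and type names below loosely follow the standard NodeCellar blueprint and are illustrative:

      nodecellar_app:
        type: nodecellar.nodes.NodecellarApplicationModule   # illustrative type name
        relationships:
          - type: node_connected_to_mongo
            target: mongo_proxy   # the proxy stands in for the remote MongoDB deployment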

In the “node_connected_to_mongo” relationship, slightly modified from the original version in the standard NodeCellar blueprint, the postconfigure lifecycle method gets the MongoDB host and port. In the original version, it gets the values from the MongoDB nodes that are in the current blueprint. In this version, since MongoDB has a completely separate blueprint, it gets the host and port from the proxy node. This is shown in the NodeJS blueprint in the relationship implementation at /scripts/mongo/set-mongo-url.sh.
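As a rough sketch, the relationship declaration might map the postconfigure operation to that script, along the following lines (shown here under target_interfaces; the actual blueprint may wire the operation differently):

    relationships:
      node_connected_to_mongo:
        derived_from: cloudify.relationships.connected_to
        target_interfaces:
          cloudify.interfaces.relationship_lifecycle:
            postconfigure: scripts/mongo/set-mongo-url.sh   # reads MongoDB host/port from the proxy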

A Little Deeper

The plugin has but a single implementation function, “wait”, that waits for conditions on the outputs of the target deployment. When the “start” method is invoked, “wait” receives the following parameters:

  • deployment_id : the deployment to depend on.
  • wait_for: either “exists” or “expr”.
    • If “exists”, the proxy waits for an output whose name matches the value of the “test” property.
    • If “expr”, it interprets the “test” property as a Python boolean expression in which the name “outputs” is bound to the outputs dict (e.g. outputs['port'] > 0).
  • test : either the name of an output, or a boolean expression (see wait_for)
  • timeout : number of seconds to wait. When the timeout expires, a “RecoverableError” is raised. Default: 30.

The “wait” function calls the Cloudify REST API to get the outputs of the configured deployment id. It either checks whether a specific output exists, or evaluates a supplied Python boolean expression to check more complicated conditions. If an expression is configured, a dict named “outputs”, holding the target deployment’s outputs, is in scope when the expression is evaluated. The function tries to satisfy the condition for up to “timeout” seconds; if the condition is still not met, a “RecoverableError” is raised. This causes the Cloudify install workflow to enter its own retry loop, which continues until the install workflow finally gives up or the condition evaluates as true. When the DeploymentProxy completes, it copies the outputs of the target deployment into its own runtime properties. This gives other nodes in the containing blueprint easy access to the outputs, where, for example, a server IP address and port might be located.
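Assuming each copied output ends up as a top-level runtime property on the proxy node (the plugin may store them under a single key instead), a downstream blueprint could relay those values with get_attribute, for example:

    outputs:
      mongo_endpoint:
        description: Connection details relayed from the MongoDB deployment
        value:
          host: { get_attribute: [ mongo_proxy, mongo_ip ] }
          port: { get_attribute: [ mongo_proxy, mongo_port ] }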

Conclusion and Future Directions

The cloudify.nodes.DeploymentProxy node provides a basic dependency mechanism between deployments. It masquerades as a local node while accessing another deployment, waiting for a ready state described by that deployment’s outputs. This is just the tip of the iceberg for this concept, as the communication is limited to outputs and is unidirectional. There is no reason, in principle, that this plugin couldn’t be extended to actually trigger the install of the target deployment, access and expose runtime properties, and update outputs and other properties continuously. The source is available on GitHub, along with the usage example from the walkthrough in this post.
