As part of my Cloudify discovery, I’ve encountered another topic that I think needs a little backtracking, and a bit more of an introduction, before we get to the deeper dive in the documentation. So in this series of posts I’m going to dive into what a workflow is, then how to install workflows, and ultimately how to scale them.
So, I’m going to start from the very beginning, and you’re welcome to start your reading wherever you are in the story.
In Cloudify, a blueprint models your application as a set of nodes. A node can be a virtual host, a security group, an application – you get the idea.
Two nodes may be connected through a relationship, such as a virtual host that is contained_in a security group, or an IP address that is connected_to an interface.
While this description makes the architecture look static, orchestration implies dynamism – and this is where workflows come into the mix. Cloudify’s workflows allow otherwise static nodes and relationships to have programmable behaviors.
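As a quick sketch of what those static nodes and relationships look like in a blueprint – the node names here are hypothetical, though the types are standard Cloudify DSL types:

```yaml
# Hypothetical blueprint fragment. Node names are made up for illustration;
# the types and relationship are standard Cloudify DSL constructs.
node_templates:

  app_host:
    type: cloudify.nodes.Compute          # a virtual host

  app:
    type: cloudify.nodes.ApplicationModule
    relationships:
      # the application node lives inside the host node
      - type: cloudify.relationships.contained_in
        target: app_host
```

On its own this only describes structure; it is the workflows that decide when each of these nodes gets created, configured, and started.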
In an earlier blog post, I referred to Cloudify’s built-in workflows, which provide the framework for making infrastructure flexible, mutable – “orchestrate-able”.
Take, for example, the Cloudify Manager Bootstrap process.
In this scenario, we have many nodes – security groups, key pairs, virtual hosts, etc.
But the central component – the raison d’être (fancy wording for basic premise) of the entire thing – is the Cloudify Manager, which is the hub of all our orchestration efforts.
This manager application is a node, and it is related – either explicitly or through inheritance – to various other nodes through contained_in and connected_to relationships.
Workflows do the connecting and determine when each operation is performed.
When we bootstrap a manager, the install workflow makes sure the manager is installed on the right server and that it happens at the right time.
This workflow contains a specific order of operations. Here are a few:
- Create operations for each of the nodes in the blueprint
- Pre-configure operations for each of the relationships in the blueprint
- Configure operations for each of the nodes
- Post-configure operations for each of the relationships
- Start operations for each of the nodes
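The install workflow drives each of these steps through a node’s lifecycle interface. A sketch of how a node might map them – the node name and script paths are assumptions for illustration, using the script-plugin shorthand:

```yaml
# Hypothetical node with its lifecycle operations mapped to scripts.
# The install workflow invokes these in order, interleaved with the
# relationship operations (pre-configure, post-configure) listed above.
node_templates:
  my_app:
    type: cloudify.nodes.ApplicationModule
    interfaces:
      cloudify.interfaces.lifecycle:
        create: scripts/create.sh        # runs first
        configure: scripts/configure.sh  # runs after relationship pre-configure
        start: scripts/start.sh          # runs last
```

Which operation you map a given task to is exactly the design decision the rest of this post walks through.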
When you are writing a blueprint, you need to decide when you want certain operations to execute. Truthfully, this can be an intellectual exercise if you want it to be. You might consider combining certain tasks. You might have dependency loops. And so on.
Here are some basic rules:
First, make a list of the broad steps that you would need to set up your infrastructure.
For example, when we bootstrap a manager, we need to:
- Create the manager security groups and key pair,
- Create the virtual host,
- Install the manager software on the virtual host,
- And, finally, attach the IP address.
For now, let’s assume we have the security groups and key pairs.
When we bootstrap the manager, we need to make sure that two things happen in a particular order:
- The virtual host must be up and running.
- The manager must be installed on the virtual host.
The contained_in relationship captures the correct ordering in this scenario.
Notice the manager definition in the manager bootstrap blueprint, which I’ve abridged (for conciseness):
(See the Simple Manager Blueprint in the Cloudify Manager Blueprints repository for the full version.)
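Roughly, the abridged definition has this shape – the type name here is an assumption and the properties are omitted, so treat this as a sketch rather than a copy of the blueprint:

```yaml
# Sketch of the manager node definition; type name assumed, properties omitted.
manager:
  type: cloudify.nodes.CloudifyManager
  relationships:
    # the manager is installed inside the virtual host
    - type: cloudify.relationships.contained_in
      target: manager_host
```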
The contained_in relationship will make sure the manager_host node is up and running before anything else happens.
The contained_in relationship and the start operation illustrate the power of workflows.
Recall the order of operations in the install workflow that we discussed earlier: the create operation runs on a node before the pre-configure operation runs on its relationships.
If the bootstrap operation were mapped to the create or configure operation, the install workflow would perform the operations related to the contained_in relationship after the manager bootstrap had already taken place.
In other words, the relationship wouldn’t be ready, the manager wouldn’t know where to install, and our workflow would end in an error. So that is why the bootstrap operation is mapped to the start operation.
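In blueprint terms, that mapping might look something like this – the script path is hypothetical, and the type name is assumed as before:

```yaml
# Sketch: bootstrap mapped to the start operation, so it runs only after
# the contained_in relationship has guaranteed manager_host is up.
manager:
  type: cloudify.nodes.CloudifyManager   # type name assumed
  interfaces:
    cloudify.interfaces.lifecycle:
      start: scripts/bootstrap.sh        # hypothetical script path
  relationships:
    - type: cloudify.relationships.contained_in
      target: manager_host
```

Mapping the same script to create instead would only move it earlier in the workflow – which, as described above, is exactly what we want to avoid.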
This is a simple illustration of using workflows.
So before this becomes overload, I’ll stop here and let you process how the operations work, and the order of dependencies, through this example.
In my next post, I’ll dive into a more complex scenario and demonstrate how to install Docker on multiple nodes – and in the final post of the series I’ll dive into how to scale both the simple and more complex workflow.