Dynamic Allocation Using Static Constraints in Cloudify Blueprints
Sometimes, when creating complex applications, we might need dynamic allocation of static resources. To explain what I mean by this, it’s best to look at an example.
Single instance example
Let’s say we have a proprietary IP allocation mechanism for our servers in OpenStack: we make a REST call and receive a new IP address in return. The catch is that an OpenStack port’s address is assigned at creation time, like this:
```yaml
node_templates:
  os_port:
    type: cloudify.openstack.nodes.Port
    properties:
      openstack_config: *openstack_configuration
    interfaces:
      cloudify.interfaces.lifecycle:
        create:
          implementation: openstack.neutron_plugin.port.create
          inputs:
            args:
              fixed_ips:
```
Usually we’d use an input, like this: `fixed_ips: { get_input: fixed_ip_input }`, but in our case the IP is allocated dynamically by a third-party mechanism. What can we do? We need to use some other intrinsic function, and the most logical candidate is `get_attribute`. The idea, then, is to have some other node hold the IP address in its `runtime_properties`.
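To build intuition for the difference, here is a toy resolver in plain Python (illustrative only, not Cloudify’s actual implementation): inputs are fixed when the deployment is created, while `get_attribute` reads `runtime_properties` at the moment the operation executes.

```python
# Toy illustration (plain Python, not Cloudify internals):
# get_input resolves values fixed at deployment time, while get_attribute
# reads a node instance's runtime_properties when the operation runs.
def get_attribute(instances, node_name, attr):
    # Resolve the attribute from the named instance's runtime properties.
    return instances[node_name]['runtime_properties'][attr]

instances = {'ip_address': {'runtime_properties': {}}}

# Before ip_address's create operation runs, the attribute does not exist:
assert 'fixed_ip' not in instances['ip_address']['runtime_properties']

# After the create operation has populated runtime_properties:
instances['ip_address']['runtime_properties']['fixed_ip'] = [
    {'ip_address': '10.0.0.20'}
]
assert get_attribute(instances, 'ip_address', 'fixed_ip') == [
    {'ip_address': '10.0.0.20'}
]
```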
Ok, let’s do this. We’ll create a new node template, and we’ll make sure the port depends on it.
```yaml
node_templates:
  ip_address:
    type: cloudify.nodes.Root
    interfaces:
      cloudify.interfaces.lifecycle:
        create: get_ip.py
  os_port:
    ...
            args:
              fixed_ips: { get_attribute: [ ip_address, fixed_ip ] }
    relationships:
      ...
      - type: cloudify.relationships.depends_on
        target: ip_address
```
Ok, there are a few things to remark upon here:
First of all, we’re making sure the port depends on the new ip_address node template (via a depends_on relationship), so that the create operation of ip_address is run in advance of the port’s creation. This should make sure that ip_address already has a fixed_ip runtime property.
Second, we’re setting the fixed IP by using get_attribute.
And finally, notice the create operation on ip_address. It’s calling a get_ip.py script (this can, of course, also be a plugin operation). So let’s go ahead and imagine what such a script might look like:
```python
# get_ip.py
from cloudify import ctx


def set_ip_in_runtime_props():
    ip_address = ...  # REST call to the proprietary allocation mechanism goes here
    # The neutron plugin expects a list of dicts with an `ip_address` key
    fixed_ip = [{'ip_address': ip_address}]
    ctx.instance.runtime_properties['fixed_ip'] = fixed_ip
    ctx.instance.update()


if __name__ == '__main__':
    set_ip_in_runtime_props()
```
And that’s it: the script sets a runtime property, and by the time the port is created, the property is there for it to consume.
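The REST call itself is left out above, since the allocation mechanism is proprietary. As a sketch only, with a hypothetical endpoint URL and an assumed JSON response shape, fetching and parsing the allocator’s reply might look like this:

```python
import json
import urllib.request

# Hypothetical endpoint; the real allocation service is proprietary,
# so the URL and response format here are assumptions for illustration.
ALLOCATOR_URL = 'http://ip-allocator.internal/allocate'


def parse_allocation_response(body):
    # Assumed reply shape: {"ip_address": "10.0.0.5"}
    return json.loads(body)['ip_address']


def allocate_ip(url=ALLOCATOR_URL):
    # Blocking REST call to the allocation service.
    with urllib.request.urlopen(url) as resp:
        return parse_allocation_response(resp.read())
```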
Multi instance + pool example
Now, let’s make things a bit harder for ourselves. Let’s now say that we have a pool of IP addresses we want to use, and to make things more interesting, let’s say we want to have a scaling group. So, instead of a single port, we’ll have three ports (obviously, with servers attached to them). What do we do now?
So, one part of this is easy – if we scale ip_address along with os_port, the get_attribute semantics are well defined (see the Cloudify documentation on get_attribute between members of shared scaling groups). This means that Cloudify automatically knows to access the “correct” node instance (i.e. the node instance that was scaled along with the port). So that part is taken care of; we don’t need to change it.
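The scoping behavior can be pictured with a toy model in plain Python (not Cloudify internals): each scaled group instance carries its own ip_address instance, and attribute resolution stays inside that group instance rather than crossing the whole deployment:

```python
# Toy model: three scaled group instances, each holding its own
# ip_address node instance with its own runtime attributes.
group_instances = [
    {'ip_address': {'fixed_ip': [{'ip_address': '10.0.0.20'}]}},
    {'ip_address': {'fixed_ip': [{'ip_address': '10.0.0.21'}]}},
    {'ip_address': {'fixed_ip': [{'ip_address': '10.0.0.22'}]}},
]


def get_attribute_in_group(group, node_name, attr):
    # Resolution never crosses group boundaries, so each os_port
    # sees only the ip_address instance it was scaled with.
    return group[node_name][attr]


# The second os_port instance resolves the second ip_address instance:
assert get_attribute_in_group(group_instances[1], 'ip_address', 'fixed_ip') == [
    {'ip_address': '10.0.0.21'}
]
```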
The second part is harder. Instead of a comfortable REST call, we are given a static pool of IP addresses. For simplicity’s sake let’s say that this pool is given to us as an input:
```yaml
ip_pool:
  - 10.0.0.20
  - 10.0.0.21
  - 10.0.0.22
  ...
```
Ok, now we need to somehow distribute these IPs – one for each ip_address node instance. Well, there are several ways to do that in Cloudify, but I’ll provide one. We will create a new node template, let’s call it ip_pool. It will receive the whole pool of addresses from the input, and will then dispense them to the various ip_address instances via a relationship. Let’s see how this looks:
```yaml
node_templates:
  server:
    ...  # Some generic OS server, connected to os_port
  os_port:
    ...  # As before
  ip_pool:
    type: cloudify.nodes.Root
    interfaces:
      cloudify.interfaces.lifecycle:
        create:
          implementation: setup_ip_pool.py
          inputs:
            ip_pool:
              default: { get_input: ip_pool }
  ip_address:
    type: cloudify.nodes.Root
    relationships:
      - type: ip_address_connected_to_ip_pool
        target: ip_pool

relationships:
  ip_address_connected_to_ip_pool:
    derived_from: cloudify.relationships.depends_on
    target_interfaces:
      cloudify.interfaces.relationship_lifecycle:
        preconfigure: get_ip_from_ip_pool.py

groups:
  scaling_group:
    members: [ ip_address, os_port, ..., server ]

policies:
  scale:
    type: cloudify.policies.scaling
    properties:
      default_instances: 3
    targets: [scaling_group]
```
Ok, let’s unpack this blueprint:
First of all, we see the ip_pool node template. It’s a very simple template with a create operation that runs a script that accepts an ip_pool input. This script is extremely simple, so let’s just look at it:
```python
# setup_ip_pool.py
from cloudify import ctx
from cloudify.state import ctx_parameters as inputs


def setup_ip_pool():
    ctx.instance.runtime_properties['ip_pool'] = inputs['ip_pool']
    ctx.instance.update()


if __name__ == '__main__':
    setup_ip_pool()
```
Pretty basic – we’re just putting the inputs into runtime properties.
Next, in the blueprint above we see that ip_address no longer has a create operation; instead it has a relationship with ip_pool, wherein during preconfigure we call the get_ip_from_ip_pool.py script. Let’s look at that script now (remember that ip_address is the source instance here, and ip_pool the target):
```python
# get_ip_from_ip_pool.py
from cloudify import ctx


def get_ip_from_ip_pool():
    ip_pool = ctx.target.instance.runtime_properties['ip_pool']
    ip_address = ip_pool.pop(0)
    ctx.target.instance.runtime_properties['ip_pool'] = ip_pool
    ctx.target.instance.update()

    # The neutron plugin expects a list of dicts with an `ip_address` key
    fixed_ip = [{'ip_address': ip_address}]
    ctx.source.instance.runtime_properties['fixed_ip'] = fixed_ip
    ctx.source.instance.update()


if __name__ == '__main__':
    get_ip_from_ip_pool()
```
Now, there are two parts to this script. In the first part, we get the current pool from ip_pool’s runtime properties, take (pop) one IP out, and immediately update (i.e. write the changes to the DB). This immediate update makes sure that the same address won’t be assigned to two different instances. If two instances try to update at the same time, the second one will get an error, the runtime properties will be written only once, the whole preconfigure operation will be retried, and on the next attempt a new, untaken IP will be assigned.
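To see how the immediate update protects the pool, here is a plain-Python sketch of that conflict-and-retry loop. The version counter stands in for Cloudify’s storage-level conflict detection; the mechanism’s details are an assumption for illustration only:

```python
class ConflictError(Exception):
    pass


# Stand-in for the stored runtime properties of ip_pool; the version
# counter plays the role of the storage conflict check.
store = {'version': 0, 'ip_pool': ['10.0.0.20', '10.0.0.21', '10.0.0.22']}


def read():
    return store['version'], list(store['ip_pool'])


def update(read_version, new_pool):
    # Reject the write if someone else updated since our read.
    if store['version'] != read_version:
        raise ConflictError
    store['version'] += 1
    store['ip_pool'] = new_pool


def take_ip():
    # Mirrors get_ip_from_ip_pool.py: pop one address, write back
    # immediately, and retry from a fresh read on conflict.
    while True:
        version, pool = read()
        ip = pool.pop(0)
        try:
            update(version, pool)
            return ip
        except ConflictError:
            continue  # another instance won the race; retry


assert take_ip() == '10.0.0.20'
assert take_ip() == '10.0.0.21'
```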
The second part of the script is familiar to us – it’s exactly the same as in the previous get_ip.py, except that we get the IP address by different means.
Finally, we have a scaling group and a standard scaling policy.
And that’s it! We’ve just dynamically (i.e. at runtime) assigned static resources: resources that would normally need to be assigned (or at least modeled) at blueprint creation time.