If you are familiar with configuration management (aka CM) and automation, you probably know a thing or two about Puppet, and the amazing and rich collection of modules it offers. Puppet Forge contains a wealth of third party modules that enable us to do some pretty nifty stuff with almost no effort. Puppet helps deal with the messy parts of CM, like installing binaries and running installation scripts that are tedious to do manually.
Tools such as Puppet were originally created for IT operations people, who are for the most part infrastructure-centric, and are best suited for setting up and maintaining hosts in a physical data center. Dealing with applications, and certainly managing them in an elastic, virtualized, or even cloud environment, brings a new set of challenges despite the agility and other benefits such environments provide.
Now imagine coupling this goodness with an intelligent orchestration framework for an entire deployment.
In this blog post I’d like to demonstrate how a cloud application orchestrator can complement existing automation processes powered by configuration management tools; in this case, Puppet.
I will use the nodecellar application and the popular WordPress content management framework as examples.
This will hopefully provide a good introduction to Cloudify blueprints.
We’ve already seen how Cloudify 3 allows us to easily orchestrate the “nodecellar” application. You can read about Cloudify blueprints here.
In the “nodecellar” example, Cloudify deploys a complex application using workflows that map deployment lifecycle events to bash scripts using Cloudify’s bash runner plugin.
Cloudify’s Puppet integration now makes this pretty easy.
The synergy between Cloudify and Puppet not only lets you enjoy the benefits of your existing Puppet environment, it also amplifies its usefulness with unique advantages that address the following common challenges of configuration management tools:
- Agent Installation: Cloudify provisions your service VMs, installs a Puppet agent (if you like), and wires them up with the Puppet master. Or, if you choose to run standalone, it can install the agent with the appropriate manifests needed for that service as well.
- Order of Dependencies: Define the dependencies between application stacks, services, and infrastructure resources; they will then be launched in that order.
- Remote Execution and Updates: Beyond the basic install/uninstall, Cloudify enables customized application workflows that let you execute tools such as remote shell scripts on a group of instances belonging to a particular service, or on a specific instance in a group. This is useful for maintenance operations, such as snapshots in the case of a database, or code pushes in a continuous deployment model. In addition, you can run puppet apply whenever you feel it’s right for your service.
- Post Deployment: Once your application is up, Cloudify can glue in your monitoring tool of choice, or you can use the built-in one. A robust policy engine enables auto-healing and even auto-scaling according to your service’s required SLA.
I’m now going to take a deep dive into my experience with a WordPress example that I feel is a very good illustration of how Puppet and Cloudify work in sync.
Let’s say we want to deploy the popular WordPress application stack on two VMs.
Step 1: Setting Up the Puppet Environment
I’m using a Puppet master server (3.5.1) with the following basic modules installed:
|– hunner-wordpress (v0.6.0)
|– puppetlabs-apache (v1.0.1) – with php mods enabled
|– puppetlabs-mysql (v2.1.0)
Your site.pp file should resemble something like this:
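The original manifest isn’t reproduced here, so below is a minimal sketch of what such a site.pp could look like, given the module versions listed above. The node name patterns, passwords, and the hard-coded DB host are placeholders; the point of the rest of this post is that Cloudify will supply the DB connection details dynamically.

```puppet
# Illustrative site.pp sketch (not the original from the post).
node /^mysql.*/ {
  class { '::mysql::server':
    root_password => 'changeme',        # placeholder
  }
}

node /^apache.*/ {
  class { 'apache':
    mpm_module => 'prefork',
  }
  include ::apache::mod::php

  # Parameter names follow hunner-wordpress; the db_host value is a
  # placeholder for the fact Cloudify will inject.
  class { 'wordpress':
    db_host     => 'MYSQL_HOST_PLACEHOLDER',
    db_user     => 'wp',
    db_password => 'changeme',
  }
}
```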
As we can see, we have an Apache PHP application that will likely require a database connection string (IP, port, user and password).
This is where Cloudify facilitates the “gluing” of all the pieces together, by allowing us to inject dynamic/static custom facts to the dependent node (Apache server).
Cloudify supports both standalone agents and PuppetMaster environments.
Step 2: Tweaking the Original WordPress Module
Some minor adaptations to the wordpress init class of the WordPress module will allow us to embed these facts during Puppet agent invocation.
Below is a code snippet (with defaults truncated):
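Since the original snippet isn’t shown here, this is a sketch of the kind of change involved: the class parameters default to custom facts (the fact names below are hypothetical) that Cloudify injects at agent invocation.

```puppet
class wordpress (
  # Defaults now come from custom facts injected by Cloudify at agent
  # invocation (fact names are illustrative):
  $db_host     = $::cloudify_db_host,
  $db_user     = $::cloudify_db_user,
  $db_password = $::cloudify_db_password,
  # ... remaining parameters and defaults truncated ...
) {
  # ... original class body unchanged ...
}
```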
And some tweaking to the templates/wp-config.php.erb:
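Again as a sketch, the template tweak amounts to rendering the (now fact-driven) class parameters into WordPress’s configuration constants:

```erb
<?php
// Illustrative fragment of templates/wp-config.php.erb -- the DB
// connection settings come from the tweaked class parameters above.
define('DB_HOST',     '<%= @db_host %>');
define('DB_USER',     '<%= @db_user %>');
define('DB_PASSWORD', '<%= @db_password %>');
```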
Let’s add some tags for finer control of manifest execution:
The MySQL node will not require the application part to run on it, so I’ve excluded it using a Puppet “tag” (read more about Puppet tags).
Cloudify, of course, supports this and will provide the appropriate tags during agent invocation.
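For illustration, tagging in Puppet can be as simple as the following (tag is a standard Puppet metaparameter, and every class is also implicitly tagged with its own name); a run restricted to other tags, e.g. puppet agent --tags mysql, then skips the application class entirely:

```puppet
# Illustrative: explicitly tag the application class so agent runs
# can include or exclude it via --tags.
class { 'wordpress':
  tag => 'wordpress_app',
}
```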
Step 3: Creating the Blueprint
As with the “nodecellar” blueprint, let’s first create a folder named “wp_puppet” and create a blueprint.yaml file within it; this file will serve as the blueprint.
Now let’s declare the name of this blueprint.
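In the Cloudify 3.0 DSL this is a short declaration at the top of blueprint.yaml (the name matches the folder created above; the exact schema may vary by Cloudify version):

```yaml
blueprint:
  name: wp_puppet
  nodes: []   # the topology nodes will be filled in below
```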
Now we can start creating the topology.
Step 4: Creating VM Nodes
Since in this case I’m using the OpenStack provider to create the nodes, let’s import the “OpenStack types” plugin:
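An import sketch follows; the exact alias or URL for the OpenStack types definition depends on your Cloudify installation, so treat this line as a placeholder:

```yaml
imports:
  - cloudify.openstack   # placeholder; may be a URL to the plugin's types YAML
```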
Since the VMs are the same, I declared a generic template for a VM host:
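A rough sketch of such a shared host type, assuming a cloudify.openstack.server base type; the image/flavor values are placeholders and the property schema differs between DSL versions:

```yaml
types:
  wordpress_host:
    derived_from: cloudify.openstack.server
    properties:
      - install_agent: true
      - server:
          image: <your-image-id>     # placeholder
          flavor: <your-flavor-id>   # placeholder
```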
Create the MySQL and Apache VMs:
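With a shared host type like the hypothetical wordpress_host above, the two VM nodes are then just:

```yaml
  nodes:
    - name: mysql_vm
      type: wordpress_host
    - name: apache_vm
      type: wordpress_host
```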
Step 5: Declaring Apache and MySQL Servers
Since we are using the Puppet plugin to create those servers, first we have to import it:
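As with the OpenStack import, the sketch below uses a placeholder alias; in practice this may be a URL to the cloudify-puppet-plugin’s types YAML:

```yaml
imports:
  - cloudify.puppet   # placeholder alias for the Puppet plugin types
```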
The plugin defines server types as follows:
middleware_server, app_server, db_server, web_server, message_bus_server, app_module.
They are virtually identical, but serve to improve readability for the user and GUI visualization.
A Puppet server type is derived_from the cloudify.types.server type, but includes some Puppet-specific properties and lifecycle events.
For documentation see: Puppet Types
So now we’ll go ahead and declare the server types:
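For this topology we only need two of the types listed above. Assuming they live under a cloudify.types.puppet namespace (check the plugin documentation for your version), the declarations look roughly like:

```yaml
    - name: mysql_server
      type: cloudify.types.puppet.db_server
    - name: apache_server
      type: cloudify.types.puppet.web_server
```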
Step 6: Instantiating the Apache and MySQL Nodes
Here we provide the Puppet configuration and tags and define the relationships between the nodes. Cloudify’s agent will use those relationships in order to decide the appropriate facts to inject.
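A sketch of what these node definitions could look like; the puppet_config keys and type names here are illustrative, so consult the Puppet plugin docs for the exact property names in your version:

```yaml
    - name: mysql_server
      type: cloudify.types.puppet.db_server
      properties:
        puppet_config:
          server: puppet.example.com   # your Puppet master (placeholder)
          tags: [mysql]                # skip the application class
      relationships:
        - type: cloudify.relationships.contained_in
          target: mysql_vm

    - name: apache_server
      type: cloudify.types.puppet.web_server
      properties:
        puppet_config:
          server: puppet.example.com
          tags: [wordpress_app]
      relationships:
        - type: cloudify.relationships.contained_in
          target: apache_vm
        # This relationship is what lets the agent resolve the DB
        # connection facts to inject on the Apache side:
        - type: cloudify.relationships.connected_to
          target: mysql_server
```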
Step 7: Upload the Blueprint and Create the Deployment (via CLI or GUI)
Then execute your deployment (via CLI or GUI).
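Via the CLI, the flow looks roughly like the following (flags vary slightly between cfy versions, so check cfy --help):

```shell
cfy blueprints upload -b wp_puppet -p wp_puppet/blueprint.yaml
cfy deployments create -b wp_puppet -d wp_puppet
cfy executions start -w install -d wp_puppet
```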