The ninth semi-annual OpenStack Summit was held last week (May 12th-16th, 2014) at the Georgia World Congress Center in Atlanta. The corresponding release is called Icehouse, and design sessions for the next release (Juno) were also conducted.
Attendance was quite impressive, with over 6,000 registered attendees and about 100 booths of companies presenting what they can do with OpenStack.
In this post I will give a short technical overview of a few interesting projects I think are worth mentioning.
Heat is an orchestration engine designed to easily provision OpenStack infrastructure. It uses its own DSL called HOT (Heat Orchestration Template), which was inspired by Amazon CloudFormation. Here is a very basic example that simply provisions a Nova server with a custom name, image, flavor and key.
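A minimal sketch of such a template (the resource name and the image, flavor and key values are placeholders):

```yaml
heat_template_version: 2013-05-23

description: Provision a single Nova server

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      name: my-server          # custom server name
      image: ubuntu-12.04      # image name or ID (placeholder)
      flavor: m1.small         # flavor (placeholder)
      key_name: my-keypair     # existing Nova keypair (placeholder)
```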
This is very easy to use: it's completely declarative and contains exactly what is needed, nothing more.
The Icehouse version of Heat was dedicated to software configuration and management. Let's take a look at how it works.
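Here is a rough sketch of what such a software-configuration component looks like; this follows the proposal syntax discussed at the summit, and the property names and server reference are illustrative, not the final API:

```yaml
components:
  wordpress:
    type: OS::Heat::software_config::chef_solo
    properties:
      cookbook: wordpress            # cookbook to apply (illustrative)
    relationships:
      # wires this cookbook to a specific server resource (illustrative)
      - hosted_on: my_server
```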
This template defines a ‘wordpress’ component, of type OS::Heat::software_config::chef_solo, which states that this component is configured using a built-in chef_solo implementation. Notice the ‘relationship’ part in the component; this is what wires the cookbook to a specific server. To learn more, check out OS::Heat::SoftwareConfig.
For me, the most exciting thing about this is the fact that it looks like HOT is getting closer and closer to the TOSCA standard, and there is actually collaborative work between the people at OASIS and Heat. This is very good news for the OpenStack community, and in fact for the entire cloud orchestration ecosystem as a whole.
Also see this blog post by Nati Shalom on Icehouse + Heat.
I’d also recommend checking out this post – Juno Design Summit: Heat – Software Orchestration.
Although Neutron is much more flexible and feature-rich than its predecessor, Nova-network, there’s recently been some controversy around its viability as far as large scale deployments go. (See: http://www.theregister.co.uk/2014/05/13/openstack_neutron_explainer). This is mainly explained by the fact that Neutron was initially created by the folks from Nicira, and was therefore optimized for Nicira rather than the default Open vSwitch implementation.
This is the reason that Nova-network is still not deprecated, and can be used as an alternative to Neutron in Icehouse as well. There were numerous sessions during the summit dedicated to Neutron deployment, scaling and operational aspects. Overall it seems that despite the current hardships, in the long run Neutron will become the primary networking infrastructure for OpenStack, and that the recent push from large vendors such as HP and Red Hat will help it get to where it needs to be. If you want an overview of how Neutron compares to the older nova-network module, read this post from our very own Noa Kuperberg (Navigating through Openstack Networking – The Definitive Breakdown).
Mistral is kind of the new kid on the block, and its purpose is actually to provide something we haven’t seen before in the OpenStack community: Workflow-as-a-Service.
A workflow is usually a set of imperative statements consisting of actions and conditionals (e.g., if ‘x’, perform ‘y’; otherwise perform ‘z’).
So for example, when the Heat engine processes a template, it uses a workflow that analyzes each OS::Heat::Server element (by condition) and calls the Nova API (an action) to provision a VM.
A joint effort by Mirantis and StackStorm has produced a very simple and declarative DSL for defining workflows, that can be used to basically orchestrate anything.
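A rough sketch of such a workflow, using the two tasks discussed below; the action names and parameters are illustrative, and the exact DSL keywords have been evolving between Mistral milestones:

```yaml
Workflow:
  tasks:
    put_service_on_hold:
      action: Nova:suspend-vm          # illustrative action name
      parameters:
        vm_id: $.vm_id
    backup_user_data:
      # executed only after 'put_service_on_hold' completes
      requires: [put_service_on_hold]
      action: Backup:create-backup     # illustrative action name
      parameters:
        vm_id: $.vm_id
```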
So, a workflow in this case is a list of tasks, each executed as soon as it is possible to do so. We can see here that the ‘backup_user_data’ task requires the ‘put_service_on_hold’ task, which ensures the engine will execute the tasks in the correct order. In addition to dependency management, Mistral offers a lot of other features that can all be expressed in a simple declarative fashion. Some of these are:
- Retry mechanism
- Result publishing
- Error handling
See some more examples here.
As a developer on the Cloudify team, I can’t help but really feel like we’re on the right track to contributing significant tooling/code to the OpenStack ecosystem. What we’re doing with Cloudify is something that can be likened to a combination of Heat and Mistral: basically, an orchestration tool designed and architected for the cloud.
Where I find it fits into the OpenStack community, is providing users with the ability to customize both orchestration templates, and orchestration workflows, using their own DSL, which is, for the most part, completely aligned with TOSCA.
Cloudify actions (or ‘operations’) are performed by plugins using a distributed task execution engine. A nice set of built-in plugins already exists that supports:
- Chef Recipes.
- Puppet Manifests.
- Custom Python Code.
- Bash Scripts.
- Python scripts.
Writing a plugin is an extremely easy process, and many more plugins should be made available by the developers and the community soon. You can, of course, also write your own!
Let’s take a look at a sample blueprint.
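A sketch of such a blueprint follows; the node type and relationship names are written from memory of the Cloudify 3.0 OpenStack plugin, and the image, flavor and key values are placeholders:

```yaml
blueprint:
  name: openstack_sample
  nodes:
    - name: floatingip
      type: cloudify.openstack.floatingip
      properties:
        floatingip:
          floating_network_name: Ext-Net   # taken from the Ext-Net IP pool
    - name: server
      type: cloudify.openstack.server
      properties:
        server:
          name: my-server                  # custom server name
          image: ubuntu-image              # placeholder
          flavor: m1.small                 # placeholder
          key_name: my-keypair             # placeholder
          security_groups: [default]
      relationships:
        # creates the floating IP first, then attaches it to the server
        - type: cloudify.openstack.server_connected_to_floating_ip
          target: floatingip
```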
Again we see a YAML, TOSCA-based declarative DSL. This blueprint defines two OpenStack IaaS elements:
- Floating IP – Taken from the Ext-Net IP Pool.
- Server – Custom name, image, flavor, key and security groups.
The thing worth mentioning is the ‘relationships’ section defined in the server node. Remember, we already saw the exact same thing in the HOT template, so this should already make sense to you (having a standard is great, isn’t it?). This relationship acts on two planes: it makes sure the ‘floatingip’ node is created before the server node, and it also connects the newly created floating IP to the server.
Cloudify provides native support for everything OpenStack, which means you can easily provision any kind of IaaS on OpenStack without writing a single line of code. More importantly, you can use this mechanism to do software configuration very easily.
For a complete example see the Cloudify Nodecellar Example.
Ansible has been around for quite a while, but my impression from the conference is that it's really picking up momentum and the community is growing.
It is a simple way of automating cloud application deployments using an agentless (and therefore, more secure) architecture. Basically, it just uses SSH to connect to VMs and perform commands.
Ansible blueprints are called playbooks, and they are written in yet another domain-specific language.
Let's see an example:
This is a snippet taken from the Tomcat installation PlayBook.
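A reconstruction of what such a snippet looks like; task names, the template filename and the port variable are illustrative:

```yaml
- name: Start Tomcat
  service: name=tomcat state=started enabled=yes

- name: Insert iptables rules for Tomcat
  template: src=iptables-save.j2 dest=/etc/sysconfig/iptables
  # triggers the 'restart iptables' handler once this task succeeds
  notify: restart iptables

- name: Wait for Tomcat to start
  wait_for: port={{ http_port }}
```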
So what do we have here? First, we see a command to start Tomcat using the ‘service’ module. Next, a command to insert iptables rules using the ‘template’ module. A nice feature here is the ‘notify’ part: it means that once this command is executed successfully, the ‘restart iptables’ command (defined prior to this snippet) will be triggered. Finally, we wait for the Tomcat service to start by port knocking using the ‘wait_for’ module.
Modules (very similar to Cloudify’s plugins) are what Ansible uses to execute tasks. The great thing about Ansible is the number of modules out there; you can do so much fairly easily just by taking advantage of the large community.
For a complete list of Ansible modules, see Ansible Modules.
The Heat-translator project is an IBM initiative to provide a translator from various types of DSLs to the HOT language. Currently, the only development path is translating TOSCA templates into Heat ones. It consists of two parts that can be used separately:
- Read TOSCA into an in-memory graph, describing the orchestration blueprint.
- Serialize the graph into a file in the form of a Heat template.
This project is just taking its first baby steps, but it seems to be gaining traction within the OpenStack community. An important by-product of this project is the collaboration between the Heat and TOSCA communities, making it a lot simpler to reach a point where TOSCA and Heat are indistinguishable from one another.
Also see the Juno Design Summit: Heat-translator Etherpad.
My general impressions from this summit are two-fold, on both a personal and a technical level: the community itself is great, and people are doing some really amazing and useful stuff.
- It is now much easier to provision OpenStack infrastructure, and starting with the previous Havana release, Software-Defined Networking has also entered the picture and is readily available. The Icehouse and Juno releases will provide more functionality and ease of use in this regard.
- Projects like the ones mentioned above are simplifying (and automating) the process of configuring and installing software components on top of this IaaS by an order of magnitude.
- Even the process of deploying OpenStack in your own environment has taken a quantum leap, with a handful of companies specializing in this field and making life much easier. Take a look at this nice example of easily installing OpenStack on the SoftLayer public cloud! (It’s like a babushka of clouds…)
But, when I first heard of OpenStack, I thought:
“Wow, this is great! Not only do we have an open cloud infrastructure, but this should, and will, evolve into an open standard that will eventually be embraced by all cloud providers. This will make it possible to easily port applications from one vendor to another, and therefore effectively kick the concept of an abstraction layer up a notch.”
When I was thinking about an open standard, what I had in mind was similar to what TOSCA is doing for orchestrating applications for the cloud, and to what the JSR specs did for the Java community: basically, a set of APIs and services that each cloud provider should provide to the user in a consistent manner.
It looks like this is not happening; OpenStack is, in reality, becoming the de facto cloud provider, and may actually, to a certain extent, increase the vendor lock-in problem instead of eliminating it.
It is becoming synonymous with cloud, and is the main focal point of many leading technology companies, which causes it to act as a suction drain for a lot of projects that could otherwise have been extremely beneficial to other vendors.
Don’t get me wrong, I am a big fan of the OpenStack project and the community. I think something amazing evolved here which is nothing less than a cultural and technological revolution.
That said, I do believe certain opportunities may have been missed, and I think the focus is somewhat misguided. However, with its strong ecosystem and community, I believe OpenStack is at the very least positioned to correct this, take advantage of the opportunities at hand, and pretty much change the way the world does cloud.
But hey, that’s just me…