The TOSCA Cloud State of the Union


As a core committer on the Heat Translator project, I have had the opportunity to learn quite a bit about TOSCA over the course of the last year.

When we started developing Cloudify 3.0 (about a year and a half ago), we knew we needed a better syntax for describing cloud applications and services than the one we had been using in our 2.x versions. We initially considered inventing a new YAML-based language for describing cloud app services, but decided we ought to look for an existing standard instead of reinventing the wheel.

And that's when we found TOSCA, which we thought had great potential.

OpenStack Heat was initially launched as an alternative to AWS CloudFormation, which is a closed-source service. Heat, one of the largest projects to adopt TOSCA as one of its main templating languages, chose TOSCA because it brought standardized templating to cloud orchestration projects, and this, in fact, was a major driving force behind TOSCA's adoption by larger projects.

TOSCA has been gaining momentum as a result of the need for a freely available, vendor-agnostic syntax that lets you describe more advanced scenarios, particularly complex topologies and applications, in a standardized way.

The only catch for us in adopting TOSCA initially was that it was written in XML, which was a bit too cumbersome for what we were looking to do.

Aside from this, we found the TOSCA specification to be very much aligned with what we were looking to achieve, so we set to work transforming it to YAML, as our team agreed that YAML was the language best suited to the job. To make a long story short, at the OpenStack Hong Kong Summit in 2013 we met the active contributors on the TOSCA committee, who told us about their intentions and initial work on converting TOSCA to YAML. It seemed we shared the same ideas, so we decided to meet after the summit, and that was when we joined the TOSCA committee.

At the same time, we also started contributing code to the OpenStack heat-translator project, whose main goal is to translate a TOSCA template into an OpenStack Heat template. Heat's template syntax was closely aligned with CloudFormation in its early stages, and the project was primarily infrastructure-focused early on. However, as the project grew and evolved and started moving up the stack, it needed a syntax that could express more complex scenarios and topologies for cloud apps, and once TOSCA was converted to YAML it was quickly adopted by the project.

An interesting fact is that the Heat project agreed to have the heat-translator project integrated within it, which de facto established TOSCA as a primary templating language. The moment it was integrated into the official project, it was a sign that OpenStack was moving towards TOSCA standardization for the entire project.

Cloudify 3.1 supports most of the TOSCA spec, and we intend to cover more areas of the spec in our next versions. One of the coolest features introduced in Cloudify 3.1 is the ability to run a TOSCA template locally (that is, on your own laptop or PC) without the need for a cloud of any kind. This makes it very easy for people who want to play around with the spec and learn about it, as well as about Cloudify.

The basic gist: the NodeCellar app installs a Node.js app with a MongoDB backend. Ordinarily, however, this example is run with a Cloudify Manager installed, and for the most part in a cloud environment.

For the developers among you, the ability to run TOSCA templates locally basically enables you to test your blueprints and plugins locally, and even to use Cloudify almost like a configuration management tool. We have also taken to using this local-workflows feature to bootstrap the Cloudify Manager, as it simplifies everything immensely.

You can check out how this would look in action below.

# Install Cloudify

  1. Use an Ubuntu machine.
  2. Make sure you have `gcc` and `python-dev` installed.
  3. Run `pip install cloudify`.
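Since step 2 is easy to miss, a quick pre-flight check before step 3 can save you a failed build. Here is a minimal sketch; the tool names are assumed from the steps above:

```shell
# Pre-flight check for the install steps above: make sure the compiler
# toolchain and pip are on the PATH before running `pip install cloudify`.
for tool in gcc python pip; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
done
```

If anything is reported missing, install it with `apt-get` first.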

# Download The Example

Download Cloudify’s NodeCellar example from:

https://github.com/cloudify-cosmo/cloudify-nodecellar-example/archive/3.1.zip

# Unpack…

`cd cloudify-nodecellar-example`

# Initialize

`cfy local init -p local-blueprint.yaml`

The `cfy local init` command initializes the current working directory to work with the given blueprint.

After this stage, you can run any workflow the blueprint contains.

# Install NodeCellar

In order to install NodeCellar, let's execute the `install` workflow:

`cfy local execute -w install`

This command will install all the application components on your local machine.

(Don't worry, it's all installed under the `/tmp` directory.)

Once it's done, you should be able to browse to [http://localhost:8080](http://localhost:8080) and see the NodeCellar application.
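You can also verify this from the command line by mirroring the liveness detection the workflow itself performs. This is a generic sketch that assumes only that the app listens on localhost:8080, as above:

```shell
# Poll the NodeCellar endpoint until it answers with HTTP 200, or give up
# after ~5 seconds. curl prints "000" while nothing is listening yet.
code=000
for i in $(seq 1 5); do
  code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8080 || true)
  if [ "$code" = "200" ]; then
    break
  fi
  sleep 1
done
echo "last status: $code"
```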

The output would look similar to this:

```
2014-12-18 10:02:41 CFY <local> [nodecellar_31177] Starting node
2014-12-18 10:02:41 CFY <local> [nodecellar_31177.start] Sending task 'script_runner.tasks.run'
2014-12-18 10:02:41 CFY <local> [nodecellar_31177.start] Task started 'script_runner.tasks.run'
2014-12-18 10:02:42 LOG <local> [nodecellar_31177.start] INFO: Executing: /tmp/tmp1hAdu3-start-nodecellar-app.sh
2014-12-18 10:02:43 LOG <local> [nodecellar_31177.start] INFO: MongoDB is located at localhost:27017
2014-12-18 10:02:43 LOG <local> [nodecellar_31177.start] INFO: Starting nodecellar application on port 8080
2014-12-18 10:02:43 LOG <local> [nodecellar_31177.start] INFO: /tmp/d5c3b8d3-bc14-43b1-8408-4a5f7216641b/nodejs/nodejs-binaries/bin/node /tmp/d5c3b8d3-bc14-43b1-8408-4a5f7216641b/nodecellar/nodecellar-source/server.js
2014-12-18 10:02:43 LOG <local> [nodecellar_31177.start] INFO: Running Nodecellar liveness detection on port 8080
2014-12-18 10:02:43 LOG <local> [nodecellar_31177.start] INFO: [GET] http://localhost:8080 000
2014-12-18 10:02:43 LOG <local> [nodecellar_31177.start] INFO: Nodecellar has not started. waiting…
2014-12-18 10:02:45 LOG <local> [nodecellar_31177.start] INFO: [GET] http://localhost:8080 200
2014-12-18 10:02:45 LOG <local> [nodecellar_31177.start] INFO: Sucessfully started Nodecellar (22584)
2014-12-18 10:02:45 LOG <local> [nodecellar_31177.start] INFO: Execution done (return_code=0): /tmp/tmp1hAdu3-start-nodecellar-app.sh
2014-12-18 10:02:45 CFY <local> [nodecellar_31177.start] Task succeeded 'script_runner.tasks.run'
2014-12-18 10:02:45 CFY <local> 'install' workflow execution succeeded
```

# Uninstall NodeCellar

To uninstall the application we run the `uninstall` workflow:

`cfy local execute -w uninstall`

As of now, 3.1 supports these leading aspects of the TOSCA spec: inputs, outputs, intrinsic functions, relationships between nodes, versioning, workflows, and more. In our next versions we'll be adding support for requirements and capabilities, nested templates, and policies, including placement (affinity and anti-affinity), stack governance, and more.
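To make these concrete, here is a minimal, hypothetical blueprint sketch in Cloudify's TOSCA-based YAML DSL showing an input, intrinsic functions, a relationship, and an output. The node names and exact type names are illustrative assumptions, not taken from the NodeCellar example:

```yaml
tosca_definitions_version: cloudify_dsl_1_0  # DSL version tag; check your Cloudify release

inputs:
  webserver_port:          # an input with a default, overridable at deploy time
    default: 8080

node_templates:
  host:
    type: cloudify.nodes.Compute        # illustrative built-in type
  webserver:
    type: cloudify.nodes.WebServer      # illustrative built-in type
    properties:
      port: { get_input: webserver_port }          # intrinsic function: read an input
    relationships:
      - type: cloudify.relationships.contained_in  # the webserver runs on the host
        target: host

outputs:
  endpoint:
    value: { get_property: [webserver, port] }     # intrinsic function: read a property
```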

At the end of the day, this TOSCA engine, together with Cloudify, is what gives you a free hand to decide which cloud you want to work with, since Cloudify makes any cloud pluggable. This ultimately enables you to run TOSCA templates on any cloud, and the same of course goes for any configuration management or other tools you're working with, which Cloudify also integrates with via plugins written in the same syntax. Although TOSCA doesn't say much about CM tools, the integration with Cloudify is what makes this possible. In the past you would have needed to script your own code, or use additional tools limited to the deployment aspects rather than to orchestration as a whole.
