Multi-Cloud & Stack Lab Scenario For AWS



This end-to-end solution package uses pre-configured applications to demonstrate Cloudify functionality, providing a rapid way to test drive the Cloudify platform in a typical IT environment and experience first-hand how to apply automation across your clouds and applications. If you have not received a lab link, click the button below to get started.

Get Your Lab Environment

Launch Lab

The solution package tells the story of an organization with a private OpenStack cloud and a public AWS cloud, as well as a container cluster deployed with Kubernetes.

We have built a lab scenario that demonstrates how a typical enterprise can apply central orchestration across more than one cloud, and even test drive containers.

You can also run the lab with OpenStack and Azure.

The end-to-end lab scenario will result in a deployment that looks like this:

The lab environment enables you to test drive Cloudify at your own pace, with three different modules:

  1. Beginner Scenario ~10 minutes to complete

  2. Intermediate Scenario ~30 minutes to complete

  3. Advanced Scenario ~25 minutes to complete

This is a cumulative exercise all the way through the advanced scenario: each phase's completion time includes the previous scenario. Aside from the Kubernetes cluster, which should be launched at the beginning if you intend to go through the full scenario, all steps should be performed in the order of the documentation.
 The full end-to-end lab takes approximately 65 minutes to complete.

Below you will find a detailed explanation of how to launch the lab, how long each phase takes, documentation on how to use it, and ultimately how you can leverage the lab environment to demo Cloudify functionality in the remaining ~hour of usage (full advanced scenario).


Many typical large-scale organizations would like to achieve multi-layer automation across their entire stack.  A typical architecture for a large-scale organization’s stack looks like this:


The end-to-end advanced lab scenario will enable you to create a preconfigured stack with all of the blueprints and plugins needed to run the scenario, focusing on the parts of the stack highlighted above.
This end-to-end example is an advanced scenario targeted at users who would like to learn how to leverage Cloudify step by step: starting with a single-cloud scenario with a simple NodeJS application, through a multi-cloud scenario with service chaining, and finally a multi-stack scenario with Kubernetes.
The example can also be done in parts to incrementally learn how to work with Cloudify, and is divided into three separate parts for beginner, intermediate, and advanced users, where each phase is a prerequisite for the next.


  • If you intend to complete the end-to-end advanced lab scenario, it is suggested that you launch the Kubernetes cluster first, as this can take 20-30 minutes.

  • Please note that this lab scenario requires you to have your own AWS environment and will prompt you for credentials during setup; everything else is provided by the lab itself.

The story in the end-to-end solution package covers these Cloudify features:

  • Multiple management networks (Multi-cloud)

  • Traditional VMs, containers, and Kubernetes orchestration (Hybrid-cloud & stack)

  • Multi-tenancy

  • Deployment proxy (Deployments as a Service)

  • Global and tenant secrets

  • Scaling

 This will enable you to test drive some of the following functionality that Cloudify provides:

  • Coordinated multi-cloud deployment

  • Coordinated hybrid cloud deployment

  • Multi-blueprint composition

  • Kubernetes cluster deployment

  • Templatized Kubernetes resource deployment

  • OpenStack compute and network orchestration

  • AWS compute and network orchestration

  • Cross-template/cross-cloud scaling

  • MariaDB, HAProxy, Drupal, and WordPress orchestration

  • Service Chaining



Lab Creation (~15 Minutes)

  1. Go to the Cloudify website.

  2. Click the link “Test Drive“.

  3. On the next page, click the link “Launch Lab”.

  4. Fill out the form, click the link “Create Lab & Get Unique URL”.

  5. Open the email titled “Your Cloudify Lab Environment Has Been Created”.

  6. Click the URL linked after “Your lab is available at the following URL:…”.

  7. Scroll down and click on the “Start Lab” button.

  8. This will initialize the lab launching process. You will need to wait several minutes for the lab to finish deploying; the progress meter should read 100% before you can begin.

  9. When the lab is ready, you can get started with the beginner scenario.



Once this is completed you will have the following available in your environment:

  • A preconfigured running OpenStack private cloud

  • A running Cloudify manager with preconfigured blueprints and plugins required to launch the entire advanced scenario



Beginner Scenario (~10 Minutes):


  1. In the left menu navigate to Blueprint Catalog.

  2. Find the “nodecellar-auto-scale-auto-heal-blueprint” blueprint and click Upload. You are now uploading a blueprint to the Cloudify Manager.


  3. In the dialog box that opens, give the blueprint a name you will recognize, for example nodecellar, and select the openstack.yaml blueprint from the drop-down menu below. Click Upload.


  4. Once you have completed this step, navigate to Local Blueprints in the left menu, where you should now find the blueprint you just created. Click the Deploy button on the bottom right.



  5. In the dialog box that opens, give the deployment a name you will recognize. Note: You do not need to change any of the default input values.



  6. You are now creating a deployment. Once you have completed this step, navigate to Deployments in the left menu.


  7. Find the deployment you just created. On the right side of the screen you will see a small hamburger menu associated with this deployment. Click on it and select Install from the menu. This will install the nodecellar application. Installation should take about 4-5 minutes.


You now have an OpenStack private cloud running a simple NodeJS application.
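
If you prefer working from a terminal, the Cloudify CLI (`cfy`) can drive the same steps as the UI. Below is a minimal sketch, assuming a configured `cfy` profile pointed at your lab's manager; the archive filename and the blueprint/deployment IDs are illustrative:

```shell
# Upload the blueprint archive, selecting the OpenStack variant,
# then create a deployment from it and run the install workflow.
cfy blueprints upload nodecellar-auto-scale-auto-heal-blueprint.zip \
    -b nodecellar -n openstack.yaml
cfy deployments create nodecellar -b nodecellar
cfy executions start install -d nodecellar
```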

What you can do next:

  1. Under the Deployments menu you can scroll down to Deployment Outputs and find the IP of the running NodeJS application.

  2. Open a new browser tab, paste in the application’s IP and see it running.

  3. Test drive Cloudify's auto-scale and auto-heal features, resource isolation, and more.

  4. Try to upload your own blueprint to the Cloudify manager and see it running.

Continuing to the next scenario:

If you intend to continue to the next step, you will need to uninstall the application you have just deployed.

  1. Navigate to Deployments and click on the nodecellar deployment.

  2. At the top you will find a menu button called Execute workflow. Select the Uninstall workflow, and in the dialog box that opens, click the Execute button.

  • Your nodecellar blueprint will uninstall and you will be ready to continue to the next lab scenario.
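
The equivalent step from the CLI, assuming the deployment was named nodecellar, is a one-liner sketch:

```shell
# Run the uninstall workflow against the nodecellar deployment.
cfy executions start uninstall -d nodecellar
```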

Intermediate Scenario (~30 Minutes) – Two Parts

This scenario will demonstrate how to leverage your existing OpenStack environment: install a global database for multiple applications with a load balancer in front of it, build a service chain, and then create a multi-cloud deployment.

Part 1 – OpenStack Only

  1. Navigate to Local Blueprints in the left menu. Find the db blueprint. Click on the Deploy button. When the dialog box opens, name the deployment db, and click Submit.

  2. Next navigate to Deployments and select the Install workflow from the right Execute workflow menu. This will install a MariaDB/Galera Database cluster on OpenStack.


(Screenshot: Create Deployments form)

  3. Wait for the install workflow to complete. Under the deployments on the same page, select the db deployment, then scroll down to Deployment Outputs. Copy the single IP in the cluster_addresses output.


(Screenshot: Deployment Outputs for the database)

  4. Go to Local Blueprints in the side menu. Find the lb blueprint. Click Deploy. Name the deployment lb, provide the IP you just copied from the Deployment Outputs as the value for the application_ip input, and click Submit. Navigate to Deployments and select the Install workflow from the Execute workflow menu. This will install a load balancer on OpenStack so the apps can use the MariaDB/Galera cluster as a backend.


(Screenshot: Create Deployments form)

  5. Go to Local Blueprints. Find the drupal blueprint. Click Deploy. Name the deployment drupal. You will need to associate the db and lb deployments you have just created with the Drupal blueprint: enter db and lb in the db_deployment and lb_deployment fields, and click Submit. Navigate to Deployments and select the Install workflow from the Execute workflow menu. This will install a Drupal app on OpenStack.

You now have MariaDB, HAProxy, and Drupal running on OpenStack.
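
For reference, the same Part 1 flow can be sketched with the Cloudify CLI, assuming the blueprint and deployment names used above; the input keys application_ip, db_deployment, and lb_deployment are the same ones the UI forms expose, and the copied IP is a placeholder you must fill in yourself:

```shell
# Read the database cluster IP from the db deployment's outputs.
cfy deployments outputs db

# Create and install the load balancer, passing the copied IP as an input.
cfy deployments create lb -b lb -i application_ip=<IP_FROM_DB_OUTPUTS>
cfy executions start install -d lb

# Create and install Drupal, chained to the db and lb deployments.
cfy deployments create drupal -b drupal -i db_deployment=db -i lb_deployment=lb
cfy executions start install -d drupal
```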

What you can do next:

  1. Navigate to Deployments and scroll down to Deployment Outputs to find the IP of the Drupal application you have just deployed. You will be able to see Drupal running once you paste the IP address into a new browser tab.

  2. Upload additional blueprints that leverage the database and load balancer you have created, and experiment with Cloudify policies for managing load and global resources.

 Continuing to the next scenario:

To continue to the next step (the multi-cloud scenario), you will need to uninstall Drupal, the load balancer, and the database, in that order.

    1. Similar to uninstalling the nodecellar application, navigate to Deployments and click on “drupal”.
    2. At the top you will find a menu button called Execute workflow. Select the Uninstall workflow, and in the dialog box that opens, click the Execute button.

Note: This can also be performed from the right hamburger menu associated with each deployment.

  • Repeat these steps for both the db and lb blueprints, and you will be ready to continue to the next lab scenario.

Part 2 – Multi-Cloud Scenario

After you complete Part 1, you will be able to replicate this scenario and deploy it to AWS to experience a Cloudify multi-cloud deployment. To do so, you will first need to configure your AWS account details. (You can also use Azure or GCP credentials if you prefer.)


Configuring AWS Account Details

This demo orchestrates a multi-cloud application; the lab itself runs on an OpenStack private cloud provided by Cloudify. To test drive the multi-cloud functionality, you will need to configure your own AWS account details before completing this next part.

  1. Navigate to System Resources in the left menu.


(Screenshot: Left navigation menu)

  2. Scroll down to the Secret Store Management section.

(Screenshot: Add Secrets panel)

  3. Here you will create the following AWS secrets:

    • aws_secret_access_key

    • aws_access_key_id

    • ec2_region_name

    • ec2_region_endpoint

    • availability_zone

  4. Click the Create button to enter your cloud secrets. You will need to enter these values one at a time.


(Screenshot: Add Secrets form)

  5. Once completed, navigate to Local Blueprints, where you will find the aws-example-network blueprint.

  6. As with the other blueprints, click Deploy, name the deployment in the dialog box that opens (e.g. aws-example-network), and click Submit.

  7. Navigate to Deployments and execute the Install workflow from the right-hand menu of the aws-example-network deployment.

  8. You can now repeat the steps to deploy MariaDB and HAProxy on AWS by selecting the db-aws and lb-aws blueprints, respectively.

  • You can now install the Drupal blueprint on OpenStack, leveraging the remote database and load balancer.

  • This time, when deploying the Drupal blueprint, enter db-aws and lb-aws in the db_deployment and lb_deployment fields.


You now have a local application running on OpenStack with a service chain to a remote load balancer and database running in AWS.
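
The same multi-cloud setup can be sketched from the CLI. The secret names below match the list above; the region values, placeholder credentials, and deployment IDs are illustrative assumptions:

```shell
# Store AWS credentials as Cloudify secrets (one per value).
cfy secrets create aws_access_key_id -s <YOUR_ACCESS_KEY_ID>
cfy secrets create aws_secret_access_key -s <YOUR_SECRET_ACCESS_KEY>
cfy secrets create ec2_region_name -s us-east-1
cfy secrets create ec2_region_endpoint -s ec2.us-east-1.amazonaws.com
cfy secrets create availability_zone -s us-east-1a

# Create the AWS network, then the remote database and load balancer.
cfy deployments create aws-example-network -b aws-example-network
cfy executions start install -d aws-example-network
cfy deployments create db-aws -b db-aws && cfy executions start install -d db-aws
cfy deployments create lb-aws -b lb-aws && cfy executions start install -d lb-aws

# Point Drupal on OpenStack at the remote AWS services.
cfy deployments create drupal -b drupal \
    -i db_deployment=db-aws -i lb_deployment=lb-aws
cfy executions start install -d drupal
```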

  1. Now you can test drive adding applications to your OpenStack and AWS clouds leveraging your global database, and create scaling groups as well.

  2. Try creating isolated resources and tenants.

Now you can continue to the final step of deploying an application in a Kubernetes cluster running on OpenStack.

 (Please note: if this step was not performed at the beginning, it will take approximately 20 minutes to deploy the Kubernetes cluster.)

Advanced Scenario (~25 Minutes)

This scenario requires a Kubernetes cluster. This is provided as part of your lab; you will just need to install it. Once you complete this scenario, your full end-to-end deployment will look like this:


 Navigate to Local Blueprints in the left menu, find k8s-e2e, and click Deploy.

  1. Name your deployment in the dialog box that will open, e.g. kubernetes, and click Submit.

  2. Navigate to Deployments in the left menu, select the kubernetes deployment you just created.

  3. Under the Execute workflow menu button, select the Install workflow and click Execute in the dialog box that will open.


  • This will launch a Kubernetes cluster on OpenStack.

  • Please Note:

    • If you are performing this action as the first step before you get started, you can now go back to the Beginner or Intermediate Scenarios to continue your test drive with Cloudify.

    • This is what the deployment will look like once you have completed the install process as a first step.



    • If you are performing this task in the order of the scenarios, this step will take approximately 15-20 minutes to install, so feel free to go get some coffee and resume from the next step.

  • Go to Local Blueprints. Find db-lb-app. Click Deploy. Change the db_blueprint input to db-aws and the lb_blueprint input to lb-aws, then click Submit. Navigate to Deployments and run the Install workflow. This will install a WordPress app on OpenStack.

Scaling the Application

Now you can scale and update your application to experience Cloudify’s scaling capabilities. To do so:

  1. Wait for the install workflow to complete. Click on the deployment. Click on the Execute workflow menu button.

  2. Select “Scale and Update”. Change the lb_deployment_id parameter to lb-aws and the db_deployment_id parameter to db-aws. Change the timeout to 1000. Click Execute. This will scale the MariaDB/Galera cluster and add a backend node to the HAProxy deployment.
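
From the CLI, this would look roughly like the sketch below. The workflow ID scale_and_update, the parameter names, and the deployment name placeholder are assumptions based on the UI labels above, so verify the exact names in the Execute workflow menu before running anything:

```shell
# Run the custom scale-and-update workflow with the AWS deployment IDs.
# Workflow ID and deployment name are assumptions; check them in the UI.
cfy executions start scale_and_update -d <DB_LB_APP_DEPLOYMENT> \
    -p lb_deployment_id=lb-aws -p db_deployment_id=db-aws \
    --timeout 1000
```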

Next Steps

After you have finished with these steps, explore some of the other examples that are designed for this lab.

If you have credentials for GCP or Azure, try installing the DB-LB-APP, starting with the corresponding example network setup.

Feel free to reach out to our user group if you have any questions or need help getting started; we'd be happy to hear feedback about your experience running this lab.

