Deploying Your App Onto AWS and OpenStack With Hybrid Cloud Orchestration Using a Single Cloudify TOSCA Blueprint
There is no limit to the variety of system layouts. The most basic would be a database and an application on a single VM. However, very few operations teams have all of their infrastructure in one place.
At Cloudify, we use the term Hybrid Cloud for these real-world scenarios. It is an umbrella term that also covers non-cloud resources: all forms of compute, storage, and network resources. An umbrella term for other umbrella terms.
One of my missions over the past few months has been to create Cloudify demos that illustrate hybrid cloud design. I think the most common hybrid cloud scenario will involve some non-cloud resources, but those aren't really practical to include in a demo.
Orchestrating a hybrid cloud is not easy. Every environment has its own terminology, its own set of features, and its own limitations. On top of that, every piece of software, every version, and every build has its own configuration constellation that must be tuned to the target deployment environment.
From a demo perspective there are two stages to hybrid cloud. The first is the configuration of the hybrid cloud environment. The second is deploying an application onto the hybrid cloud. Let’s see how it’s done.
Stage 1: Setup the Hybrid Cloud Environment
To simplify these demos, I stick to a hybrid cloud scenario of two environments, one AWS and one OpenStack, connected via an IPsec tunnel. (I am working on Azure and vSphere as well.)
Cloudify handles the full deployment and configuration of the VPC. It also handles the creation of the AWS VPN Connection and the IPsec tunnel. Both are quite simple in a Cloudify blueprint:
vpc_configuration:
  type: aws_node_type
  relationships:
    - type: cloudify.relationships.depends_on
      target: openstack_virtual_machine
    - type: cloudify.relationships.depends_on
      target: aws_customer_gateway
Essentially, we want to make sure that before we request a VPN Connection from AWS, we have an AWS Customer Gateway and a virtual machine in OpenStack. The virtual machine can be any virtual appliance that provides an IPsec VPN; in this blueprint it is a FortiGate 5.4 VM.
The relationship to the aws_customer_gateway node template points to code that requests the VPN Connection configuration from AWS. It looks something like this:
customer_gateway_configuration = xmltodict.parse(vpc_connection.customer_gateway_configuration)
It then puts most of this information into runtime properties that will be used later by the IPsec tunnel configuration.
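To give an idea of what happens behind that relationship, here is a minimal sketch of such an operation, not the actual plugin code. It assumes a boto VPN connection object whose customer_gateway_configuration attribute holds the XML document returned by AWS, and it derives the cgw_internal_ip and vgw_internal_ip runtime properties that the blueprint reads below with get_attribute; the exact XML paths are assumptions.

import xmltodict
from cloudify import ctx


def store_vpn_configuration(vpn_connection):
    # Parse the XML configuration document that AWS attaches to the VPN connection.
    parsed = xmltodict.parse(vpn_connection.customer_gateway_configuration)

    # AWS returns two IPsec tunnels per VPN connection (assumed structure).
    tunnels = parsed['vpn_connection']['ipsec_tunnel']

    # Keep only the tunnel-inside addresses the FortiGate configuration needs,
    # and expose them as runtime properties for get_attribute to read later.
    ctx.instance.runtime_properties['ipsec_tunnel'] = [
        {
            'cgw_internal_ip': t['customer_gateway']['tunnel_inside_address']['ip_address'],
            'vgw_internal_ip': t['vpn_gateway']['tunnel_inside_address']['ip_address'],
        }
        for t in tunnels
    ]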
ipsec_tunnel1:
  type: cloudify.Fortinet.FortiGate.Config
  relationships:
    - type: cloudify.relationships.contained_in
      target: example_openstack_virtual_machine
    - type: cloudify.relationships.depends_on
      target: tunnel1_phase_2_interface
After the VPN Connection is created, we add the necessary configuration to the appliance and finish off by creating the IPsec tunnel.
The code here is basically just Fortigate CLI commands interpreted by the Fortinet plugin:
config_name: system interface
config_id: ipsec_tunnel1
config:
  - vdom: root
  - type: tunnel
  - ip:
      - { get_attribute: [ vpn_configuration, ipsec_tunnel, '0', 'cgw_internal_ip' ] }
      - '255.255.255.255'
  - allowaccess: ping
  - interface: port1
  - remote-ip: { get_attribute: [ vpn_configuration, ipsec_tunnel, '0', 'vgw_internal_ip' ] }
To deploy this blueprint, you need an AWS account, an OpenStack account, and an OpenStack FortiGate VM image with the license already installed.
Create an inputs file:
# Openstack Inputs
keystone_password: 'myppass'
keystone_tenant_name: ''
keystone_url: ''
keystone_username: ''
region: ''

# Fortigate VM
example_openstack_virtual_machine_image_id: ''
example_openstack_virtual_machine_flavor_id: ''
example_openstack_network_port_address_pairs: [ip_address: 0.0.0.0/0]
example_openstack_private_network_port_address_pairs: [ip_address: 0.0.0.0/0]
example_openstack_private_network_subnet_enable_dhcp: true

# AWS Inputs
aws_secret_access_key: ''
aws_access_key_id: ''

# Cloudify AMI (here using us-east-1)
example_aws_virtual_machine_image_id: ami-165bdf01
Then execute the install in local mode:
user# cfy install multicloud-nfv-example/blueprint.yaml -i inputs.yaml --task-retries=30 --task-retry-interval=30
This creates the VPC, the OpenStack networks, and all routes, as well as the FortiGate configuration.
You can also use existing infrastructure via Cloudify's usual mechanism for external resources:
example_aws_vpc:
  type: cloudify.aws.nodes.VPC
  properties:
    aws_config: { get_input: aws_configuration }
    use_external_resource: { get_input: use_existing_example_aws_vpc }
    resource_id: { get_input: example_aws_vpc_id }
    cidr_block: { get_input: example_aws_vpc_cidr }
Source: https://github.com/cloudify-examples/aws-azure-openstack-blueprint/blob/master/aws/network/vpc.yaml
Here is the log output of the IPsec configuration:
2016-11-23 13:30:28.282 LOG <multicloud-nfv-example> [ipsec_tunnel2_3fx5t6.create] INFO: Executing
config system interface
  edit vpn-464f5e27-1
    set vdom root
    set type tunnel
    set ip xxx.xxx.xxx.xxx 255.255.255.255
    set allowaccess ping
    set interface port1
    set remote-ip xxx.xxx.xxx.xxx
end
In the end, you have a fully functional multi-cloud environment. The deployment outputs give you the inputs to your demo application blueprint:
user# cfy deployments outputs -b multicloud-nfv-example
{
  "example_aws_elastic_ip": {
    "example_aws_elastic_ip": "xxx.xxx.xxx.xxx"
  },
  "example_aws_private_subnet": {
    "example_aws_private_subnet": "subnet-xxxxxxxx"
  },
  "example_aws_public_subnet": {
    "example_aws_public_subnet": "subnet-xxxxxxxy"
  },
  "example_aws_security_group": {
    "example_aws_security_group": "sg-xxxxxxxx"
  },
  "example_aws_vpc": {
    "example_aws_vpc": "vpc-xxxxxxxx"
  },
  "example_openstack_floating_ip": {
    "example_openstack_floating_ip": "xxx.xxx.xxx.xxx"
  },
  "example_openstack_group": {
    "example_openstack_group": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  },
  "example_openstack_network": {
    "example_openstack_network": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  },
  "example_openstack_network_router": {
    "example_openstack_network_router": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  },
  "example_openstack_network_subnet": {
    "example_openstack_network_subnet": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxy"
  },
  "example_openstack_private_network": {
    "example_openstack_private_network": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxy"
  },
  "example_openstack_private_network_subnet": {
    "example_openstack_private_network_subnet": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxy"
  }
}
Now, you have everything that you need to deploy one of the multi-cloud demos.
Stage 2: Deploy Your App Onto the Hybrid Cloud Demo
The two hybrid demos cover two related use cases: Cloud Portability and Cloud Bursting.
Cloud Portability refers to the idea that you can take one blueprint, run it in one environment, say AWS, and then run it in a different environment, like OpenStack, without modifying the blueprint. Cloud Bursting means that you have an application running in one cloud, like AWS, and you want to horizontally scale out into another cloud, like OpenStack. At the blueprint level, this requires that the application be designed to work in both cases. That presents a bit of a challenge for Cloudify and for TOSCA, because blueprints are meant to be very specific: the node_template of a compute node should say not only which cloud it is created in, but also which region, availability zone, and network.
The solution in Cloudify is to separate the application and infrastructure layers in the blueprint. In the application layer, we put our application. In the infrastructure layer, we put node_templates for all possible compute nodes. Between them, we put an intermediary compute node that essentially "decides" which cloud to deploy to.
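Conceptually, the layering looks something like the condensed sketch below. The node names are illustrative only; cloudify.nodes.DeploymentPlan and cloudify.dp.relationships.plans come from the plugin used in the examples that follow, while cloudify.nodes.ApplicationModule stands in for whatever your application type is.

# Illustrative skeleton only; see the full haproxy example below for real properties.
my_application:                          # application layer
  type: cloudify.nodes.ApplicationModule
  relationships:
    - type: cloudify.relationships.contained_in
      target: application_host           # the intermediary node

application_host:                        # intermediary node that "decides" the cloud
  type: cloudify.nodes.DeploymentPlan
  relationships:
    - type: cloudify.dp.relationships.plans
      target: aws_virtual_machine        # infrastructure layer, option A
    - type: cloudify.dp.relationships.plans
      target: openstack_virtual_machine  # infrastructure layer, option B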
Cloud Portability Example
For example, suppose we have a load balancer that has backends in both AWS and Openstack.
We will have a node template for the load balancer:
haproxy:
  type: nodecellar.nodes.MonitoredHAProxy
  properties:
    backend_app_port: { get_property: [ nodecellar, port ] }
  relationships:
    - type: cloudify.relationships.contained_in
      target: haproxy_frontend_host
Notice that it is "contained in" haproxy_frontend_host.
We might want to put the HAProxy load balancer in either AWS or OpenStack, so we declare both possibilities in the blueprint:
haproxy_aws_virtual_machine:
  type: cloudify.aws.nodes.Instance
  capabilities:
    scalable:
      properties:
        default_instances: { get_input: haproxy_aws_instances }
  properties:
    aws_config: { get_input: aws_configuration }
    agent_config:
      install_method: none
      user: { get_input: aws_agent_username }
      key: { get_input: aws_agent_local_path_to_key_file }
      port: { get_input: aws_agent_port }
    use_external_resource: { get_input: use_existing_aws }
    resource_id: { get_input: haproxy_aws_virtual_machine }
    image_id: { get_input: haproxy_aws_virtual_machine_image_id }
    instance_type: { get_input: instance_type }
    parameters:
  relationships:
    - type: cloudify.aws.relationships.instance_contained_in_subnet
      target: example_aws_public_subnet
haproxy_openstack_virtual_machine:
  type: cloudify.openstack.nodes.Server
  capabilities:
    scalable:
      properties:
        default_instances: { get_input: haproxy_openstack_instances }
  properties:
    openstack_config: { get_input: openstack_configuration }
    agent_config:
      install_method: none
      user: { get_input: openstack_agent_username }
      key: { get_input: openstack_agent_local_path_to_key_file }
      port: { get_input: openstack_agent_port }
    use_external_resource: { get_input: use_existing_os }
    resource_id: { get_input: haproxy_openstack_virtual_machine }
    server:
      image: { get_input: image_id }
      flavor: { get_input: example_openstack_virtual_machine_flavor_id }
  relationships:
    - type: cloudify.relationships.contained_in
      target: example_openstack_network
    - target: example_openstack_key
      type: cloudify.openstack.server_connected_to_keypair
    - target: haproxy_openstack_network_port
      type: cloudify.openstack.server_connected_to_port
Each node template uses an input, haproxy_aws_instances or haproxy_openstack_instances, to decide whether any instances of that node will be deployed. These inputs are mutually exclusive: one is "1" and the other is "0".
Next you have the “intermediary” node that contains the rule set, which determines which target VM to deploy the Load Balancer software on:
haproxy_frontend_host:
  type: cloudify.nodes.DeploymentPlan
  capabilities:
    scalable:
      properties:
        default_instances: 1
  properties:
    deployment_plans:
      haproxy_aws_virtual_machine:
        capacity: { get_input: haproxy_aws_instances }
      haproxy_openstack_virtual_machine:
        capacity: { get_input: haproxy_openstack_instances }
  relationships:
    - type: cloudify.dp.relationships.plans
      target: haproxy_aws_virtual_machine
    - type: cloudify.dp.relationships.plans
      target: haproxy_openstack_virtual_machine
This is accomplished by giving the intermediary node a set of rules for the nodes that it "manages", in this case the AWS VM and the OpenStack VM. We map the maximum number of node instances to a capacity, which takes the same inputs as the node templates above, so the target cloud is assigned consistently.
Thus, if haproxy_openstack_instances is 1, the only node instance created will be the OpenStack one.
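For example, to deploy the load balancer only on OpenStack, the relevant inputs would look something like this (values shown purely for illustration):

haproxy_aws_instances: 0
haproxy_openstack_instances: 1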
(At present, the plugin does not validate these inputs, so providing invalid values is the user's responsibility.)
Cloud Bursting Example
Bursting means scaling horizontally into a parallel cloud. For example, we want a maximum of 10 instances in AWS, and once that threshold is reached, we want to provision instances in OpenStack. The burst example uses the same "intermediary" node mechanism, this time to give the intermediary node constraints for scaling into the parallel cloud. Suppose that, just like the load balancer, we have configured AWS and OpenStack web server backend node templates, and we want to scale into OpenStack after there are 10 AWS instances.
nodejs_host:
  type: cloudify.nodes.DeploymentPlan
  capabilities:
    scalable:
      properties:
        default_instances: { get_input: nodejs_instances_total }
  properties:
    deployment_plans:
      nodejs_aws_virtual_machine:
        capacity: 10
      nodejs_openstack_virtual_machine:
        constraints:
          nodejs_aws_virtual_machine: 10
  relationships:
    - type: cloudify.dp.relationships.plans
      target: nodejs_openstack_virtual_machine
    - type: cloudify.dp.relationships.plans
      target: nodejs_aws_virtual_machine
With this configuration, every scale action scales the AWS target until there are 10 AWS instances; after that, it starts creating OpenStack instances.
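As a rough illustration, and assuming a Cloudify 4.x CLI with the built-in scale workflow (the deployment name here is hypothetical, and flag and parameter names can differ between versions), each burst step could be triggered with something like:

user# cfy executions start scale -d multicloud-app-example -p scalable_entity_name=nodejs_host -p delta=1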
For complete deployment instructions see the READMEs for these blueprints.