Scaling Rules with Cloudify

The Cloud

The cloud gives you the option to use as many resources as you want, essentially “endless” resources on demand, where you only pay for what you use.
In a world where everything is dynamic and IT environments are constantly changing, this is an increasingly important capability.
To take advantage of the promise of the cloud, you need to be able to migrate to it in a way that isn't too painful from a cost and resource perspective. To this end, Cloudify enables you to deploy an application to any cloud with no code changes.
One of the cool features of Cloudify is that it can help you set up environments while taking into account specific circumstances that affect your application's performance, and ultimately your bottom line.
For example, what happens if your deployment or environment is under heavy load or experiencing peaks? How can you maintain your SLAs for your clients? What happens if you have too many machines? How can you downsize your deployments without adversely affecting your customers and users in the process?

Scaling Out

To this end, Cloudify comes with a built-in scaling mechanism that can be used both manually and automatically.
Generally speaking, Cloudify can allocate as many machines as you'd like on demand. All you need to do is edit your recipe and specify the number of instances you need (Tomcat instances, for example), and Cloudify, through its cloud driver, has the cloud allocate the VMs and then deploys your application on them.
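For example, the relevant part of a Tomcat service recipe might look roughly like this (a minimal sketch in the Groovy-based recipe DSL; the service name, values, and exact property names are illustrative, so check the Cloudify wiki for your version's syntax):

    service {
        name "tomcat"

        // Number of instances to deploy; Cloudify asks the cloud, through its
        // cloud driver, to allocate a VM for each one and installs the service on it.
        numInstances 2

        // Bounds that later scaling (manual or automatic) must stay within.
        minAllowedInstances 1
        maxAllowedInstances 4
    }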
Let’s first take a look at the manual scaling option.

Manual Scaling

Manual scaling, as the name implies, requires human involvement from a sysadmin or cloud admin. The admin monitors the environment, either through Cloudify (which provides a built-in application monitoring mechanism based on metrics specified in the recipe and displayed in the Cloudify web UI) or through any other monitoring tool that can be integrated with Cloudify.
When the admin sees that a metric threshold has been breached, for example that the environment is under heavy load or too busy, they can choose to add VMs or instances of any service as needed, without affecting the existing environments.
Through manual scaling, Cloudify can also kill machines in the same manner, without end users seeing any downtime at all.
Manual scaling is easily performed through the Cloudify CLI by invoking the set-instances command and specifying the number of instances you want.
For example, if you want four MySQL instances, all you need to do is invoke set-instances mysql 4, and within a few minutes the specified instances will be up and running.
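In a typical Cloudify shell session this might look like the following sketch (the controller URL and the application name are placeholders for your own environment):

    connect <your-cloud-controller-url>
    use-application petclinic
    set-instances mysql 4

Cloudify then works through the cloud driver to allocate or release VMs as needed and brings the mysql service to the requested number of instances.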

Automatic Scaling (Auto-Scaling)

As with most IT processes, you'll often want to automate this rather than have someone monitoring the system manually.
That's one of the great things about software, more specifically the cloud, and even more specifically Cloudify: it automates not only the installation, but also the configuration and the runtime behavior.
With its automatic scaling rules mechanism, you can define, in advance, the metrics you'd like to monitor. These can be Cloudify's built-in metrics, which are displayed in the UI and can feed the scaling rules mechanism.
In addition to these, you can define custom metrics through Groovy code, Java code, shell scripts, or any other method for retrieving data. Besides being viewable in the UI, these metrics can also serve as triggers for the auto-scaling mechanism.
Here are some examples of metrics: CPU utilization, memory usage, the number of busy or active threads, and really any other metric you need.
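To make this concrete, here is a rough sketch of how a custom metric such as a busy-thread count could be exposed from a service recipe (the monitors closure below and the way the value is gathered are illustrative assumptions about your own recipe, not a definitive reference):

    service {
        name "tomcat"

        lifecycle {
            // The monitors closure returns a map of metric names to values.
            // How the value is obtained is up to you: Groovy code, Java code,
            // or a shell script. The count below is a placeholder.
            monitors {
                def busyThreads = 0   // replace with your own measurement logic
                return ["Busy Threads" : busyThreads]
            }
        }
    }

Anything the closure can compute and return as a name/value pair can be treated as a metric.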
These metrics are sampled at a predefined interval. You choose the polling cycle, for example three seconds or ten seconds, and each sample is compared against thresholds defined in advance.
These thresholds can be maximums or minimums. The system then decides what to do based on the sampled value.
For example, if a threshold is breached, whether it is too high or too low, the automatic scaling rules mechanism can decide to launch new machines or kill machines, as predefined in the rule, or to act only after the average, maximum, or minimum across all your instances breaches the threshold.
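As a rough illustration, a scaling rule of this kind might look like the following in a recipe (the metric name, threshold values, and property names are assumptions that can vary between Cloudify versions, so treat this as a sketch and consult the wiki):

    service {
        name "tomcat"
        numInstances 1
        minAllowedInstances 1
        maxAllowedInstances 4

        scalingRules ([
            scalingRule {
                serviceStatistics {
                    // the metric to watch, aggregated over a moving time window
                    metric "Total Requests Count"
                    movingTimeRangeInSeconds 20
                }
                highThreshold {
                    value 100             // breached: scale out by one instance
                    instancesIncrease 1
                }
                lowThreshold {
                    value 10              // breached: scale in by one instance
                    instancesDecrease 1
                }
            }
        ])
    }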
Well… what about the time it takes to launch or kill machines? Won't the rules continue to be triggered in the interim?
It's funny you should ask, because that is indeed something the system takes into account. It's called the cooldown period, and you can and should predefine it.
For example, if the threshold is breached and the scaling mechanism registers that it needs to launch two additional machines, this can take time depending on the cloud, the zone, and the load. During that time, you obviously don't want Cloudify to go into an endless loop of adding redundant machines.
So you need to set a cooldown period in advance, which tells Cloudify how long to disable the scaling rules mechanism once it has been triggered. For example, you can define a five-minute cooldown period after machines have been launched or killed, and the auto-scaling mechanism disables itself for that time.
Once the machines are added, the scaling rules are automatically re-enabled. Pretty cool.
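In recipe terms, the cooldown and the polling cycle are simply additional service-level settings, along these lines (again a hedged sketch; the property names and values are assumptions, chosen to match the ten-second polling and five-minute cooldown examples above):

    service {
        name "tomcat"

        // how often the metrics are sampled (the polling cycle)
        samplingPeriodInSeconds 10

        // how long the scaling rules stay disabled after a scale action fires
        scaleCooldownInSeconds 300   // five minutes

        scalingRules ([
            // ... as in the previous sketch ...
        ])
    }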

The current Cloudify version enables you to specify scaling rules per service. Take Tomcat and MySQL, for example: within the Tomcat service, you can define a metric that relates only to the Tomcat service.

Likewise, within the MySQL service, you can only add machines to, and remove machines from, the MySQL service.

Looking Ahead

In future versions, Cloudify will have an enhanced scaling rules mechanism that will be able to check metrics from other services as well and act upon them, and the available actions will include any feasible action in the system, not just adding and removing machines.
For example, if you’re in the Tomcat service, you will be able to update the MySQL database, or even add or remove machines from other clusters.
The ability to use either manual or automatic scaling gives you maximum control over your system.
With manual scaling, you have a failsafe for when rules weren't defined in advance and there's an unexpected spike in the system.
Auto-scaling lets you define rules in advance according to known trends, for example eCommerce sites on Black Friday, or sites like the Oscars site on their big night, the NBA Finals, and more.
This way, even if you find yourself without the correct rules in place, you can take manual measures to scale your system on demand, in real time.
Here's a video that shows how to invoke scaling rules from the Cloudify Web UI:

Learn more about cloud automation and scaling rules in our wiki.
