Taking Jenkins CI from Automation to Orchestration – A Continuous Integration A/B Testing Use Case



The benefits of continuous integration are clear. Deployment cycles that once took six months to a year can now be shortened dramatically, allowing companies to deploy code numerous times per day. However, with this massive volume of daily deployments comes a growing need to take the automation of the whole process to the next level, while ensuring it remains extensible, scalable, and monitorable.
Continuous integration processes have also become more complex, with different use cases and scenarios that need to be supported – from building a robust CI pipeline through continuous delivery, to A/B testing deployments before committing to production, among others.


This is where Cloudify comes in. In this post, I'm going to dive into a more complex use case that enables you to orchestrate the entire CI process using Jenkins, A/B test and monitor the whole process throughout, and, last but not least, choose the code you'd like to deploy to production.

A/B Testing with Jenkins & Cloudify

The architecture we chose for this demo consists of three environments – two for testing and one for production. Let's call them A/B test 1, A/B test 2, and Production.
Here’s an image of the environment:
To start with, we needed to create a single main (primary) Cloudify Manager as the single point of control for these three environments. Under it, each environment is controlled by its own additional Cloudify Manager, deployed alongside a Jenkins server.
(Note: Cloudify's R&D actually uses Travis CI and CircleCI for such procedures, but this same scenario can be run with any CI tool, such as QuickBuild, TeamCity, etc.)
This was actually the first time we attempted to deploy a Cloudify Manager with another Cloudify Manager, but lo and behold – success. Drinking our own merlot worked, with some tweaking. This way, each Cloudify Manager inside the "main" manager can manage various deployments independently of any of the other managers.
As an aside, for the purpose of this use case we had to slightly modify the Cloudify Manager blueprint to disable multicast in Elasticsearch (which is part of the Cloudify Manager's stack). This prevents communication and synchronization between the Cloudify Managers, which all reside on the same network or subnet – and keeping these environments separate was important in this case.
Here's the modified manager blueprint snippet that disables multicast:
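In essence, the change turns off Elasticsearch's multicast zen discovery so that each manager's Elasticsearch node only ever sees itself. Here's a minimal sketch of the effective configuration, assuming the Elasticsearch 1.x settings of that era (the exact key path inside the manager blueprint varies by Cloudify version):

```yaml
# elasticsearch.yml fragment (Elasticsearch 1.x settings) -- shown here as the
# effective configuration the blueprint override produces
discovery.zen.ping.multicast.enabled: false
# with multicast off, each manager's Elasticsearch only talks to itself
discovery.zen.ping.unicast.hosts: ["localhost"]
```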

This deployment was run on Softlayer (IBM's cloud), so we took a basic Softlayer blueprint and bootstrapped (launched) the main manager, then created the three Cloudify environments by deploying three managers within it: one for A/B test 1, one for A/B test 2, and a third for production.  Next, we created a Jenkins blueprint and configured it to start with three Jenkins jobs:
Two scheduled Jenkins jobs configured to sample a GitHub repository for the testing environments and one idle (ready to be invoked) Jenkins job for the production environment.
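The polling trigger for the two scheduled jobs can be expressed directly in each job's config.xml; a sketch (the repository URL is a placeholder, and `*/15 * * * *` is Jenkins cron syntax for "every 15 minutes"):

```xml
<!-- config.xml fragment for each A/B test job: poll the GitHub repo
     every 15 minutes; the repository URL here is illustrative -->
<scm class="hudson.plugins.git.GitSCM">
  <userRemoteConfigs>
    <hudson.plugins.git.UserRemoteConfig>
      <url>https://github.com/your-org/your-app.git</url>
    </hudson.plugins.git.UserRemoteConfig>
  </userRemoteConfigs>
</scm>
<triggers>
  <hudson.triggers.SCMTrigger>
    <spec>*/15 * * * *</spec>
  </hudson.triggers.SCMTrigger>
</triggers>
```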
Here’s an image that shows the Jenkins role in this scenario:

How it Works

This scenario works by sampling the GitHub repository every 15 minutes to detect any changes; detected changes are then pushed to the two testing environments (A/B test 1 and A/B test 2, respectively).
When a change is detected in the repository, job #1 deploys it on Cloudify manager #1, and job #2 deploys it on Cloudify manager #2.  In effect, for each code push to GitHub, Jenkins performs two jobs simultaneously, each spawning its own independent deployment on its corresponding Cloudify manager.
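The build step each test job runs can be sketched as a short script against that job's Cloudify manager (Cloudify 3.x CLI syntax; the blueprint path and IDs are illustrative, not the actual repository's):

```shell
#!/bin/bash
# Sketch of the build step each A/B test job runs against its own manager
# (Cloudify 3.x CLI; blueprint path and deployment IDs are illustrative)
set -e
MANAGER_IP=$1    # the manager for A/B test 1 or A/B test 2
BUILD_ID=$2      # e.g. the Jenkins BUILD_NUMBER, to keep deployments versioned

cfy use -t "$MANAGER_IP"
cfy blueprints upload -b "drupal-$BUILD_ID" -p drupal-blueprint/blueprint.yaml
cfy deployments create -b "drupal-$BUILD_ID" -d "drupal-$BUILD_ID"
cfy executions start -w install -d "drupal-$BUILD_ID"
```

Using the build number as the deployment ID is what later makes it possible to keep several versions side by side and pick one for production.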
The third environment intentionally behaves slightly differently: it does not run any commands automatically, keeping the production job semi-automated. Since this is essentially an A/B testing scenario, the manual intervention enables you to compare the two testing environments and choose the version you'd like to deploy to production. Once the admin decides on the preferred version, they hit a button that invokes a Cloudify workflow, which accesses Jenkins through the main Cloudify Manager to move the selected version (from A/B test 1 or A/B test 2) to production.
So while at this point we are left with two testing environments with minor differences and a semi-automated production deployment, the production step can ultimately be fully scripted as well, essentially closing the loop from continuous integration to continuous delivery.
In the Jenkins blueprint, we also created a custom workflow that allows users to run any Jenkins CLI command from the Cloudify CLI, so you don't even have to access the Jenkins CLI directly to run commands (though you still can).  This is a great example of how Cloudify's workflow engine can be utilized to create custom workflow automation that remotely accesses your environments without the need for SSH or any other agents (you can, however, still use those if you prefer).
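The core of such a workflow can be sketched as an operation that shells out to Jenkins's own CLI jar on your behalf. This is a hypothetical sketch, not the actual blueprint's code – the function and parameter names are illustrative:

```python
# Illustrative sketch of a workflow operation that forwards an arbitrary
# command to the Jenkins CLI; all names here are hypothetical, not taken
# from the actual Jenkins blueprint.
import subprocess


def build_jenkins_cli_args(jenkins_url, command):
    """Assemble the java invocation for jenkins-cli.jar.

    jenkins-cli.jar is served by every Jenkins server under /jnlpJars/.
    """
    return ["java", "-jar", "jenkins-cli.jar", "-s", jenkins_url] + command.split()


def run_jenkins_command(jenkins_url, command):
    """Run a Jenkins CLI command (e.g. 'build production-deploy')
    and return its output."""
    return subprocess.check_output(build_jenkins_cli_args(jenkins_url, command))
```

Wrapped as a Cloudify workflow, this is what lets the admin's "promote to production" button trigger the idle production job remotely.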
In this case we chose Drupal, one of the most popular CMS platforms, as our application, so every deployment on each of the Cloudify managers is comprised of three tiers: Apache, MySQL, and Memcached.
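In blueprint terms, the three-tier topology can be sketched roughly like this (Cloudify 3.x DSL built-in node types; the node names and single-host layout are illustrative, not the actual blueprint's):

```yaml
# Hypothetical sketch of the Drupal application's topology
node_templates:
  host:
    type: cloudify.nodes.Compute        # the VM provisioned on Softlayer
  mysql:
    type: cloudify.nodes.DBMS
    relationships:
      - { type: cloudify.relationships.contained_in, target: host }
  memcached:
    type: cloudify.nodes.ApplicationServer
    relationships:
      - { type: cloudify.relationships.contained_in, target: host }
  apache:
    type: cloudify.nodes.WebServer
    relationships:
      - { type: cloudify.relationships.contained_in, target: host }
  drupal:
    type: cloudify.nodes.ApplicationModule
    relationships:
      - { type: cloudify.relationships.contained_in, target: apache }
      - { type: cloudify.relationships.connected_to, target: mysql }
      - { type: cloudify.relationships.connected_to, target: memcached }
```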
Here's an image that shows how the Drupal application is deployed by Jenkins:
This example also demonstrates how you can leverage all of the added Cloudify capabilities that come out of the box to extend Jenkins functionality and deliver robust CI through CD, including everything required for post-deployment – a critical element in continuous delivery processes.  You can essentially monitor not just the application itself, but the entire lifecycle of the build and deployment processes (e.g. success/failure), and define custom metrics based on your deployment through Cloudify's policy engine.
This enables intelligent analysis of your system, however you built it: you can make decisions based on collected log files and other KPIs, receive alerts, and have your system auto-heal and auto-scale based on these policies.  What's more, being infrastructure agnostic makes it possible to run this example on any VM, bare metal server, or even in Docker containers.
Another important element in continuous integration is cleanup – not leaving dirty machines behind you.  This too can be scripted into your blueprint, enabling you to undeploy each Jenkins job, the job's results or actual deployments, and its corresponding Cloudify manager.  Some other cool tweaks we made enable versioning deployments with a manual cleanup process, making it possible to choose any version to move to production at any time.
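The teardown of a single test deployment can be sketched as the mirror image of the install step (Cloudify 3.x CLI; the deployment ID is illustrative):

```shell
#!/bin/bash
# Sketch of the cleanup for one versioned test deployment
# (Cloudify 3.x CLI; the deployment ID is illustrative)
set -e
DEPLOYMENT=$1    # e.g. drupal-42

cfy executions start -w uninstall -d "$DEPLOYMENT"   # tear the app down
cfy deployments delete -d "$DEPLOYMENT"              # remove the deployment
cfy blueprints delete -b "$DEPLOYMENT"               # and its blueprint
```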
There's plenty you can do with Cloudify and Jenkins – I've only just started hacking some nifty capabilities. Feel free to comment and share any other ideas you may have for getting the most out of your CI.
This whole implementation, including the Jenkins blueprint, is also available on GitHub.
Watch the video of this in action below:



Founded in 2012, Cloudify has robust financial backing from Intel Capital, VMware, BRM Group, Claridge and other leading strategic and financial investors. Cloudify has headquarters in Herzliya, Israel, and holds offices across the US and Europe.