
CI/CD Pipelines, Workflows, and Benefits – The Complete Guide

Background

Modern agile development practices are built on a philosophy of incremental change, frequent releases, automated testing, and an always-functioning code base. To iterate quickly, traditional manual methods have given way to automation. The sooner flaws are discovered, the smaller the resulting blast radius and the higher the quality of the code. Continuous integration (CI) is an automated process that runs test suites frequently, typically triggered by the introduction of newly finished code. By running both low-level (unit) tests and integration tests as code is submitted, CI protects the code base against the introduction of errors (to the degree that the tests themselves are of high quality) and lets the developer take action while their memory is still fresh. Beyond unit testing and module integration testing, CI and test automation can automatically trigger end-to-end testing, to the extent that a usable testing environment is available on either enterprise data center hardware or public cloud resources.

The next step beyond having frequently (continuously) tested code is to automate the delivery and deployment (CD) processes so users can get their hands on fixes and updates as quickly as possible. Delivery in this context refers to the creation of a deployable package, or packages, sufficient for deployment into an operational environment. Deployment refers to actually deploying into production, or potentially into a staging environment.

Once you put together continuous integration and testing (CI) with continuous delivery or deployment (CD), you have what is referred to as CI/CD automation. CI/CD where the D means delivery is an engineering automation practice that can truly run continuously (on each significant change), and it is by far the most common implementation of CI/CD. Few businesses can (or would) tolerate a fully automated release process straight to production, so CI/CD where the D means deployment to production is a rarity. A Jenkins job, for example, doesn’t push iOS updates to your phone based on a code merge. However, deployment to a staging or testing environment, where end-to-end operational, security, and user testing can be done (both manual and automated), is a common and valuable practice. CI/CD is, at bottom, change management, streamlined to deliver the highest quality customer value in the shortest amount of time.

CI/CD Pipelines

The CI/CD process is automated as a sequence of steps called a CI/CD pipeline, or more rarely a CI/CD workflow. It’s a pipeline rather than a tree or graph because it is linear. For example, a code merge triggers a build, followed by unit tests, followed by quality measurement, followed by security scanning, and finally followed by delivery and possibly deployment (and further testing). Each step in the pipeline requires the success of its predecessor; a failure at any step aborts the pipeline. It is common for individual projects to have their own pipelines, which can run in parallel to maximize the overall performance of the CI/CD process. The head of the CI/CD process is ideally an event in the source code control system, typically a merge/pull request; this is the natural origin of the automated pipeline. The tail of a pipeline is either a deployable package or an actual deployed system. The pipeline consumes incremental code changes and emits well tested, deployable products. A CI/CD pipeline for Kubernetes will not fundamentally differ from a pipeline that deploys to virtual machines.
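
As a concrete illustration, here is a minimal sketch of such a linear pipeline in GitLab CI syntax. The stage names mirror the example above, and the `make` targets are hypothetical placeholders for a project's own build, test, lint, scan, and packaging commands:

```yaml
# .gitlab-ci.yml -- stages run strictly in order; a failure in any
# stage aborts the pipeline before later stages run.
stages:
  - build
  - unit-test
  - quality
  - security
  - deliver

build-job:
  stage: build
  script: make build        # hypothetical build target

unit-test-job:
  stage: unit-test
  script: make test

quality-job:
  stage: quality
  script: make lint

security-job:
  stage: security
  script: make scan

deliver-job:
  stage: deliver
  script: make package      # emits the deployable artifact
```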

As we’ll see later, the CD part of CI/CD isn’t always conducted by the same tool that does the CI (building and testing). The CD process is where the newly built system approaches production, so full automation may be a bridge too far. Products that specialize in the CD part of the process have emerged, dealing with release management and approvals (automatic and manual). These tools may have their own pipelines, representing the concerns of the operations side of DevOps, with special attention to end-to-end testing, load testing, live security testing, and the aforementioned release management and approvals process.

CI/CD Architecture

Three main pipeline architectures are common, progressing from simple and static to complex and dynamic.

Static Linear

The simplest kind of pipeline, static and linear, is what has been discussed so far. Pipelines are built for a specific product’s and team’s needs and expressed as a fixed series of steps. The entire system is first built, then tested, then released or deployed. This can be inefficient, but it is acceptable for smaller projects.

Dynamic Graph

In this kind of architecture, pipelines are assembled dynamically by the CI/CD system. Pipelines are decomposed into their individual steps, and dependencies between steps are specified. The CI/CD system examines the dependencies and assembles the steps on demand. This can produce great efficiencies in larger projects that have many interrelated components: rather than blindly rebuilding a shared dependency, the CI/CD system can cache it so that it is built only once, regardless of the number of dependent components.
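
The idea can be sketched in GitHub Actions syntax, where `needs` declares each job's prerequisites and independent jobs run in parallel (a hedged sketch; the `make` targets are hypothetical):

```yaml
# A dependency graph rather than a straight line: build-lib runs once,
# the two dependent app builds run in parallel, and integration tests
# run only after both succeed.
name: graph-build
on: push
jobs:
  build-lib:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make lib              # shared component, built once
  build-app-a:
    needs: build-lib
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make app-a            # runs in parallel with build-app-b
  build-app-b:
    needs: build-lib
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make app-b
  integration-test:
    needs: [build-app-a, build-app-b]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make integration-test
```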

Pipeline Dependencies

The pipeline dependency architecture is similar to the dynamic graph, except that the CI/CD system orders entire pipelines rather than individual steps. In addition, pipelines can be triggered by other pipelines in a parent/child arrangement. For larger systems, this permits componentization and allows cooperating teams to develop their own pipelines, which can then be composed into larger build systems. Like the dynamic graph approach, this encourages reuse and efficiency, as the parent/child sketch below illustrates.
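
GitLab CI expresses this arrangement directly with child pipelines; here is a minimal, hedged sketch (the component paths and `make` target are hypothetical):

```yaml
# Parent pipeline (.gitlab-ci.yml): each component team owns its own
# child pipeline definition, and the parent composes them.
stages:
  - components
  - assemble

frontend:
  stage: components
  trigger:
    include: frontend/.gitlab-ci.yml   # child pipeline owned by one team
    strategy: depend                   # parent waits for the child's result

backend:
  stage: components
  trigger:
    include: backend/.gitlab-ci.yml
    strategy: depend

assemble:
  stage: assemble
  script: make assemble                # hypothetical composition step
```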

Benefits of CI/CD Automation

CI/CD represents a long running revolution in software engineering practices. Traditional software engineering, inherited from hardware engineering and architecture, follows a philosophy of extensive, detailed up-front specification. It’s followed by a potentially lengthy development phase and then by an integration and testing phase culminating in deployment.

In those early days, computing power was anemic relative to today, tools were primitive, hardware was expensive and scarce, and builds could take many hours. One of the biggest challenges of this waterfall methodology was that the system had to be perfectly visualized in advance (which it never could be). This typically resulted in a period of rework and schedule slips. The methodology also provided poor visibility into the progress of teams, which could claim progress on key milestones without having to provide concrete evidence.

The microprocessor revolution exponentially improved the price and performance of computers and made frequent builds feasible. The nightly build was born. Along with it came the practice of iterative development, which effectively broke the waterfall into smaller waterfalls and greatly improved visibility into the state of software projects. This reduced delivery surprises and failures and laid the conceptual groundwork for agile development practices. With agile methodologies, the nightly build became the merge build, and automated testing was added to the build process. This created the first CI pipelines and the ecosystem of tools we see today. None of it would have happened without the incredible performance gains of microprocessors, which also made virtualization feasible, gave birth to cloud computing, and enhanced the ability to do high quality testing against server platforms on demand.

New generations of tools continue to automate formerly manual or siloed practices, resulting in unprecedented transparency into the quality of systems under development. This shift-left process reaches its terminus with continuous deployment, which represents complete automation of the development process. If the benefits of CI/CD could be summarized in two words, they would be visibility and velocity. The ability of teams to truly understand the limitations of their creations during the development process leads to much higher quality and enables more ambitious development. The velocity improvement makes feedback to teams quick, and the delivery of customer value becomes more timely and of higher quality.

CI/CD and Testing

As emphasized above, the role of testing is fundamental to the success of CI and CD. Continuous integration, at a minimum, implies function-level testing (unit tests), whereas true integration requires higher level integration tests. This is an area where the requirements of continuous integration and continuous delivery overlap.

In order to perform integration testing, test environments must be made available. Ideally, these test environments are virtual and transient, limiting resource consumption and therefore cost.  This in turn implies a level of infrastructure automation during the integration test. Without performing such tests in realistic environments, the quality of delivered products will suffer.  

Similarly, continuous delivery is also commonly associated with testing. This includes end-to-end tests, capacity tests, security tests, chaos tests, failover and recovery tests, scalability tests, and others. It culminates in blue/green, canary, and progressive deployments, which are critical for avoiding problems in production.

Here too, infrastructure and service automation is needed to construct and tear down realistic environments. Without this kind of test automation, the risks of fully automated deployments will remain too high for most organizations.
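
A hedged sketch of what this automation can look like in a pipeline, assuming a Terraform module describes the test environment and cloud credentials are already configured as secrets (the paths and targets are hypothetical):

```yaml
# Ephemeral integration-test environment: provision, test, tear down.
# The environment exists only for the duration of the run, limiting cost.
name: integration-test
on: pull_request
jobs:
  integration-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init && terraform apply -auto-approve
        working-directory: infra/test-env    # hypothetical IaC module
      - run: make e2e-test                   # hypothetical end-to-end suite
      - if: always()                         # tear down even when tests fail
        run: terraform destroy -auto-approve
        working-directory: infra/test-env
```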

To achieve the highest level of test automation, a flexible platform that integrates with popular ecosystem tools and constructs test environments on demand is a prerequisite. Cloudify is such a platform, integrating northbound with GitHub Actions, Jenkins, CircleCI, and GitLab CI/CD, and southbound with popular tools like Terraform, Ansible, CloudFormation, and ARM, along with native integration with all popular cloud platforms and Kubernetes. Using the Cloudify orchestrator as both an orchestrator and an integration platform glues together CI and CD, while allowing organizations to leverage existing investments and stay open to the ever-evolving ecosystem of CI/CD tooling.

GitOps or CD for Infrastructure as Code

Git has risen in recent years to become the preeminent source code repository tool. GitOps is a term that means operations automation driven by events in Git repositories. The concept is not tied to any specific product (not even Git itself), but it is exemplified by GitHub Actions and GitLab CI/CD. As previously mentioned under the topic of continuous integration, changes in the source code repository (e.g. merges) trigger actions (in that case, builds and automated testing). With GitOps, infrastructure definitions are stored as source code, and merges result in infrastructure changes. This is the essence of Infrastructure as Code, or IaC.
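
A minimal, hedged sketch of the GitOps trigger in GitHub Actions form: merges to main that touch infrastructure templates are applied automatically, making the repository the source of truth (the directory layout and credential handling are assumptions):

```yaml
# Merge-driven infrastructure change: the repository is the single
# source of truth, and the pipeline reconciles reality with it.
name: gitops-apply
on:
  push:
    branches: [main]
    paths: ['infra/**']      # only infrastructure changes trigger a run
jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
        working-directory: infra
      - run: terraform apply -auto-approve
        working-directory: infra
```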

The IaC approach connects source code control with dynamic (i.e. virtual or cloud) infrastructure. In this scheme, the source code control system is elevated to direct control over operations and serves as the single source of truth for infrastructure. As such, it lives mainly in the realm of CD, or continuous deployment. Because infrastructure changes are unavoidable, and automated tests for them are difficult to create, a manual change review process is vital to the success of any GitOps process. The same review process exists for conventional, non-infrastructure code, but the potential consequences of infrastructure changes (and the resulting blast radius) make reviews especially critical for IaC. IaC lends itself best to declarative infrastructure templates rather than imperative code like shell scripts. Declaratively driven systems are easier to understand and automate away many details that might otherwise trip up code reviewers. While this lowers the cognitive burden on infrastructure developers, it doesn’t eliminate it, as they must be experts in the behavior of the declarative systems they operate. This style is typified by Kubernetes manifests, Terraform templates, and of course Cloudify blueprints.
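
The declarative style is easy to see in a minimal Kubernetes manifest like the sketch below (the names and image are hypothetical): the author states the desired end state, and the platform works out the imperative steps needed to reach and maintain it.

```yaml
# Desired state, not a procedure: Kubernetes continuously converges
# the cluster toward three running replicas of this container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                            # hypothetical application name
spec:
  replicas: 3                          # the reviewer reads intent, not steps
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.2.3   # hypothetical image
```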

Cloudify plays a key role in infrastructure and application orchestration for GitOps via its integrations with GitHub Actions, GitLab CI/CD, and CircleCI. In the case of GitHub, specific actions are available in the GitHub marketplace. In the case of CircleCI, a CircleCI Orb is available through their marketplace that can be used in pipelines. The GitLab CI integration is CLI oriented, using the Cloudify CLI. Each of these integrations provides remote manipulation (and even instantiation) of a Cloudify server, in order to create and update deployments and run arbitrary workflows, including custom workflows that perform automated testing.

The Cloudify Terraform integration reached a high level of sophistication in the latest release (6.4). Previously, Cloudify introduced the ability to accept and operate native Terraform modules. This capability has been streamlined with automatic detection of variables and outputs and better secret store integration. Beyond native template ingestion, Cloudify has introduced resource tagging, which allows correlation between cloud resources and the provisioner (for both Cloudify provisioned and Terraform provisioned resources). Infracost, a popular tool for estimating the cost of Terraform provisioned resources, is now built into Cloudify’s Terraform plugin. Finally, TFLint and TFSec have been added to the plugin, enabling automatic template linting and security scanning respectively.

CI/CD Tools Overview

A vibrant ecosystem of tools has emerged to support CI/CD and CI/CD pipeline management. This section looks at some of the most important. CI/CD encompasses tools from code version management to production deployment and everything in between; this list excludes version control itself and considers what happens after the merge or release.

Jenkins

Jenkins is a free, open source CI tool available on GitHub. It is by far the most mature solution of this bunch, with its roots in Sun Microsystems. One virtue of its long tenure is a vast selection of plugins (including one for Cloudify) along with a large number of built-in features. Unlike some tools reviewed here, Jenkins is a self-managed, on-premises product, which can have security benefits. Because it’s been around so long, there are a lot of videos and tutorials available for learning, both free and otherwise.

Jenkins isn’t limited to a particular software version control system and is compatible with Git, Subversion, and Mercurial. It is also scalable, supporting parallel builds through a controller/agent (formerly master/slave) architecture, and it supports defining CI/CD pipelines as code (textual) so build automation can be iterated like the rest of the code. Jenkins is written in Java, which makes it platform independent and easy to install, and it has a great community.

Cloudify has an official Jenkins plugin that defines build steps for Cloudify operations such as blueprint upload, environment creation, and workflow execution, along with an ability to start a Cloudify manager dynamically for the duration of build execution. It allows the linking of several Cloudify workflow executions by mapping the outputs of one step to the inputs of another. Significantly, it provides the ability to process artifacts such as Ansible playbooks and Terraform modules automatically, by utilizing Cloudify’s corresponding plugins. The build steps accept artifacts such as ARM and CloudFormation templates without modification and run them in their native environments.

Spinnaker

Spinnaker is an open source, continuous deployment automation tool originating at Netflix. It provides pipeline definition and execution, supports VM and container based deployments, is compatible with the major public clouds, and is widely used.

Spinnaker automates releases by running pipelines in response to triggers from multiple sources including Jenkins, Travis CI, Docker, and others. Spinnaker can facilitate GitOps, but it isn’t a platform focused on Git integration as a primary use case (although it supports Git triggers). Its pipeline step vocabulary can be extended to add new kinds of steps. Spinnaker is mature, with a full featured UI and REST API. It supports deployment strategies like blue/green and canary. Pipelines can be executed in parallel and can include manual approvals and execution windows to manage deployment schedules.

Spinnaker was open sourced by Netflix in 2015 with an Apache 2.0 license and a GitHub repository, and features community support. It has been used by thousands of teams across millions of deployments over the years. Spinnaker is used by Google Cloud Build (see below) to automate the deployment process.

CircleCI

CircleCI is a cloud and on-premises based CI server that integrates with GitHub and Bitbucket, allowing teams to set up automated build, test, and deployment pipelines for their applications. CircleCI can be configured to deploy to various cloud platforms including AWS CodeDeploy, AWS EC2 Container Service (ECS), AWS S3, Google Kubernetes Engine (GKE), Microsoft Azure, and Heroku, and it can integrate with other deployment environments via shell scripts.

In its SaaS incarnation, CircleCI eliminates installation, server startup, and maintenance. Software updates occur automatically, and scale is open ended. On the negative side, depending on your needs, having a hosted service may introduce unacceptable security risks.

The on-premises version (CircleCI Enterprise) deploys CircleCI in a HashiCorp Nomad cluster and provides an option for organizations that desire more control and security.

Cloudify provides a CircleCI Orb for implementing CI using a Cloudify Manager. The Orb defines a long list of Cloudify related functionality, including uploading and executing blueprints, running arbitrary Cloudify CLI commands, running workflows, updating deployments, and executing native templates from other orchestrators, including ARM, CloudFormation, Terraform, and Kubernetes. The Orb provides an easy to use, multi-cloud, infrastructure as code interface that can serve as the foundation for integration testing in realistic environments.
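
For readers unfamiliar with Orbs, the general consumption pattern looks like the hedged sketch below. The orb reference and command name here are hypothetical placeholders; consult the CircleCI marketplace listing for the actual Cloudify orb identifiers and parameters.

```yaml
# .circleci/config.yml -- importing an orb makes its packaged commands
# available as pipeline steps.
version: 2.1
orbs:
  cfy: cloudify/cloudify@1.0        # hypothetical orb name and version
jobs:
  provision:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - cfy/create-deployment:      # hypothetical orb command
          blueprint: blueprints/app.yaml
workflows:
  deploy-and-test:
    jobs:
      - provision
```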

AWS CodeCommit

AWS CodeCommit is actually just one piece of the Amazon solution for CI/CD, which also includes AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline. Other AWS services involved include AWS CloudWatch, S3, and KMS.

Somewhat like GitHub Actions, AWS CodeCommit (the Git client compatible code repository) triggers build events received by CodePipeline. These events (provided by CloudWatch) flow to CodeBuild, which builds and runs tests, storing the artifact(s) produced on S3. Then, assuming all is well, the artifact is deployed using CodeDeploy. CodePipeline can also use S3 and GitHub as sources.

AWS CodeBuild supports Linux and Windows (in certain regions). Deployments normally target AWS, but deploying to other environments is possible. The suite is very much focused on the AWS ecosystem, and high quality integrations are built into the tools.

GitHub Actions CI/CD

GitHub Actions CI/CD is a service provided by GitHub built upon the GitHub Actions feature, which allows users to respond to events occurring in GitHub. GitHub lets you run builds, tests, and deployments on their cloud servers, or on your own servers using the self-hosted runners feature.

GitHub Actions CI/CD supports the most popular operating systems and languages. It has an open marketplace for integrations that vary in quality. It uses a YAML configuration file that defines jobs to run when user-selected events within GitHub (a push, for example) occur. It supports matrix builds that run multiple build configurations in parallel, as sketched below. Cloudify has released several actions for interfacing with a Cloudify manager.
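
A minimal sketch of such a workflow, assuming a Node.js project (the runtime and versions are illustrative): the matrix expands into one parallel job per operating system and Node version combination.

```yaml
# Four jobs run in parallel: {ubuntu, windows} x {node 18, node 20}.
name: ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
        node: [18, 20]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
      - run: npm ci && npm test
```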

GitHub Actions CI/CD is free for public repositories, and pay as you go for private repositories, but of course, only works with GitHub. GitHub also provides an on-premises server for those wanting more control. GitHub Enterprise is delivered as a virtual appliance that can run in high availability (failover) mode if desired.

Cloudify provides a long list of GitHub actions available through the GitHub marketplace. Actions include uploading and deploying blueprints, executing blueprint workflows, running arbitrary Cloudify CLI commands, updating deployments, and executing native templates from other orchestrators, including ARM, CloudFormation, Terraform, and Kubernetes. The Cloudify GitHub actions are the building blocks of a true GitOps toolchain, which extends beyond infrastructure automation to multi-cloud service automation and comprehensive automated testing.

Octopus Deploy

Octopus Deploy is a dedicated CD server that addresses key concerns of this phase of CI/CD, including release management and approvals. Octopus aims at smoothing out the release and deployment process for complex deployments and routine operations while providing auditing data and multi-cloud, Kubernetes, and on-premises support.

Octopus leverages a project based approach along with the idea of the runbook. Runbooks represent automation tasks as building blocks, ultimately implemented as scripts. As such the system goes against the grain of many other tools by not being declarative in nature. In Octopus, runbooks are granted permissions over infrastructure and are contained in projects. All runbook execution is audited. Runbook execution can be scheduled, and integration with ServiceNow provides ITSM integration for approvals. A mechanism for automated approvals is also provided. Release management eases the promotion of deployments across environments.

Octopus is closed source and is available as a managed service or as a local server for those desiring more control. It has no free tier but does have an inexpensive community version.

Argo CD

Argo CD is an open source, declarative GitOps continuous deployment tool focused on Kubernetes as a target. It is a Cloud Native Computing Foundation (CNCF) incubating project (accepted in 2020). It provides support for teams wishing to implement Infrastructure as Code (IaC) in a Kubernetes environment. Argo CD itself runs inside a Kubernetes cluster and is triggered by webhook events from a Git repository. Argo CD is the basis for the Red Hat OpenShift GitOps product.

Argo CD strives to synchronize the desired state represented by the associated Git repository with the equivalent manifests in Kubernetes, generating them on demand using the desired templating tool. Automatic synchronization is optional via repository integration, and users can register hooks for related lifecycle events. It supports Kustomize and Helm for project templating. It can manage multiple clusters and supports RBAC and SSO integration. Argo Workflows provides a cloud native, Kubernetes based workflow engine that can be used to automate simple CI/CD pipelines; each step in the workflow is implemented by a container and can be expressed as a simple pipeline or as a graph, as described earlier.
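
The core unit of configuration is the Application resource; the sketch below is a minimal example with hypothetical repository and namespace names:

```yaml
# An Argo CD Application: keep the target namespace in sync with the
# manifests at the given path in the Git repository.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app.git   # hypothetical repo
    targetRevision: main
    path: deploy            # plain manifests, Kustomize, or a Helm chart
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:              # optional: sync without manual intervention
      prune: true
      selfHeal: true
```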

Argo CD is an Apache 2.0 licensed open source project and is community supported via its active GitHub repository. It’s also available with commercial support via Akuity and Red Hat via their Openshift Gitops product.

Crossplane

Crossplane gets its name from its mission statement: provide a framework for creating cloud native control planes. Like Argo CD, it is an open source, community driven CNCF incubating project (accepted in 2020). While the project itself is cloud native, it isn’t limited to controlling Kubernetes. As a framework, it aspires to let users construct their own control planes rather than starting from scratch. Crossplane isn’t a tool for enabling GitOps, but rather a tool for easily creating Kubernetes based APIs (the control plane).

Crossplane provides a mechanism to abstract low level Kubernetes controllers (providers) into tailored APIs for users or developers. A Crossplane provider for a public cloud, say GCP, provides the ability to lifecycle-manage resources in GCP, including security, networking, storage, etc. Crossplane enables the generation of APIs (Kubernetes CRDs) that can represent a collection of such resources, or even resources from multiple clouds. For example, a cloud developer environment might include the definition of a router, security group, subnet, and multiple VMs; a Crossplane package can create a CRD that represents that environment simply, while handling the details of configuration internally. From the Cloudify perspective, it is somewhat like a blueprint generator. Crossplane isn’t necessarily a CI/CD tool, but it can provide a simplifying layer for CD.
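
Once a platform team has defined such a composite resource type, consuming it takes only a few lines of YAML. The sketch below is hedged and hypothetical throughout: the API group, kind, and fields all depend on the definitions the platform team publishes.

```yaml
# A developer's claim against a hypothetical "DevEnv" API: Crossplane's
# providers create and manage the underlying router, subnet, security
# group, and VMs behind this simple interface.
apiVersion: example.org/v1alpha1   # hypothetical group defined by the platform team
kind: DevEnv
metadata:
  name: team-a-dev
spec:
  vmCount: 3
  region: us-east1
```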

Crossplane is an Apache 2.0 licensed open source project and is community supported via its GitHub repository. Upbound.io provides enterprise support for Crossplane.

Flux CD

Flux CD is a GitOps continuous delivery (CD) product. Like others in this list, it is a Cloud Native Computing Foundation (CNCF) incubating project. Like Argo CD, it is focused on automating deployments to Kubernetes. It is open source, openly governed, and hosted on GitHub. Flux CD has a sister project, Flagger, that enables progressive delivery to Kubernetes, attempting to minimize the risk of “big bang” deployments by gradually introducing new code and gradually shifting workloads.

Flux CD supports continuous deployment of both applications and infrastructure (it doesn’t formally distinguish between them) by being Kubernetes manifest agnostic and by providing a built-in mechanism for dependency management. Flux CD supports the major Git repository providers, including GitHub, GitLab, and Bitbucket, along with container registries and CI workflow providers. It supports Kustomize and Helm and is designed to be extensible and customizable. Flux CD (and Flagger) are Apache 2.0 licensed, community supported open source projects.
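
A hedged sketch of the two core Flux objects (the API versions follow recent Flux releases; the repository URL and the prerequisite Kustomization are hypothetical):

```yaml
# A source to watch, plus a Kustomization that applies manifests from it.
# dependsOn is Flux's built-in dependency management between Kustomizations.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/my-app    # hypothetical repository
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: my-app
  path: ./deploy
  prune: true                 # remove cluster objects deleted from Git
  dependsOn:
    - name: infrastructure    # hypothetical prerequisite Kustomization
```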

CI/CD Automation in the Cloud

Besides the products mentioned above (and many more), the hyperscalers all have CI/CD automation solutions. These all-you-can-eat cloud options offer quicker startup and no integration effort for CI/CD automation, at the expense of having to accept their tooling choices and the resulting lack of customizability. All of these products are SaaS.

Google Cloud

Google Cloud provides three main products for continuous integration and deployment. Google Cloud Source Repositories are hosted Git repositories for managing code. Users can also use GitHub and Bitbucket via supported integrations with Google Cloud Build.

Google Cloud Build uses its own repository-based CI definition format for continuous integration, offers an optional Spinnaker integration for continuous deployment pipeline definition and execution, and exploits Google’s vast cloud computing resources to accelerate builds. It can also use other public clouds or on-premises compute resources for builds. Cloud Build can target Google environments such as Google Kubernetes Engine, Google App Engine, the serverless Cloud Functions platform, and the Firebase mobile platform. It can also perform SLSA level 2 compliance scanning at build time. Google Artifact Registry can be used to store images and deployment packages.
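
The build definition lives in a cloudbuild.yaml file in the repository; here is a minimal sketch (the image name is a placeholder, and $PROJECT_ID and $SHORT_SHA are standard Cloud Build substitutions):

```yaml
# Build a container image with the Docker cloud builder and push it to
# the project's registry when the build succeeds.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
images:
  - 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA'
```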

Microsoft Azure

Microsoft offers the Azure DevOps products for automating CI/CD. Azure DevOps combines both continuous integration and continuous delivery to test, build, and deploy projects. It starts with source code management by Git or Team Foundation Version Control in Azure Repos; Azure DevOps also has an integration with GitHub. As will be familiar by now, pipelines defined in a pipeline management tool (in this case Azure Pipelines) react to events in Git (typically a merge) and execute build steps, including automated tests. A slightly different twist for Azure is the inclusion of continuous deployment in the same Azure Pipelines tool. For build product storage, Azure provides Azure Artifacts. For continuous delivery, Azure Pipelines supports Azure, GCP, and AWS.
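
A minimal azure-pipelines.yml sketch of that combined CI and CD model (the `make` targets are hypothetical placeholders):

```yaml
# A merge to main triggers a build stage followed by a deployment stage
# in the same pipeline definition.
trigger:
  branches:
    include: [main]
pool:
  vmImage: ubuntu-latest
stages:
  - stage: Build
    jobs:
      - job: build
        steps:
          - script: make build && make test   # hypothetical CI steps
  - stage: Deploy
    dependsOn: Build                          # runs only after Build succeeds
    jobs:
      - job: deploy
        steps:
          - script: make deploy               # hypothetical deployment step
```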

Azure DevOps is free for open source projects and paid for commercial use. Azure DevOps doesn’t force users into its product suite; rather, each product can be consumed independently and integrated with external tools.

Amazon Web Services

Amazon Web Services (AWS) offers a suite of services for implementing CI/CD processes: AWS CodeCommit (hosted Git), AWS CodeBuild, and AWS CodePipeline. Similar to previous examples, the CI/CD development workflow starts at a source code repository (Git), events from which trigger the build and test system, in this case AWS CodeBuild, a managed continuous integration service. CodeBuild is controlled via a buildspec.yaml file in the target repository. This file defines a continuous integration pipeline, with steps for building and testing the source code, the product of which is typically a deployment package such as a container image. Once the continuous integration step is complete, AWS CodePipeline is responsible for final release packaging and deployment to target environments. AWS CodePipeline also supports manual approvals.
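
A minimal buildspec.yaml sketch (the runtime and packaging script are illustrative assumptions):

```yaml
# CodeBuild runs the phases in order and hands the declared artifact
# to CodePipeline for release and deployment.
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 18          # assumed runtime for this example
  build:
    commands:
      - npm ci
      - npm test
      - npm run package   # hypothetical packaging script
artifacts:
  files:
    - dist/**/*
```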

As was the case for Azure DevOps, each product in the CI/CD process is usable independently of the others and has its own integrations to facilitate this. For example, AWS CodePipeline can integrate with GitHub, Jenkins, TeamCity, Snyk, and others.


Multiple Environment CI/CD

Multi-environment CI/CD refers to the targeting of multiple deployment environments by CI/CD pipelines. This may take the form of the very common dev, user acceptance testing (UAT), staging, and production tuple, as sketched below. It can also refer to complex multi-cloud or hybrid-cloud targets spanning on-premises and multiple public cloud platforms.
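
A hedged sketch of environment promotion in GitHub Actions terms (the deployment command is hypothetical, and the approval gate on production is configured in the repository's environment settings rather than in this file):

```yaml
# Sequential promotion: dev, then staging, then production. Protection
# rules on the "production" environment can require manual approval.
name: promote
on: workflow_dispatch
jobs:
  deploy-dev:
    runs-on: ubuntu-latest
    environment: dev
    steps:
      - run: make deploy ENV=dev          # hypothetical deploy command
  deploy-staging:
    needs: deploy-dev
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - run: make deploy ENV=staging
  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment: production               # approval gate lives in repo settings
    steps:
      - run: make deploy ENV=production
```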

Depending on the application and its dependencies on cloud services, the complexity may begin at the source code level. More likely, you need to test and deploy applications across public clouds.

For Kubernetes, this is simplified at the application layer, since a significant benefit of using Kubernetes is the abstraction of hardware (or virtual hardware), as well as a consistent approach to container management. This pushes the IaC challenges into the cluster provider domain, whether it be EKS, GKE, or AKS (or Kubernetes on bare metal). For VM based applications, multi-cloud tooling that can configure different cloud provider APIs for compute, storage, and networking must be used.

Most continuous deployment solution providers include a cloud portability layer that attempts to assist with deployments. Usually, these are not really focused on an IaC GitOps scenario where there may be multiple targets. This is the typical entry point for tools such as Cloudify that specialize in multi-cloud infrastructure.

Cloudify’s Unique Position in CI/CD

Cloudify’s multi-cloud capability and robust integration with popular tools make it a natural choice for multi-environment CI/CD. Cloudify’s integration with ServiceNow allows integration with ITSM processes, including manual approvals. Add to this Cloudify’s capabilities in service orchestration and environment/service composition, and powerful use cases become available.

Beginning with continuous integration, Cloudify can of course construct realistic test environments, either using its native capabilities or by delegating to infrastructure orchestrators like Terraform. Once the environment is in place, Cloudify can provision arbitrary services or subsystems of interest onto the new environment and launch them. Finally, Cloudify can provision and execute integration tests in the environment (multi-subsystem tests, security tests, performance tests, etc.), capping off a complete CI workflow.

Once integration is complete, many organizations will want to advance their product to the continuous delivery or deployment stage. This may involve evaluating system performance through a qualifying phase with end-to-end testing, then a user acceptance test phase, and finally production. As in the continuous integration phase, environment construction is needed, followed by software installation and configuration, and then various forms of testing (end-to-end, availability and failover, load, security, and so forth). As these tests approach the production level, ITSM workflows (ServiceNow) can be inserted to provide control and visibility. Cloudify’s ability to abstract environments (including non-virtual environments) decouples workloads from environments, making the promotion path smoother for products on their way to production.

The final step is promotion to production. Most organizations will not be comfortable with development teams controlling deployments to production environments. This is where the ServiceNow integration with Cloudify shines. Using it, workflows can drive acceptance testing, provide manual approvals, and perform provisioning and upgrades to production under arbitrary conditions (e.g. deployment windows). With ServiceNow, Cloudify enables IT operations management, where operational decisions such as scaling can be driven, and controlled, by ITSM workflows.

ServiceNow can control a Cloudify Manager cluster from ITSM workflows via the Cloudify application available in the ServiceNow Store. Engineering teams can construct certified environments that are consumed via the ServiceNow service catalog, or environments can be the output of a CI/CD pipeline as discussed previously. With the new resource awareness capabilities of Cloudify, the health of running environments can be assessed, and configuration drift detected, in order to drive appropriate remediation actions, such as the new enhanced heal and update Cloudify workflows.

Key Cloudify 6.4 Features and Enhancements that Impact CI/CD

Cloudify’s latest release, version 6.4, has much to offer for CI and especially CD. This section puts its features into perspective in a concise and practical way.

Infrastructure as Code Support

Infrastructure as code (IaC) is a practice for managing and provisioning infrastructure using configuration files rather than manual processes. This means that the infrastructure the code runs on is itself described by code (usually declarative templates). These templates can be stored in version control like any other code and realized in response to events in the source control system. With the proper plumbing, changes to the templates in source control then appear in the infrastructure. From an operational perspective, these changes equate to Day-2 operations and are more challenging than initial environment construction.

Check Status

In the 6.4 release, Cloudify introduces the check_status workflow, along with a new node operation to support it (cloudify.interfaces.validation.check_status). Plugin implementations (including the existing Cloudify plugins) can implement the check_status operation and return a status that Cloudify associates with the node instance. For example, in the Cloudify AWS plugin, a server instance’s status corresponds to whether it is currently running. Knowing whether a server is healthy is a measure of orchestration intent versus reality, and syncing the two is key to determining whether a heal is needed or an update is possible.

Smart Healing

The pre-6.4 heal algorithm healed unconditionally when supplied a list of nodes or node instances. With the new check_status operation, the heal workflow can be smart about healing without impacting pre-6.4 healing semantics. In Cloudify version 6.4, if a node instance has a health status, it is healed only if the status is not OK. This permits an efficient remediation step in the infrastructure as code process, where infrastructure is updated only when the initial state matches expectations.

CI/CD Impact: The addition of resource statusing and smart healing makes Day-2 healing, including auto-healing, plausible without integration with an event processing system (e.g. Nagios). In fact, with this enhancement, the heal workflow can be run on a schedule across all nodes, with only failing nodes restarted. This makes a full GitOps implementation more attractive than ever, reducing the load on operations teams.

Resource Configuration Awareness and Drift Detection

Besides resource health, understanding current resource configuration is critical for successful infrastructure automation. In Cloudify version 6.4, a new built-in workflow check_drift has been defined that dynamically compares the blueprint defined configuration versus the actual current configuration of managed resources. To support the new workflow, a new check_drift operation has been added to the cloudify.interfaces.lifecycle interface. Plugin authors (or ambitious blueprint authors), following the example of the Cloudify supplied plugins, can support drift detection by implementing this new operation. Implementation guidance can be found in the documentation.

The concept of drift detection is now integral to the concept of resource configuration updates as well. The built-in deployment update workflow will examine the result of drift detection, and skip drifted nodes (unless overridden). Note that the drift detection operation returns details about the drift, which are passed to the new update operation so it can make intelligent decisions.

Sophisticated Deployment Update Architectural Changes

To enhance the functionality of the traditional Cloudify deployment update capability, new operations have been added to the standard cloudify.interfaces.lifecycle interface. Previously, deployment updates examined blueprint property changes to determine which blueprint nodes needed to be updated, and did so by recreating (i.e. deleting and then recreating) the updated nodes. To support cases where non-destructive updates are needed, new operations were added.

In much the same way that the traditional lifecycle operations are called in a defined order but their functionality is not prescribed, the update operations are positioned likewise. The operations are preupdate, update, postupdate, update_config, update_apply, and update_postapply, called in that order. The addition of these new lifecycle operations gives the plugin author fine grained control over the meaning of update in the context of their resources.

CI/CD Impact: Besides creating harmony between the Cloudify Terraform plugin and Terraform’s popular Day-2 apply algorithm, the ability to assess the runtime state of infrastructure and services, along with a customizable deployment update capability, means that Cloudify can now support complex Day-2 scenarios without the need to write custom plugins.


Southbound Integrations

Auto Tagging

Correlating Cloudify blueprint deployments and node instances with cloud resources has always been a challenge, and version 6.4 addresses it. Blueprints are now automatically labeled, and the various plugins have been updated to tag resources with the blueprint label where possible. In supporting plugins, resources are tagged with the creating deployment ID, instance ID, and timestamp.

Terraform

Cloudify’s Terraform plugin took another big step forward in version 6.4. Earlier this year, Cloudify introduced the ability to import a Terraform module and automatically wrap it in a blueprint. This functionality has been enhanced with improved detection of outputs and variables, local imports, and secret store integration. With this integration, it’s easy to create new Terraform CI/CD pipelines, or adapt existing ones, using Cloudify.

Integration with tools in the Terraform ecosystem includes Terratag (related to the previously discussed auto tagging feature), TFSec for module security scanning, TFLint for module syntax checking, and Infracost for cost estimation. These all support a properly conceived CI/CD pipeline and DevOps or GitOps use cases.

Summary

The goal of CI/CD is the streamlined, repeatable delivery of high quality value to customers. It seeks to break down traditional barriers between developers and operators to enable frequent releases of features and bug fixes. The ability to deliver frequently with confidence requires a high level of automation, especially around testing, from the unit level through end-to-end.

Cloudify 6.4 is a breakthrough release for the platform, which connects CI with CD, infrastructure automation with service automation, ITSM with ITOM, and GitOps with automated operations in a flexible, unopinionated, open source package.
