Your Guide to Infrastructure Automation & Hybrid Cloud Orchestration
The emergence of multiple public and private cloud platforms has provided a wealth of options for businesses seeking the scalability, agility, cost savings, and variety of features the cloud offers. The flip side of this is the complexity of mixing and matching different providers and processes.
There are many orchestrators and schedulers that address parts of the problem, both cloud-specific (e.g. Azure ARM, AWS CloudFormation (CFM), GCP Cloud Deployment Manager (CDM), OpenStack Heat) and cloud-neutral (e.g. Kubernetes, LXC, Mesos). Add to these the modern tool stacks that automate CI/CD and ITSM processes, such as Jenkins, Nexus, and Ansible. All of these tools have a role in hybrid cloud automation, but each represents a technology silo that ultimately presents potentially costly and wasteful integration challenges. And these are ongoing costs in organizations committed to exploiting new technologies and opportunities as they arise.
What Is A Hybrid Cloud?
Hybrid cloud refers to an operational combination of on-premises infrastructure, private cloud, and public clouds in a way that exploits the advantages of each. Hybrid cloud implies a workload integration or orchestration that joins multiple clouds for an operational purpose. Businesses may need the control provided by a classical self-managed datacenter for security, performance, or other reasons, whether or not they use cloud technology. At the same time, they can exploit the capabilities and scale of public clouds where advantageous.
Hybrid Cloud Solutions
Hybrid cloud solutions take advantage of different features and characteristics of multiple cloud platforms, sometimes alongside classical on-prem resources. This includes secure network connectivity for users to the SaaS and PaaS providers that may form part of the solution.
Cloud bursting is a solution typically targeted at the private/public hybrid that dynamically adds memory and compute capacity from the public cloud when the private cloud is saturated. Cloud bursting can provide higher availability to systems that experience demand peaks, especially predictable ones.
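At its core, cloud bursting is a placement decision. As a minimal sketch (the capacity constant and function are illustrative, not any platform's real API), the policy reduces to preferring private capacity and overflowing to public capacity when saturated:

```python
# Illustrative cloud-bursting placement decision. PRIVATE_CAPACITY and the
# placement function are hypothetical examples, not a real orchestrator API.

PRIVATE_CAPACITY = 100  # total schedulable units in the private cloud


def place_workload(units_requested: int, private_units_in_use: int) -> str:
    """Return which cloud should host a new workload.

    Prefer the private cloud; burst to the public cloud when the request
    would saturate private capacity.
    """
    if private_units_in_use + units_requested <= PRIVATE_CAPACITY:
        return "private"
    return "public"  # burst: allocate on-demand public capacity
```

A real implementation would also decide when to drain the burst capacity back, since public on-demand resources accrue cost while idle.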
IoT and edge clouds can also be part of an overall hybrid cloud solution. This can look like the inverse of the cloud-bursting scenario: operational control is maintained on a public cloud, while small private edge clouds provide low-latency services close to end users.
A hybrid cloud solution can satisfy data privacy requirements, which may mandate on-premises or private cloud residency for sensitive data, while still exploiting hyperscale public cloud resources for anonymized or less sensitive data.
Finally, some solutions dynamically select underlying cloud capabilities based on cloud characteristics (security, locality, availability, services) for workload placement.
Hybrid Cloud Solution Considerations
Enterprises consider hybrid cloud solutions in order to exploit best-of-breed solutions, improve service availability, and take advantage of the scale of cloud infrastructure. The flexibility of workload dynamism, placement (across private and public environments), and scale can be a game changer. These benefits do not come without cost, however, and careful consideration needs to be given to several aspects of hybrid cloud solutions:
- Fault tolerance: If your business doesn’t already host applications on cloud platforms, consideration needs to be given to how to adapt current workloads to the cloud. You will probably need to modify current applications to tolerate the more volatile and dynamic nature of the cloud, including the ability to automatically recover from failure and to scale to take advantage of the flexible infrastructure model the cloud provides.
- Security: Hybrid cloud networking introduces security challenges, especially around communication between clouds and credential/certificate management. This area needs careful consideration from the perspective of an attacker. In order to manage the complexity of a hybrid cloud network, you’ll want to develop standard security policies regarding firewalls, VPN tunneling, transport encryption, certificate management, etc.
- Data security: If your business stores sensitive data regarding clients, employees, or business operations, you must consider the security profile of the various cloud providers, and design systems in accordance with your risk tolerance and any applicable regulations, weighed against potential performance or other benefits.
What Is Multi-Cloud?
Multi-cloud refers to the utilization of multiple public clouds to take advantage of the attributes of each. A business might use one cloud for database or object storage, another as a platform as a service (PaaS), another for authentication, and so on. If a multi-cloud architecture includes a private cloud, it can be considered a hybrid cloud.
What Is A Public Cloud?
A public cloud generally refers to infrastructure (compute, storage, networking) as a service (IaaS) operated and maintained by a third party (e.g. AWS, Google) in datacenters accessible via APIs on the internet. Typically, PaaS/SaaS offerings (running on the managed infrastructure) are made available as well (e.g. databases, message queuing, etc.). Public clouds typically operate at extremely high scale.
What Is A Private Cloud?
A private cloud refers to cloud software running on internally maintained physical servers in corporate data centers or on premises. Examples include OpenStack, Azure Stack, and vSphere. The cloud software abstracts away the physical hardware and uses virtualization to offer virtual servers (along with networking, storage, and services) via UI and API.
What Is Cloud Native?
Cloud native refers to an approach to building, deploying, and operating applications that exploits the advantages of the cloud computing operating model. It is not a set of fixed technologies, but rather an approach that emphasizes automation, frequent changes, scalability, and availability to match the dynamic, elastic, and distributed nature of modern cloud platforms. Cloud native systems are typically decomposed into small, fault-tolerant services that cooperate and are operated independently. The term also refers to an ecosystem of technologies that has emerged to facilitate (and manage the complexity of) this model, such as Kubernetes (scheduling), Prometheus (monitoring), and Istio (service mesh).
What Is CI/CD?
CI/CD is an acronym meaning “Continuous Integration”/“Continuous Delivery”. The “continuous” in each could more realistically be translated as “frequent”. Continuous Integration refers to the automated process of building and testing frequently, often after each successful merge or “push” of a change to a software project. Continuous integration systems build the software in question and run automated tests to detect problems. Continuous Delivery refers to the automated process of deploying changed software to production (or another environment). The goal of Continuous Delivery is to make the deployment process painless, frequent, and reliable.
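The essence of such a pipeline is running stages in order and halting delivery at the first failure. As a toy sketch (the stage names and callables are hypothetical, not any CI system's real API):

```python
# Minimal, illustrative CI/CD pipeline runner: run each stage in order and
# stop at the first failure. Stage names and callables are hypothetical.

from typing import Callable


def run_pipeline(stages: list[tuple[str, Callable[[], bool]]]) -> str:
    """Run stages in order; return the name of the last stage that ran."""
    last = "none"
    for name, stage in stages:
        last = name
        if not stage():          # a failing build or test halts delivery
            return last
    return last


# A push triggers build -> test -> deploy; a failed test blocks the deploy.
result = run_pipeline([
    ("build",  lambda: True),
    ("test",   lambda: False),   # simulated test failure
    ("deploy", lambda: True),
])
```

Here `result` is `"test"`: because the simulated test stage failed, the deploy stage never ran, which is exactly the gate that makes frequent delivery safe.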
Infrastructure Automation: What Is It?
Infrastructure Automation (IA) is the automated provisioning of virtual infrastructure and services, both on premises and on cloud IaaS platforms. IA tools are used to construct reusable building blocks or even complete environments (including compute, networking, and storage), often for CI/CD processes. IA tools generally provide a declarative templating language that permits users to define a desired end state, as opposed to procedural instructions against the underlying cloud APIs.
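Under the hood, a declarative IA tool compares the desired end state to the current state and derives the actions needed to converge them. A minimal sketch of that reconciliation idea (the resource names and diff format are illustrative, not any tool's real output):

```python
# Sketch of the declarative model behind IA tools: the user declares a
# desired end state; the tool computes the create/update/delete actions
# needed to reach it. Resource names and diff format are illustrative.

def plan(desired: dict, current: dict) -> list:
    """Diff desired vs. current state into a sorted action list."""
    actions = []
    for name in desired.keys() - current.keys():
        actions.append(("create", name))          # declared but missing
    for name in current.keys() - desired.keys():
        actions.append(("delete", name))          # present but undeclared
    for name in desired.keys() & current.keys():
        if desired[name] != current[name]:
            actions.append(("update", name))      # drifted configuration
    return sorted(actions)


desired = {"vm-web": {"size": "large"}, "net-front": {"cidr": "10.0.0.0/24"}}
current = {"vm-web": {"size": "small"}, "vm-old": {"size": "small"}}
actions = plan(desired, current)
# → [('create', 'net-front'), ('delete', 'vm-old'), ('update', 'vm-web')]
```

The user never writes the action list; it is derived from the template, which is what makes declarative templates repeatable and reviewable.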
Ready to see Hybrid Cloud orchestration in action?
Download Cloudify now!
Infrastructure Automation Challenges of Multi-Cloud
Multi-cloud (or hybrid cloud) systems present challenges for cloud agnostic IA tools. Some challenges include:
- Security: The hybrid cloud presents many challenges to security, including credential management, data migration, and a much larger attack surface. Depending on the specific architecture, it can also involve multiple security regimes that have to be understood. Without automation that can span diverse technologies, the chances of a breach are much higher.
- Resource management: The management of resources across multiple platforms and APIs is a source of much complexity and friction in a hybrid cloud architecture. The ability to allocate, scale, and decommission resources with automation that minimizes the need for platform-specific knowledge is crucial.
- Networking: With hybrid cloud architectures, the need for network automation is fundamental. Beyond the resource management described above, an orchestration layer that is template driven and cloud agnostic can encode and simplify networking implementation, including VPNs, load balancers, certificate management, and external service access.
- Observability: Complex hybrid architectures present challenges for observability and diagnostics across multiple platforms. Sophisticated automation is needed to dynamically deploy, scale, and maintain the flow of events needed to maintain high availability.
Benefits Of Hybrid Cloud Orchestration
The complexity of software architectures is growing with the adoption of cloud native approaches, as is the operation of these architectures. The addition of multiple clouds and/or container orchestration tools merely multiplies the complexity and presents great challenges to both development and operations. Complexity equals friction when it comes to rapidly deploying services and dynamically adapting workloads to changing circumstances.
Orchestration is the automation of high-level operational activities through a series of steps, either defined explicitly (via scripting/workflow) or implicitly (via a model), to achieve an outcome. In recent years, with the advent of cloud computing and the ability to allocate virtual machines, networking, and storage across the globe, the role of orchestration has grown in importance. In essence, manual processes cannot deal with the complexity and dynamism of modern cloud stacks, and high-level automation is essential. The complexity of cloud stacks also introduces opportunities for security holes, further elevating the criticality of automation that can be polished and revised. All cloud platforms provide their own orchestration solutions to address these needs: AWS has CloudFormation (CFM), Azure has ARM, GCP has Cloud Deployment Manager (CDM), OpenStack has Heat, and so forth.
If you’re moving to multi or hybrid cloud architectures, you’re going to need another layer of orchestration that can coordinate and command sub-orchestrators (a pattern called “orchestrator of orchestrators”). Hybrid cloud orchestration adds yet another layer of abstraction above and beyond that provided by cloud platforms (which already abstract servers, storage, and networking). The level of abstraction provided at the top level of orchestration can have a great impact on the complexity of the orchestration versus the ability to use specific desirable features of the underlying cloud. Completely abstracting away cloud provider differences has the potential to enable extremely simple building-block orchestrations, but at the cost of losing access to desirable platform features. Since one of the biggest rationales for selecting hybrid cloud architectures in the first place is the ability to use best-of-breed features from various cloud platforms, a pluggable platform is almost always more desirable than a high-level abstraction that obscures cloud differences and leads to settling for the lowest common denominator in features.
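The pluggable pattern can be sketched as each platform's native orchestrator sitting behind a common interface, so the top layer dispatches uniformly without hiding platform-specific templates. The class and method names below are hypothetical illustrations, not Cloudify's actual API:

```python
# Sketch of a pluggable "orchestrator of orchestrators": each cloud's native
# orchestrator is wrapped in a plugin behind a common interface. Names here
# are illustrative, not any real product's API.

from abc import ABC, abstractmethod


class CloudPlugin(ABC):
    @abstractmethod
    def deploy(self, template: dict) -> str:
        """Hand a platform-specific template to the sub-orchestrator."""


class CloudFormationPlugin(CloudPlugin):
    def deploy(self, template: dict) -> str:
        return f"aws:{template['name']}"     # would invoke AWS CloudFormation


class ARMPlugin(CloudPlugin):
    def deploy(self, template: dict) -> str:
        return f"azure:{template['name']}"   # would invoke Azure ARM


PLUGINS: dict[str, CloudPlugin] = {
    "aws": CloudFormationPlugin(),
    "azure": ARMPlugin(),
}


def deploy(platform: str, template: dict) -> str:
    """Top-level dispatch: one call site, per-platform sub-orchestrators."""
    return PLUGINS[platform].deploy(template)
```

Because the template passed through is still platform-specific, the top layer coordinates without flattening clouds to a lowest common denominator; adding a new platform means adding a plugin, not rewriting the top layer.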
A typical hybrid cloud stack contains:
- Physical: the servers, networking interconnections, storage devices, and supporting operational (e.g. datacenter) infrastructure.
- Hypervisor: virtual machine creation, destruction, placement on hardware, and networking and storage configuration. Examples include KVM, Hyper-V, and ESXi. Container engines like Docker also fit this layer.
- Cloud: the familiar infrastructure as a service (IaaS) layer that provides a self-service experience, including APIs and user interfaces. Here also live the platform-specific orchestrators such as Heat and CFM. In the case of containers, Kubernetes might be employed at this level, as in Amazon EKS and Google GKE.
- Hybrid Cloud: finally, at this layer lives the tooling that abstracts and orchestrates the different cloud platforms and container engines.
The power of multi-cloud orchestration comes from multi-layer abstraction. In the layered model above, the infrastructure layer can hide the fact that it is providing resources from multiple clouds, and domain layers can hide platform specifics from the service layer. The conceptual separation of these layers insulates higher-layer automation from the complexity below, and enables each layer to be managed separately. Because of this separation, changes to any layer can be made with minimal disruption and error, and the overall automation can be approached as a series of reusable building blocks.
Multi and Hybrid Cloud orchestration addresses the fundamental concerns of scalability, platform choice, and the flexibility to place workloads when and where they make the most sense from multiple perspectives. Combining clouds can allow an enterprise to optimize workloads based on:
- Cost: Certain providers have lower costs overall, or lower costs for particular use cases. Some also provide cost advantages for using resources at certain times or depending on overall cloud load (for example, AWS spot instances).
- Reliability: Reliability, possibly interdependent with cost, is another dimension of concern that can vary by workload. Workloads that do not require high availability (for example, batch processing) can potentially be placed on less reliable platforms for a cost benefit.
- Locality: The ability to place workloads in availability zones close to (sometimes not well served) customer centers can be necessary or desirable for performance or regulatory reasons.
- Performance: Compute and I/O performance can vary by cloud provider and availability zone. Related to this is the agility of the solution; the ability to spin up, tear down, or relocate workloads can be critical for certain classes of applications.
- Storage location: The available locations of storage capacity can be critical for regulatory requirements. Different clouds have different storage options, which can also intersect with topology and cost concerns.
- Security: Cloud platforms vary in their support for security, including authentication, authorization, logging, monitoring, APIs, security portals, and support. An enterprise can mix and match workloads, security features, and cost to optimize deployments.
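A placement engine balancing these dimensions can be reduced to a weighted score per cloud, with weights set per workload. The clouds, scores, and weights below are made-up example data, purely for illustration:

```python
# Toy workload-placement scorer: rate each cloud on normalized characteristics
# (cost, reliability, locality, ...) and pick the best weighted fit for a
# given workload. All scores and weights are invented example data.

def best_cloud(clouds: dict, weights: dict) -> str:
    """Return the cloud whose attributes best match the workload weights."""
    def score(attrs: dict) -> float:
        return sum(weights[k] * attrs[k] for k in weights)
    return max(clouds, key=lambda name: score(clouds[name]))


clouds = {
    # higher is better on each normalized 0..1 axis
    "public-a": {"cost": 0.9, "reliability": 0.7, "locality": 0.5},
    "private":  {"cost": 0.4, "reliability": 0.9, "locality": 0.9},
}

# A latency-sensitive workload weights locality heavily.
choice = best_cloud(clouds, {"cost": 0.2, "reliability": 0.3, "locality": 0.5})
# → "private"
```

Changing the weights (say, favoring cost for a batch job) changes the placement, which is the essence of optimizing workloads across combined clouds.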
In addition to virtual infrastructure, cloud platforms supply software as a service with capabilities such as object storage, relational databases, machine learning, message queuing, serverless computing, and much more. The ability to leverage cloud services via automation is a fundamental value of hybrid cloud orchestration, whether connecting private or public clouds to services across platforms.
The following use cases exemplify the application of the Cloudify orchestrator to the hybrid cloud challenge.
Private Cloud As Code
In this use case, Cloudify is used to integrate an existing VMware vSphere environment, including NSX and vRO/vRA. Cloudify adds declarative models to simplify service and environment construction and teardown. Via Cloudify plugins, the ability to orchestrate Kubernetes, Terraform, and Ansible is made available. The integration with Jenkins permits environment construction and configuration to be standardized as part of a CI/CD pipeline, delivering on the promise of a versionable, repeatable, code-driven process.
Self Service Portal
This use case builds on the “Private Cloud as Code” use case above. The entire environment described above is presented in a simple portal/dashboard for easy use by non-DevOps users. The portal is multi-tenant with role-based authentication. It displays and provides for the operation of infrastructure and service deployments on the environment, and also shows aggregated logs and detailed runtime information about deployed infrastructure and services. It integrates with ITSM services like ServiceNow. Finally, it is highly customizable, including the ability to develop custom views and functions.
Cost Optimized CI/CD
Typical test environments are relatively static in nature and tuned to the most demanding scenario they must support. Decoupling environment creation and configuration from the test provides the ability to run tests optimized for different scenarios, achieving more realistic coverage. For example, creating ephemeral test environments using on-demand cloud capacity can optimize cost. Also, the ability to encode environment creation as code makes it a first-class agile development artifact.
18x Savings on a Complex Hybrid Cloud Automation Project
The ability to leverage sunk-cost automation investments is a key benefit of Cloudify. In this case, existing Azure ARM templates were leveraged and driven through the integration of ServiceNow and Cloudify. In addition, Cloudify functions as a VNFM managing branch connections into the system. The overall automation was simplified by using Cloudify’s declarative DSL to model infrastructure and services at multiple layers. Finally, the integration with Jenkins drives CI/CD operations for agile system updates.
Return On Investment
Hybrid cloud orchestration yields large returns on investment, particularly if the orchestration platform is open and extensible. The savings flow from automation itself, and are amplified by the platforms and strategy. The savings can be broken down into categories.
Cost Policy Automation
Optimizing workload placement – using spot instances, for example, or clouds with specific cost profiles based on operational conditions. Operational workloads managed on public clouds are typically cheaper than maintaining equivalent capacity yourself (although this can vary).
Decommissioning of unused resources – scaling deployments based on actual or anticipated capacity needs. Automation can downscale unused or underused resources immediately, maximizing cost savings.
Using open source DevOps and orchestration tools lowers private cloud licensing costs.
Reduced dependency on premium licenses for DevOps toolchains. Adding a top-level orchestrator (or “orchestrator of orchestrators”) can effectively deliver functionality otherwise provided by the underlying orchestrators, saving licensing costs.
Reuse of existing automation, rather than rewriting and re-debugging working automation. An open orchestrator can interface with any existing automation, eliminating the need for a “big bang”, potentially catastrophic rewrite. It is better to embrace and extend existing automation, then replace redundant or undesirable parts when it makes sense.
A multi-tenant, self-service, user-extensible portal. Everything is automated. The out-of-the-box portal provides a “single pane of glass” to see the state of all system automation.
High availability and scalability due to clustered architecture. Manage almost unlimited resources with a dynamic, horizontally scalable, clustered architecture.
Unopinionated, open architecture means quickly adapting to changing toolchains and services. The rate of change in software tooling is accelerating, and planning for future change is essential to avoid getting locked into soon-to-be-obsolete technologies. This means not only an extensible architecture, but also an extensible language for describing automations.
A declarative approach permits built-in workflows to be generic, eliminating the writing of many complex, error-prone workflows. Declaratively driven systems describe the desired state of a system and don’t specify the steps to achieve it. Imperative systems provide no human-understandable model, but instead require that the explicit instructions for every automation be custom tailored, which is both expensive and error prone. A declarative, model-driven approach that can natively be extended to new domains is a better way to define automation goals.
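As one concrete example, the cost-saving decommissioning policy described above can be expressed as a small declarative rule that maps observed utilization to a desired replica count, leaving the steps to a generic workflow. The threshold values and function below are illustrative assumptions, not any product's defaults:

```python
# Sketch of a declarative decommissioning rule: given measured utilization,
# declare the desired replica count; a generic reconciler performs the
# actual scale-down. Thresholds are illustrative example values.

def target_replicas(current: int, utilization: float,
                    low: float = 0.2, minimum: int = 1) -> int:
    """Halve the replica count while utilization stays under `low`,
    never dropping below `minimum`."""
    if utilization < low and current > minimum:
        return max(minimum, current // 2)
    return current  # utilization is healthy; declare no change
```

The rule states only the desired outcome; how instances are drained and terminated is left to the same generic workflow used for every other resource, which is exactly the economy the declarative model buys.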
Cloudify is a declarative, extensible, standards-based, open source orchestrator built to provide automation for heterogeneous infrastructure and services. It integrates with all major cloud platforms out of the box (AWS, Azure, GCP, VMware, OpenStack, etc.) and is highly customizable and adaptable to future technologies. Likewise, it integrates with existing toolchains beyond infrastructure, such as CI/CD tools, Ansible, Kubernetes, and Terraform. Its extensible DSL vocabulary makes it easy to tailor to arbitrary use cases and levels of complexity, delivering a declarative, everything-as-code experience. The DSL is also composable (not monolithic), allowing for a microservices approach to orchestration automation.
Features and Benefits
Effortless transition to public cloud and cloud-native architecture by automating existing infrastructure alongside cloud native resources.
Break automation silos using an ‘orchestrator of orchestrators’ approach which can house all automation platforms (Kubernetes, Ansible, AWS CloudFormation, Azure ARM, Terraform, etc.) under a common management layer. And be ready to incorporate new platforms and technologies as they appear.
Cost optimization through end-to-end modeling of the entire infrastructure, enabling cost-saving policies such as decommissioning of resources. The decoupling of model from implementation in Cloudify makes systems both more understandable and more maintainable.
Reduced deployment time by bringing infrastructure, networking, and security into reusable, templatized environments, allowing deployment of a variety of tasks in hours rather than weeks for applications that run on similar configurations. The composability of Cloudify templates makes it possible to abstract elements to their essentials so they can be substituted for each other; a form of polymorphism at the system level.
A highly customizable catalog and portal framework, built to provide a self-service experience that is tenant-aware. Not only customizable by selecting prebuilt “widgets” for inclusion in the console, but customizable via user defined widgets that can supply any user experience desired.
A horizontally scalable architecture that can support almost unlimited numbers of orchestrated resources. And open source: not a black box that hides poor implementation, security holes, and bugs, closed to user contributions.
Glossary
Hybrid Cloud – a cloud solution or application that spans multiple public and/or private clouds.
Multi-Cloud – a cloud solution or strategy that utilizes multiple cloud platforms, but doesn’t combine them.
Public Cloud – a (probably vast) collection of physical servers running virtual machines, presenting the individual hardware as a single pool of compute, storage, and networking, made available to the public over the internet via API and GUI. Examples include AWS, Azure, and GCP.
Private Cloud – like a public cloud, a collection of servers running virtual machines, presented as a single resource, but only made available internally to the enterprise. Examples include OpenStack, Azure Stack, and vSphere.
Cloud Infrastructure – the “I” in “IaaS”: virtual networking, storage, and virtual machines (or containers) made available on demand via software (UI and API).
Continuous Integration (CI) – the practice of performing automated testing frequently on a code base. Testing is often triggered automatically by events in a version control system (e.g. developer contributions or merges).
Continuous Delivery (CD) – the practice of automated delivery of software to staging and/or production. Generally paired with continuous integration, but triggered by significant milestones (releases and bug fixes).
Declarative – As contrasted with imperative (or procedural), a way of programming by means of specifying desired outcomes via a model, as opposed to supplying a sequence of commands.
Service Orchestration – The deployment, updating, operation (scaling/healing), and decommissioning of software onto virtual or physical servers.
Container – an isolated execution environment in an operating system, a lightweight alternative to a virtual machine.
Virtual machine – a software emulation of a physical server.
Kubernetes – An open source container management platform that abstracts underlying hardware (virtual or physical), networking, and storage.
Data Center – A facility that hosts computer hardware. A data center centralizes compute capacity, and so is typically subject to stringent physical security and disaster/failover protections. Data centers are the building blocks of the cloud, and a failover pair of them can roughly map to availability zones in public clouds.
Availability Zone – A “virtual” data center that provides failover protection for high availability. An availability zone will typically consist of geographically separated data centers in a particular region.