Peter Chadwick, Senior Product Manager for Cloud Infrastructure at SUSE, wrote an interesting article arguing Why OpenStack Will Rule the Enterprise. Chadwick makes three main points in his argument:
- Variety – OpenStack is the only cloud platform that supports every major hypervisor.
- Familiarity – OpenStack follows an open source model similar to Linux. Most enterprises are already comfortable with that model and in that context OpenStack would be a natural evolution.
- Governance – The OpenStack Foundation was created to ensure that OpenStack evolves to meet a wide range of requirements and is not controlled by any single vendor.
Current Challenges for Enterprise Adoption of OpenStack
While there is growing acceptance that this is the general direction, others point out gaps that still stand in the way of enterprise adoption of OpenStack. Nancy Gohring, in her ITWorld article Immaturity Holding Back OpenStack Deployments, argues that the main challenges are a lack of maturity and the fact that OpenStack is used mostly for testing and development and is not yet ready for true multi-tenant production deployments. She also points out that many of today's OpenStack distributions are offered by relatively small companies, creating another barrier to adoption. Bernard Golden more specifically highlights upgrading and deploying OpenStack as the main challenges for enterprise adoption in his article How OpenStack Should Prepare Itself for the Enterprise.
A Possible Strategy for Adopting OpenStack in an Enterprise World Today
All the signs indicate that despite some of the maturity and growing pains of OpenStack as a framework and community, it is heading in the right direction. But as we often say in these cases, “there is no fast forward for maturity.” We should expect that over the coming years the reality will be fairly dynamic, with new developments arriving in every release and with many providers and players promoting their different OpenStack offerings. To compete with one another, these providers will also try to add their own value-added features as differentiators. Our proposed strategy therefore rests on the following foundations:
- Design to cope with a continuously changing and evolving environment: We need to design our OpenStack environment in such a way that we can adopt new versions of OpenStack and new ecosystem frameworks easily.
- Design for portability across different OpenStack providers and versions: During a visit to one of our investment banking customers, the customer used an interesting analogy to describe his strategy for implementing an OpenStack-based private cloud in his environment:
“In a traditional data-center world we used to work at a hardware device and hypervisor level. We used different resources from Dell, HP etc. We’re thinking of doing the same thing in the cloud world, only that rather than working at the device and hypervisor level, we can work at the IaaS level with the hardware, storage and network pre-integrated. OpenStack allows us to standardize those providers and have a mix of them installed in our data-center.”
In order to keep this level of flexibility, you need to ensure portability between the different OpenStack providers. This is especially true in a world where each provider tries to sell you an entire pre-integrated stack with OpenStack on top. If you design your data center to work with various providers in place, you’re more likely to remain portable and avoid complete lock-in.
- Use a high availability approach to enable simple upgrades without downtime: The simplest approach to upgrading any given system is to rip and replace it with a new one. Having said that, in most enterprises, taking a system down for upgrades or maintenance is often not acceptable, for obvious reasons. If we design our system for continuous high availability, we can take down each of the cloud zones one at a time and still keep our application running. In this way, we get a simpler upgrade approach: rather than trying to upgrade individual units of our infrastructure, we can rip and replace an entire unit one at a time to continuously upgrade our system.
- Take a baby-steps approach: As with any complex project, it is more practical to implement OpenStack in small bits rather than taking a *Big Bang* approach. Here are a few examples of what those steps could look like:
- Start with automation first and future-proof your application to run on OpenStack at a later stage.
- Run OpenStack for your development and testing environment.
- Use OpenStack as your DR environment and gradually move your primary site to OpenStack as well.
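To make the high-availability upgrade path described above concrete, here is a minimal sketch of ripping and replacing cloud zones one at a time while the rest stay live. The `Zone` class and `rolling_upgrade` function are illustrative stand-ins, not a real OpenStack or Cloudify API:

```python
# Hypothetical sketch of a rolling "rip and replace" upgrade across
# cloud zones -- made-up names, not a real OpenStack or Cloudify API.

class Zone:
    def __init__(self, name, version):
        self.name = name
        self.version = version
        self.live = True      # zone is currently serving traffic

def rolling_upgrade(zones, new_version):
    """Replace zones one at a time; at any moment only one zone is
    down, so the application keeps running on the remaining zones."""
    live_counts = []
    for i, old in enumerate(zones):
        old.live = False                          # take one zone down
        live_counts.append(sum(z.live for z in zones))
        zones[i] = Zone(old.name, new_version)    # rip and replace it
    return zones, live_counts
```

The point of the sketch is the invariant: with three zones, two are always live during the upgrade, so the application never goes fully offline.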
Implementing the Strategy by Abstracting the Application from the Infrastructure
The points outlined above may seem obvious, but it is not yet clear how they can be implemented.
One of the ways in which we can implement portability, future-proof our application and allow for smooth upgrades is through the use of DevOps automation frameworks, such as Chef, Puppet or Cloudify, which itself integrates natively with Chef and Puppet.
Such frameworks allow us to create an abstraction layer between our OpenStack infrastructure and our application, making it easy to migrate our application across different OpenStack providers, versions and so on.
Using Cloudify to Abstract your Deployment Between Different Private and Public OpenStack Providers
One of the challenges with many abstraction frameworks is that they often rely on a least-common-denominator API approach, which limits the unique features and capabilities that each specific vendor can expose.
In the case of Cloudify, we use a plug-in approach referred to as the Cloud Driver, which provides access to the IaaS layer, while a recipe model separately defines the application plan, or blueprint, that needs to be deployed.
With this approach, there is a basic contract/interface that each driver must implement to allocate compute, storage or network resources. Each driver can be given a set of specific properties and arguments through which the user can choose, for example, a specific network setup or compute API of a given driver. This approach provides a higher degree of flexibility than an API-based abstraction, and we can therefore apply it to a variety of scenarios, as outlined below:
- Migrating between different OpenStack providers: The application deployment plan (recipe) is kept abstracted from provider-specific information. This way we can deploy our application on different providers using the same framework, while our application can still run better on a specific provider and utilize its specific set of features.
- Migrating from an existing (non-OpenStack) data center to OpenStack: This abstraction allows us to work on top of our existing data center, which may not yet support OpenStack. We can use this model to manage our application in the existing data center by pointing to a BYON (Bring-Your-Own-Node) Cloud Driver, which takes a list or range of IP addresses as input and manages the workload within that predefined pool.
- Migrating between different OpenStack versions: We can also use this model to migrate workloads from one version of OpenStack to another. In this case, each provider availability zone represents a specific OpenStack version. We can upgrade our underlying OpenStack zone simply by removing it from the system; Cloudify in turn will automatically provision the application that was running on that zone in another zone, which is set up to run the new OpenStack version.
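The driver contract and the scenarios above can be sketched roughly as follows. All class and method names here are hypothetical stand-ins, not Cloudify's actual Cloud Driver API; the point is that the recipe codes against one interface while each driver keeps its provider-specific properties, including a BYON driver that simply hands out machines from a predefined IP pool:

```python
# Hypothetical sketch of the driver contract -- not Cloudify's actual
# Cloud Driver API. The recipe talks only to CloudDriver; each concrete
# driver hides its provider-specific behavior behind that interface.
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    @abstractmethod
    def allocate_compute(self, flavor):
        """Allocate a machine and return an identifier for it."""

class OpenStackDriver(CloudDriver):
    """Configured with provider-specific properties, so the same recipe
    can still exploit features unique to one provider."""
    def __init__(self, properties):
        self.properties = properties   # e.g. {"network_api": "quantum"}

    def allocate_compute(self, flavor):
        # Placeholder for a real Nova provisioning call.
        api = self.properties.get("network_api", "nova-network")
        return "openstack/%s/%s" % (api, flavor)

class ByonDriver(CloudDriver):
    """BYON: no provisioning at all -- machines come from a predefined
    pool of IP addresses in an existing data center."""
    def __init__(self, ip_pool):
        self.free = list(ip_pool)

    def allocate_compute(self, flavor):
        if not self.free:
            raise RuntimeError("BYON pool exhausted")
        return self.free.pop(0)        # hand out the next free machine
```

Because a deployment tool can be handed either driver without the application plan changing, migrating between providers, or from an existing data center into OpenStack, becomes a configuration change rather than a rewrite.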
Making Cloudify Native to OpenStack
One of the benefits of OpenStack as opposed to other cloud infrastructures is that it is open and provides easy access to its underlying stack. We can therefore build Cloudify in a way that utilizes some of the underlying OpenStack infrastructure, such as Heat for provisioning and Keystone for authentication.
We can also standardize the existing components of Cloudify, such as the recipe, by adopting TOSCA and Heat concepts and templates.
This way we can make the migration to OpenStack even easier. Not only do we enable our application to run on OpenStack, we do so in a way that makes better use of the underlying OpenStack infrastructure.