The OpenStack Interoperability Paradox and How to Bridge It
Last week I had the honor of moderating, with my co-presenter Sharone Zitzman, our fourth OpenStack & Beyond Podcast. This time the topic was "Is OpenStack Really Ready for the Enterprise?", with Lauren Nelson, a senior analyst at Forrester Research, and Kristian Köhntopp, a veteran cloud architect from the hosting company SysEleven.
The discussion centered around the Forrester report OpenStack Is Ready — Are You?, an excellent research piece led by Lauren, as well as Kristian's talk "45 Minutes of OpenStack Hate," in which he laid out the challenges he encountered building a web-scale service on OpenStack.
Bringing both Lauren and Kristian onto this podcast helped to paint a fairly accurate picture of where OpenStack is today and what it really takes to make it enterprise-ready. The ability for a community and a research firm like Forrester to run such an open dialogue, exposing not just the positive side but also many of the challenges and difficulties involved with the adoption of OpenStack, is a sign of maturity in itself.
When we finished the discussion I couldn’t avoid thinking about what seems to be a serious paradox with the way enterprises are adopting OpenStack – let me explain.
In the diagram below, which reflects the results of the OpenStack user survey, it's clear that 82% of users run OpenStack in conjunction with other clouds.
Similarly, according to the Forrester report, enterprises choose OpenStack to enable interoperability and avoid lock-in:
An increasing number of large enterprises are seeking open source technology to launch this transformational journey. The goal is to avoid vendor lock-in and mitigate expensive licensing costs. Others see it as the promise of portability and interoperability of applications embracing a "design-once, run anywhere" solution — a reality that hasn't come to fruition yet.
At the same time the reality shows that OpenStack is used primarily to drive only new cloud initiatives as noted in the Forrester report:
OpenStack isn't your only private cloud or virtual environment designed to be your orchestrator across your traditional workloads. Rarely does one place OpenStack in front of legacy or traditional workloads in lieu of a proprietary private cloud suite. In reality, OpenStack sits behind net-new environments designed to launch your enterprise into a revolutionized continuous development experience.
The OpenStack Interoperability Paradox
One of the main reasons enterprises choose OpenStack is to enable better interoperability and portability; the reality, however, clearly shows that OpenStack is not there yet. Add to that OpenStack's current level of maturity, and it becomes clear that there are more fundamental issues OpenStack needs to address to fit the enterprise environment. Executing on its "design-once, run anywhere" promise remains a luxury at this point in time.
Bridging the OpenStack Interoperability & Portability Gap
The good news is that we don't have to wait for OpenStack to solve the portability and interoperability issues. This is where the ecosystem can come in handy.
As I laid out in one of my previous posts, Cloud Migration in the Enterprise, there are several techniques to address the interoperability and portability challenge, ranging from nested virtualization through API portability to automation and orchestration. Quite often it is necessary to combine a few of these techniques to achieve the best results.
In this post, I wanted to touch specifically on the portability between VMware/OpenStack environments, as this is probably the most common use case where such portability is needed.
VMware’s Approach to OpenStack Portability – VMware Integrated OpenStack (VIO)
VMware announced last year that it would join the OpenStack distribution war with its own OpenStack distribution on top of VMware infrastructure, and this year it will be announcing VIO v2. This solution provides portability of the technology stack: if I'm a VMware user who wants to move to OpenStack to reduce my risk of lock-in, I can use VIO as a more gradual step in that direction. It lets me leverage the maturity of the VMware technology stack, as well as the existing skill set my organization has developed, to make the transition smoother.
As Lauren mentioned in the podcast, a large part of VIO is still OpenStack, and many of the pitfalls that were noted as a maturity gap lie within that layer, not at the virtualization layer or lower infrastructure layers. This basically means that I would still face many of OpenStack's maturity gaps if I chose to go down that path.
In addition, while this solution provides portability of the technology stack, it doesn't let me take an existing application running on the VMware stack onto VIO; I still have to take care of portability at the application layer.
OpenStack & VMware Portability Using TOSCA-Based Orchestration
Another approach to cloud portability is to use orchestration as an abstraction layer between the VMware and OpenStack environments. The main advantage of this approach is that it works natively with each environment and provides abstraction at the application management layer. That means I can "templatize" my application using TOSCA and deploy it on OpenStack, vSphere, or vCloud Director; the orchestrator takes care of mapping the template onto the underlying environment's infrastructure.
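To make this concrete, here is a minimal sketch of what such a template might look like in Cloudify's TOSCA-based DSL. The plugin URLs, node types, and property names here are illustrative assumptions rather than exact API references; the point is that only the host node (and plugin import) changes between clouds, while the application node stays the same:

```yaml
# Sketch of a portable TOSCA blueprint (names and versions are illustrative).
tosca_definitions_version: cloudify_dsl_1_1

imports:
  # To target VMware instead of OpenStack, swap this import for the
  # vSphere plugin and change the type of app_host accordingly.
  - http://www.getcloudify.org/spec/openstack-plugin/1.2/plugin.yaml

node_templates:
  app_host:
    # On VMware this node would use a vSphere server type instead;
    # the web_app node below is unchanged either way.
    type: cloudify.openstack.nodes.Server
    properties:
      image: { get_input: image_id }
      flavor: { get_input: flavor_id }

  web_app:
    type: cloudify.nodes.WebServer
    relationships:
      # The application is modeled against the abstract host,
      # not against any cloud-specific construct.
      - type: cloudify.relationships.contained_in
        target: app_host
```

The design point is that cloud-specific details are confined to the host node and its plugin, so the application topology itself stays portable.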
Looking into the Future – Interoperability with Containers and Kubernetes
It is clear that containers are going to be the dominant layer for cloud native applications.
Container architecture provides a great way to simplify the portability task for those applications simply by the fact that the application packaging format is portable between OpenStack, VMware as well as other clouds.
While containers solve the challenge of application packaging portability, most of the solutions that rely on them assume an environment built entirely on containers, and don't fit as well with heterogeneous environments in which containers are only part of the blend.
To address this gap we can use the same TOSCA-based container orchestration approach to extend our OpenStack or VMware environment into the next generation container-based environment, as outlined in the diagram below. What’s interesting is that we can also deploy the entire stack on plain bare metal environments in just the same way.
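As a rough illustration of the idea (the node and relationship type names here are hypothetical placeholders, not the actual plugin's API), a blueprint can model the Kubernetes master and its hosts as ordinary TOSCA nodes, so the same container stack rides on whatever compute the environment provides:

```yaml
# Hypothetical sketch: Kubernetes modeled as TOSCA nodes on generic compute.
tosca_definitions_version: cloudify_dsl_1_1

node_templates:
  master_host:
    # Could be an OpenStack server, a vSphere VM, or a bare-metal machine;
    # only this node's type changes per environment.
    type: cloudify.nodes.Compute

  kubernetes_master:
    # Placeholder type standing in for a Kubernetes-master component.
    type: cloudify.nodes.SoftwareComponent
    relationships:
      - type: cloudify.relationships.contained_in
        target: master_host
```

Because the container layer is expressed as nodes on top of abstract compute, the same blueprint structure extends an OpenStack, VMware, or bare-metal environment into a container-based one.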
You can read the full details in DeWayne Filppi's post, which includes an example of provisioning Kubernetes on bare metal: "Cloudify Meets Kubernetes – Container Management & Orchestration on Bare Metal".
Worth Noting New Portability Features
We've positioned Cloudify as pure-play, open source automation, and I believe it is currently the only open source orchestrator that integrates natively with both OpenStack and VMware. There are a couple of things we've been working on lately to make this interoperability story stronger. The first is that we've released our vSphere plugin as part of Cloudify 3.2.1, which supports both vSphere 5.1 and 5.5. We've also made a major update to our vCloud support as the result of a long joint effort with VMware – expect a big announcement on this front next week at VMworld!
We're also making a major effort to let users leverage our TOSCA engine independently of Cloudify. We'll be announcing more on this front at the upcoming OpenStack Summit in Tokyo.
OpenStack Silicon Valley here we come!
You can hear more about all of this at the OpenStack Silicon Valley event this week.
I will be speaking on the panel Theory vs Reality: Is Modern Tooling Helping or Hurting OpenStack Adoption?
It would be a great chance to meet up – ping me on Twitter at @natishalom.