- July 30, 2015
- Posted by: Trammell
- Category: Uncategorized
Most software deployments are more complicated than the actual application being deployed. Usually, when we talk about an application, we’re talking about something very simple – an application written in a server-side scripting language plus a backend database instance. However, when it comes to deploying this application, things become a bit more complicated, because of requirements like security, monitoring, recoverability, and other such considerations.
On top of this, each deployment also probably requires coordinating any number of elements, such as:
- Hostname provisioning
- Network creation
- HTTP proxy
- Automation scripting
- Local repository mirrors
Many companies have invested in entire data centers and dedicated IT teams to handle the deployment process systematically – and both the hardware and the processes are aging.
Deploying applications in the cloud is notably simpler: it takes less time and less manpower, and freeing up both to work on more interesting things is the ultimate goal.
Many new products are entirely cloud-based and vastly more advanced than applications that aren’t yet five years old.
Most organizations transitioning to the cloud choose OpenStack-based environments because of their open nature, but still have at least one foot planted in a traditional data center. They may be bogged down by bureaucracy, but they have a lot of brains on their payrolls.
These organizations are looking for something like this, albeit much more complicated:
The topology of the application is essentially:
- Application endpoints in two regions in an IaaS (such as OpenStack)
- Database hosted in an internal data center
- DNS server hosted in an internal data center
But certain expensive projects stand between reality and this end goal – for example, enabling the DNS servers to interact with the IaaS, or moving the application endpoints to the IaaS. Each such project can involve several departments, approval processes, and QA tests at the very least.
Putting portable cloud orchestration tools in the hands of the “human Swiss Army knives” of the world will enable organizations to migrate off their aging hardware swiftly and systematically. Orchestrators centralize the design and management of application deployments, playing a role akin to that of a project manager in a software project.
Such a scenario, again simplified, would look like this:
There are many orchestration tools out there, and each one has a unique approach to how it treats applications and application deployments. (And if you’d like to learn more about this, you can catch our team’s talk on just this from OpenStack Vancouver – “Orchestration Tool Roundup” – where they dive into the differences and synergies of the different tooling, and how to choose the right tool for the job).
The majority are cloud-specific, which means that once you invest in learning one, you’re pretty much locked into the cloud provider that comes with it. For example, once you are established on OpenStack, moving to CloudStack would be a complicated and lengthy undertaking.
New tools are available that not only work on a multitude of platforms, but also work well together.
Docker is a wonderful tool because it enables you to create super-portable, everything-included application containers. Its newer companion tools, Docker Machine and Docker Swarm, let you provision Docker hosts and clusters of containers in a variety of environments with simple commands – simplifying this even further.
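As a rough sketch of that workflow, provisioning an OpenStack-backed Docker host and running a container on it might look like the following. The host name and the container image here are illustrative, and the OpenStack driver assumes your credentials are already exported in the shell:

```shell
# Illustrative only: assumes docker-machine is installed and OpenStack
# credentials (OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, etc.) are set.

# Provision a new VM on OpenStack with the Docker daemon installed on it
docker-machine create --driver openstack app-host-1

# Point the local Docker client at the new remote host
eval "$(docker-machine env app-host-1)"

# Run a containerized application on that host
docker run -d -p 80:80 nginx
```

The same `docker-machine create` command with a different `--driver` targets other environments, which is exactly the portability discussed above.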
Orchestrating with Docker is a nice way to work because it lets you roll up all of the application packaging logic, including its environment, and separate it from the infrastructure deployment.
In the next post, I will take the above and outline a practical example, and demonstrate how to orchestrate workloads in a hybrid environment – of traditional and cloud infrastructure – using Docker and OpenStack through a microservices use case.