Terraform vs Kubernetes

Scaling On-Demand Apps with Terraform and Kubernetes

One constant about technology is that nothing is constant: everything changes all the time. Once-ubiquitous monolithic systems gave way to service-oriented architectures, which in turn are starting to yield to microservices. Scalability is the main driver of the change, but there are other factors at work as well. This article will explore how to build these scalable systems using Terraform and Kubernetes. Both tools are rising in popularity, and you may wonder which one you should invest in. If that’s the case, read on.

In order to scale, on-demand applications must utilize elastic resources that can be brought up or torn down in a matter of seconds. From a very high level, there are two layers for implementing such solutions. The base layer is the physical infrastructure: computing power, networking, and data storage. The upper level is the workload orchestration that controls what is processed, where it is processed, and how it is processed.

The two layers are usually very loosely coupled, but there is some interaction between them. Certain workloads may require specific hardware, so those requirements must be properly addressed both during infrastructure planning and during orchestration. We’re going to look at the tools that help us meet these requirements at both levels: Terraform to control the infrastructure, and Kubernetes to orchestrate the workload.

Both tools can be used for orchestration, and both rely on source-controlled configuration files to achieve it. Even though their scopes overlap, in most cases one cannot easily replace the other. When used appropriately, they can work in tandem to provide you with a full-stack deployment solution.

Terraform is a great choice if you want to control resources such as VM instances, DNS records, routing tables, and generally all things low-level. It can help you manage your GitLab projects as well, so its scope is pretty broad. Kubernetes, on the other hand, does only one thing, but does it well: it focuses on managing containers and everything they need to work properly. Everything else is abstracted away to keep a clear line of sight on your target.
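
For instance, pointing a DNS record at a server takes only a few lines. Here is a minimal sketch using the AWS provider; the zone ID variable, domain name, and IP address are all placeholders:

resource "aws_route53_record" "www" {
  zone_id = "${var.zone_id}"   # hypothetical variable holding a Route 53 hosted zone ID
  name    = "www.example.com"
  type    = "A"
  ttl     = "300"
  records = ["203.0.113.10"]   # placeholder IP; in practice, often an interpolated instance address
}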

Let’s start from the bottom.

Version Control Your Infrastructure

How long will it take to move your service to a new AWS account? What would happen if traffic increases tenfold? Can you roll back a deployment when things go wrong? Or go back several versions?

If you keep your infrastructure as version-controlled code, none of the above concerns will pose a problem. Infrastructure as Code serves several purposes. First, it’s great documentation: every piece of your service is laid down in text, which can serve as learning material or an audit reference. It is also a basis for automation: one command is all it takes to launch your blueprint into production, set up a new region, or scale an existing one.

Apart from proprietary solutions like AWS CloudFormation, the most popular tool in this category is Terraform. It’s vendor-agnostic, features a powerful interpolation syntax, and includes abstractions good enough to catch the most obvious errors before it starts modifying your resources. A sample EC2 instance looks like this:

resource "aws_instance" "web" {

  ami           = "${var.my_ami_id}"

  instance_type = "t2.micro"

  tags {

    Provider= "Terraform"

  }

}

Can you guess what the arguments mean? In case they’re not obvious, you can always consult the documentation, which explains them in detail. What’s great is that Terraform is not locked in to Amazon: you can use it with Google Cloud or Azure as well. In fact, there are many more providers available, such as DigitalOcean, Docker, or GitLab. Many things can be automated with this approach.
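
To illustrate, a comparable instance on Google Cloud is simply a different resource type. A minimal sketch; the machine type, zone, and image are illustrative choices:

resource "google_compute_instance" "web" {
  name         = "web"
  machine_type = "f1-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"   # a public Debian image family
    }
  }

  network_interface {
    network = "default"
  }
}

The same terraform plan and terraform apply workflow drives both providers; only the resource definitions change.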

Ship it in Containers

Have you decided how to ship your microservices-based architecture? These days, most IT professionals choose Docker containers. Even though they require some initial getting used to, Docker containers offer two important advantages. One is isolating applications from one another, each in its own self-contained environment. The other is making sure applications work the same way in development as in production, which spares you unpleasant surprises when you go live and eases debugging if things do go wrong later on.
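
Incidentally, you can even drive Docker itself declaratively through the Docker provider mentioned earlier. A minimal sketch, assuming a local Docker daemon and using nginx as an arbitrary example image:

provider "docker" {
  host = "unix:///var/run/docker.sock"   # assumes a Docker daemon on the local machine
}

resource "docker_image" "nginx" {
  name = "nginx:1.15"
}

resource "docker_container" "web" {
  name  = "web"
  image = "${docker_image.nginx.latest}"

  ports {
    internal = 80     # port inside the container
    external = 8080   # port published on the host
  }
}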

Containers also demand a better architecture. Since they are separate, they need to be loosely coupled. In practice, this means isolating the few stateful parts (like a database or a file server) and keeping everything else stateless. And stateless is great for scalability.

How Do You Orchestrate a Workload?

Just as Docker rules the container world, orchestration is normally handled by Kubernetes. But first, a review of what orchestration is: it’s the process of making sure applications run where they are expected to and can handle the desired workload. For a typical web application, this can mean a load balancer running on a machine in a public subnet and one or more back-end applications running in private subnets. This may be augmented by one or more databases, in-memory cache systems, or message queues.
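
In Kubernetes terms, that public entry point is typically a Service of type LoadBalancer forwarding traffic to back-end pods. Here is a minimal sketch, written with Terraform’s Kubernetes provider (more on that below) to stay consistent with the other examples; the names, label, and ports are illustrative, and a plain Kubernetes YAML manifest would express the same thing:

resource "kubernetes_service" "frontend" {
  metadata {
    name = "frontend"
  }

  spec {
    selector {
      app = "web"                # route traffic to pods labeled app=web
    }

    port {
      port        = 80           # port exposed by the load balancer
      target_port = 8080         # port the back-end pods listen on
    }

    type = "LoadBalancer"        # ask the cloud provider for an external load balancer
  }
}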

To use Kubernetes, you first need to have a Kubernetes cluster. For experimentation, Minikube serves well. For production use, either set up a proper cluster yourself or use a managed offering like the ones from AWS or GCP; they’re great for most use cases.
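
Assuming Minikube and kubectl are installed, a local playground is only two commands away:

minikube start     # boot a single-node Kubernetes cluster in a local VM
kubectl get nodes  # verify the cluster is up and reachable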

But Which is Better?

Both Kubernetes and Terraform can help you with orchestration and scalability. Kubernetes deals with Docker containers, so if your application isn’t containerized yet, it will require some preliminary work. Terraform operates on what can be seen as hardware, so it suits any kind of workload, even so-called “legacy” ones.

Even if you choose Kubernetes, the question of which tool is preferable isn’t a simple either/or. Remember the Terraform providers mentioned previously? There is one for Kubernetes that lets you configure things such as persistent volumes. And if you decide to go the managed route, you can set up your GCP or AWS accounts to enable Kubernetes-as-a-Service.
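
A minimal sketch of such a persistent volume, assuming the cluster runs on AWS and that the backing EBS volume ID comes from a variable defined elsewhere:

resource "kubernetes_persistent_volume" "cache" {
  metadata {
    name = "cache"
  }

  spec {
    capacity {
      storage = "10Gi"
    }

    access_modes = ["ReadWriteOnce"]

    persistent_volume_source {
      aws_elastic_block_store {
        volume_id = "${var.ebs_volume_id}"   # hypothetical variable: an EBS volume created elsewhere
      }
    }
  }
}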


But the most interesting case may be workloads with specific hardware requirements. Say you need a particular GPU for some of your workloads. No problem: provision it with Terraform by choosing the appropriate instance type, set up the cluster, and orchestrate the workload with Kubernetes. Who said you can’t have the best of both worlds?
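
On AWS, for example, that boils down to picking a GPU-equipped instance type. A minimal sketch; the AMI variable is a hypothetical image with the GPU drivers baked in:

resource "aws_instance" "gpu_node" {
  ami           = "${var.gpu_ami_id}"   # hypothetical variable: an AMI with GPU drivers preinstalled
  instance_type = "p2.xlarge"           # an instance type with an attached NVIDIA GPU

  tags = {
    Role = "kubernetes-gpu-worker"
  }
}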

Further Steps

The concepts discussed in this article may seem cool, but you are probably wondering whether it’s worth migrating to any of the proposed solutions. If you aren’t concerned about migration or scalability, the benefits may not justify your investment. But there are more possibilities once you start treating more of your resources as code.

One of them is Continuous Integration and Continuous Delivery. Just like you can test your application code, you can test your infrastructure, deployment, or orchestration code. You can catch bugs as soon as they are introduced. This can eliminate downtime due to broken deployments. No more bug hunting on a Friday night.
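
For Terraform code, a minimal CI step might look something like this:

terraform init -backend=false   # install providers without touching remote state
terraform validate              # catch syntax and basic configuration errors early
terraform plan                  # surface exactly what would change, for review

If any of these steps fail, the pipeline stops before anything reaches production.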

Security audits get easier as well. Logging into different machines, reading logs, running port scans, and probing the production systems can be partially replaced by reading Terraform files. And speaking of security: Immutable Infrastructure can help with that as well. When you can regenerate your entire infrastructure at any time, you can run your applications on read-only filesystems; upgrading an application then means deploying new instances. A read-only filesystem means one less thing for attackers to tamper with.

If you are interested in service orchestration, check out how you can leverage Terraform, Kubernetes, and more with Cloudify. The future of most businesses lies in the cloud, and it’s a good idea to embrace the change now.



Author: Ilan Adler
Ilan is a marketing data analyst at Cloudify. He works to help users understand the power of the Cloudify orchestration framework and how it can fit a variety of use cases, from Edge Orchestration, NFV Orchestration, and vCPE to Hybrid Cloud and more.
