Terraform vs Kubernetes

Scaling On-Demand Apps with Terraform and Kubernetes

When it comes to technology, the only constant is that nothing is constant. The industry continues to move at a fast pace, continuously evolving. What began as monolithic systems re-formed into service-oriented architectures, and today these solutions are moving toward microservices. The change is primarily driven by scalability, though there are other factors at play.


We’ll explore how to build scalable systems using two tools that are gaining in popularity: Terraform and Kubernetes. This article explains some of the intricacies of these tools; if you’re trying to decide which to invest in, it’s a good place to start evaluating the two solutions.

On-demand applications depend upon elastic resources which can be brought up or torn down (in a matter of seconds) in order to scale. Speaking from a very high level, implementing these solutions requires two layers:

  • The base layer, which is the physical infrastructure. This includes computing power, data storage, and networking. 
  • The upper layer, which comprises the workload orchestration and controls what is processed, where, and how.

These layers are typically loosely coupled, though there is some interaction between the two. Specific hardware may be required for certain workloads, and these requirements must be properly addressed both during infrastructure planning and orchestration. We’ll have a look at the tools that meet these requirements on both levels: Terraform for controlling the infrastructure layer, and Kubernetes for orchestrating the workload.

Both Terraform and Kubernetes rely on source-controlled configuration files to be used for orchestration. Although there is an overlap in scope, most cases don’t allow for one to replace the other. They can, however, work in tandem to provide a full-stack deployment solution.

Terraform is effective for controlling resources such as DNS records, routing tables, VM instances, and generally all low-level things. It can also help manage GitLab, so it clearly has a broad scope.

Kubernetes has one job – and it does it very well. Its focus is on managing containers, along with whatever they need to work properly. Anything unrelated is abstracted away, helping to keep a clear line of sight on your target.

Let’s explore some aspects of these tools in more depth, starting from the ground up.

Version Control Your Infrastructure

How long will it take to migrate your service to a new AWS account? What would happen in the case of a significant traffic increase? Is it possible to roll back a deployment if an issue arises? What if you need to roll back several versions?

All of these concerns are mitigated with one simple tactic: Infrastructure as Code. Maintaining your infrastructure as version-controlled code serves several purposes.

Firstly, it serves as a great documentation strategy, recording every piece of your service in clear text. This can serve as learning material or an accounting reference. It is also a basis for automation: launching your blueprint into production, setting up a new region, or scaling an existing one becomes as easy as entering one command.
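As a sketch of that one-command workflow, a typical Terraform session (assuming Terraform is installed and the current directory holds your configuration) looks like:

```
# Download provider plugins for the configuration in this directory
terraform init

# Preview the changes Terraform would make, without touching anything
terraform plan

# Apply the changes (add -auto-approve to skip the confirmation prompt)
terraform apply
```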

With the exception of proprietary solutions such as AWS CloudFormation, the most popular tool for IaC is Terraform. It’s vendor-agnostic, features a powerful interpolation syntax, and its built-in validation is good enough to catch most errors before any of your resources are modified.

Here’s a sample of an EC2 instance:

resource "aws_instance" "web" {
  ami           = var.my_ami_id
  instance_type = "t2.micro"

  tags = {
    Provider = "Terraform"
  }
}

Want to take a guess at what the arguments mean? If they aren’t obvious, Terraform’s documentation outlines them in detail. Terraform also benefits from not being locked in to Amazon: the same workflow works with Google Cloud or Azure. In fact, many more providers are available (DigitalOcean, Docker, and GitLab, for example), and a wide variety of actions can be automated with this approach.
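For instance, switching clouds is largely a matter of swapping the provider block and resource types. A hypothetical Google Cloud equivalent of the instance above (the project ID, machine type, and image are illustrative) might look like:

```
provider "google" {
  project = "my-project-id"   # placeholder project
  region  = "us-central1"
}

resource "google_compute_instance" "web" {
  name         = "web"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    network = "default"
  }

  labels = {
    provider = "terraform"
  }
}
```

The shape of the configuration stays the same; only the provider-specific arguments change.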

Ship It In Containers

Deciding how to ship your microservices-based architecture? Most often, IT professionals go with Docker containers. Although they take a little getting used to, Docker containers bring two important advantages. One is isolation: each application runs in its own environment, with its own filesystem and process space. The other is consistency: applications run the same way in development as they do in production. This saves you a lot of potential issues when you go live, while making debugging easier if things do go wrong at some point.
The nature of containers demands better architecture. Since they stand separately, they need to be loosely coupled. In practice this means a few stateful parts (a database or file server, for example) while everything else is stateless, which is great for scalability.

How To Orchestrate A Workload?

In the same way that Docker rules the container world, in the orchestration arena, Kubernetes runs the show. But, let’s back up just a little bit and run through what orchestration is.

Orchestration is the process of ensuring applications run where they are expected to, and that they can handle the desired workload. For a typical web app, this may mean a load balancer running on a machine in a public subnet, and one or more back-end applications running on private subnets. This may be enhanced by one or more databases, in-memory cache systems, or message queues. The starting point here is to have a Kubernetes cluster. Minikube serves as one for experimentation purposes. When you move to production, set up a proper cluster or use a managed offering such as GKE on Google Cloud or EKS on AWS. These are great for the majority of use cases.
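As a minimal sketch of that shape, a Kubernetes Deployment for a back-end application, exposed through a load-balancer Service (the image name and ports are placeholders), could look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3                # three identical stateless pods
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: example.com/backend:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend-lb
spec:
  type: LoadBalancer         # public entry point
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
```

Scaling the workload is then a matter of changing the replica count; Kubernetes handles placement and load distribution.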

But Which Is Better?

Kubernetes and Terraform are both capable of addressing orchestration and scalability. Kubernetes relies on Docker containers, so for DevOps teams that have yet to containerize their applications, it adds a bit of preliminary effort. Terraform is suited to any kind of workload (including legacy workloads), as it operates at what can be seen as the hardware level.

Ultimately, the answer isn’t a simple matter of either/or. Earlier we listed some Terraform providers; among them is a Kubernetes provider, which allows for configuration of things such as persistent volumes. Should you decide to go the managed route, it’s possible to set up your AWS or GCP account to enable Kubernetes-as-a-Service.
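For example, the Terraform Kubernetes provider can declare a persistent volume alongside the rest of your infrastructure (the NFS server address and path here are placeholders):

```
provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "kubernetes_persistent_volume" "shared" {
  metadata {
    name = "shared-data"
  }
  spec {
    capacity = {
      storage = "10Gi"
    }
    access_modes = ["ReadWriteMany"]
    persistent_volume_source {
      nfs {
        server = "10.0.0.5"        # placeholder NFS server
        path   = "/exports/shared"
      }
    }
  }
}
```

This keeps cluster-level resources in the same version-controlled codebase as the infrastructure beneath them.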

Perhaps most interesting is the case of specific hardware requirements. Need a particular GPU for certain workloads? No problem at all! Use Terraform to provision it by selecting the appropriate instance type, then set up the cluster and orchestrate the workload using Kubernetes. Consider it the best of both worlds!
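Sketching that combination on AWS, the GPU side is just an instance type in Terraform (the AMI variable and instance type are illustrative):

```
resource "aws_instance" "gpu_node" {
  ami           = var.gpu_ami_id   # e.g. an AMI with NVIDIA drivers preinstalled
  instance_type = "p3.2xlarge"     # GPU-backed instance type

  tags = {
    Role = "kubernetes-gpu-worker"
  }
}
```

Once such a node joins the cluster, Kubernetes can target it with node selectors or GPU resource requests.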

Further Steps

So, these solutions are conceptually pretty cool, right? You are likely still weighing the practicalities, and wondering whether it’s worth migrating to these solutions. If migration or scalability aren’t concerns for you, the benefits may not outweigh the investment. There are more possibilities, though, once you begin to treat more of your resources as code.

Take, for example, Continuous Integration and Continuous Delivery. Just as you can test your application code, you can also test infrastructure, deployment, or orchestration code. This enables you to catch bugs as soon as they are introduced, and eliminate downtime due to broken deployments. Save yourself the time and energy of hunting for bugs in the middle of the night!
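As a sketch, a GitLab CI job that checks Terraform code on every push (assuming the `hashicorp/terraform` image is available to your runners) might look like:

```yaml
validate-infrastructure:
  image: hashicorp/terraform:light
  script:
    - terraform init -backend=false   # no state needed just to validate
    - terraform fmt -check            # fail on unformatted files
    - terraform validate              # catch syntax and reference errors
```

Broken infrastructure code then fails the pipeline before it ever reaches an environment.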

Security audits become a breeze (relatively speaking) as well. Rather than logging in to individual machines, reading logs, running port scans, and poking at production systems, you can read the Terraform files instead. Also pertinent to security, Immutable Infrastructure helps here: because the entire infrastructure can be regenerated on demand, you can run your applications on read-only file systems. Upgrading an application means deploying new instances, and a read-only file system is one less thing for attackers to tamper with.
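In Kubernetes terms, the read-only file system is a one-line pod setting (the image name is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
    - name: app
      image: example.com/app:1.0        # placeholder image
      securityContext:
        readOnlyRootFilesystem: true    # container cannot write to its own filesystem
```

Any path the application genuinely needs to write to can be mounted as an explicit volume, keeping the writable surface deliberate and small.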

If service orchestration is of interest to you and your organization, read more about how you can leverage both Kubernetes and Terraform (amongst others) with Cloudify. The future of business is in the cloud, and Cloudify can help you to embrace the change. You can even take a deeper dive into both Terraform and Kubernetes via the Cloudify ‘Tech Talk’ podcast. 

 



