Provision EKS clusters with Terraform and Cloudify

In the previous two posts, we discussed the different options for running Terraform on Cloudify Manager, and how to create a blueprint and make it generic.

In this article, we will focus on how to provision a Kubernetes (EKS) cluster.

https://medium.com/@alexmolev/running-terraform-with-cloudify-ed3e998ae73d

https://medium.com/@alexmolev/running-terraform-with-cloudify-part-2-5279e70374ff

What you’ll learn

In this article, we will walk through an example of provisioning an EKS cluster from a Terraform template on Cloudify Manager.

Prerequisites

You’ll need to have the following setup on your development machine.

Git — To work with our tutorial examples

Python 3.7 — To run Cloudify CLI (cfy)

Docker — To run Cloudify Manager container

AWS access key ID and secret access key with permissions to provision an EKS cluster. The AWS documentation explains how to create them.
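If you happen to have the AWS CLI installed (it isn't required for this tutorial), a quick sanity check that your keys are valid is to call STS with them:

$ AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_TO_AWS AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY_TO_AWS aws sts get-caller-identity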

What is Cloudify?

Cloudify is an open source, multi-cloud orchestration platform featuring a unique ‘Environment as a Service’ technology that has the power to connect, automate and manage new & existing infrastructure and networking environments of entire application pipelines. Using Cloudify, DevOps teams (& infrastructure & operations groups) can effortlessly make a swift transition to public cloud and cloud-native architecture, and have a consistent way to manage all private & public environments. Cloudify is driven by a multidisciplinary team of industry experts and hero-talent. cloudify.co

Let’s Start

How to install Cloudify locally?

Running Cloudify Manager locally is easy: install Docker on your machine and run the Cloudify Manager Community edition container on it.

To start the Cloudify Manager Community edition, simply run the following command. Note that it can take a few minutes until the application is fully up and running.

$ sudo docker run --name cfy_manager_local -d --restart unless-stopped -v /sys/fs/cgroup:/sys/fs/cgroup:ro --tmpfs /run --tmpfs /run/lock --security-opt seccomp=unconfined --cap-add SYS_ADMIN -p 80:80 -p 8000:8000 cloudifyplatform/community-cloudify-manager-aio:6.2.0

You'll need to wait a couple of minutes until the deployment completes; you can watch the logs with Docker:

$ docker ps # find the container ID (cloudify_container_id below)

$ docker logs cloudify_container_id
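If you prefer a scriptable readiness check over watching the logs, you can poll the manager's REST API until it responds. A minimal sketch, assuming the default admin:admin credentials and the default_tenant tenant:

$ until curl -sf -u admin:admin -H "Tenant: default_tenant" http://localhost/api/v3.1/status >/dev/null; do sleep 10; done; echo "Manager is up"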

Congratulations! You can now browse to http://localhost and log in with the default credentials admin:admin.

To install Cloudify on a Kubernetes cluster

Read this documentation:

https://github.com/cloudify-cosmo/cloudify-helm

It can be as simple as installing this Helm chart:

$ helm repo add cloudify-helm https://cloudify-cosmo.github.io/cloudify-helm

$ helm install cloudify-manager-aio cloudify-helm/cloudify-manager-aio
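If your cluster has no external load balancer, you can reach the manager UI through a port-forward. A sketch; the service name below assumes the default name created by this release, so check the actual name in your cluster first:

$ kubectl get svc # find the manager service name
$ kubectl port-forward svc/cloudify-manager-aio 8080:80
# then browse to http://localhost:8080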

Cloudify CLI installation

You have two ways to work with Cloudify Manager: the UI or the CLI. In this article, we will use the CLI.

To install Cloudify CLI run the following command:

$ pip install cloudify==6.1.0

Configure the CLI

$ cfy profiles use localhost -u admin -p admin

$ cfy profiles set --manager-tenant default_tenant
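You can verify that the CLI is talking to the manager before moving on:

$ cfy status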

Installing Plugins

Before we start, we need to install the AWS and Utilities plugins.

To keep things simple, let's upload the whole bundle of supported plugins:

$ cfy plugins bundle-upload
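When the upload finishes, confirm the plugins are available:

$ cfy plugins list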

Installing Git

Our blueprint references a public git repository, so Git must be installed inside the Cloudify Manager container:

$ docker exec -it cloudify_container_id bash

$ sudo yum install git -y

Creating secrets for aws_access_key_id/aws_secret_access_key

$ cfy secrets create aws_access_key_id --secret-string YOUR_ACCESS_KEY_TO_AWS

$ cfy secrets create aws_secret_access_key --secret-string YOUR_SECRET_ACCESS_KEY_TO_AWS
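The secret values are stored on the manager and referenced from blueprints by key, so they never appear in your source files. You can list the stored keys with:

$ cfy secrets list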

Terraform template to provision the EKS cluster

https://github.com/cloudify-community/cloudify-tutorial/blob/master/terraform/tf_11_eks_provision/template/main.tf

The Terraform template consists of two files: main.tf and outputs.tf.

For AWS authentication we use the aws_access_key_id and aws_secret_access_key secrets, which are passed into the template as the access_key and secret_key variables.

EKS

# access_key, secret_key and aws_region are supplied by Cloudify
# as Terraform variables (see inputs.yaml below)
provider "aws" {
  region     = var.aws_region
  access_key = var.access_key
  secret_key = var.secret_key
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}

data "aws_availability_zones" "available" {}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.11.0"

  name                 = "k8s-${var.cluster_name}-vpc"
  cidr                 = "172.16.0.0/16"
  azs                  = data.aws_availability_zones.available.names
  private_subnets      = ["172.16.1.0/24", "172.16.2.0/24"]
  public_subnets       = ["172.16.3.0/24", "172.16.4.0/24"]
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                    = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"           = "1"
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "17.24.0"

  cluster_name    = var.cluster_name
  cluster_version = "1.21"
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id

  write_kubeconfig = true

  worker_groups = [
    {
      name                 = "ng-medium"
      instance_type        = "t3.medium"
      asg_desired_capacity = 1

      tags = [{
        key                 = "instance-type"
        value               = "on-demand-medium"
        propagate_at_launch = true
      }, {
        key                 = "os-type"
        value               = "linux"
        propagate_at_launch = true
      }]
    },
  ]
}
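The repository's outputs.tf is not reproduced here; as a rough illustration, such a file typically exposes the cluster's identifiers from the module above (the output names below are illustrative, not necessarily the repo's actual ones):

output "cluster_id" {
  value = module.eks.cluster_id
}

output "cluster_endpoint" {
  value = data.aws_eks_cluster.cluster.endpoint
}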

Run the blueprint to provision EKS with Terraform and Cloudify Manager

inputs.yaml

module_source: https://github.com/cloudify-community/cloudify-tutorial.git

terraform_template_location: terraform/tf_11_eks_provision/template

variables:
  access_key: { get_secret: aws_access_key_id }
  secret_key: { get_secret: aws_secret_access_key }
  aws_region: eu-west-3

The inputs file pulls aws_access_key_id and aws_secret_access_key from the secrets we created earlier and passes them, along with the region, to the blueprint that runs Terraform.

module_source is a reference to the git repository that contains the Terraform template.

terraform_template_location is the relative path to the Terraform template inside that repository.
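For context, the blueprint that consumes these inputs wires them into the Terraform plugin's Module node type. The real blueprint lives in the repository linked below; the following is only a simplified sketch of that pattern (node names and the import lines are illustrative):

tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://www.getcloudify.org/spec/cloudify/6.2.0/types.yaml
  - plugin:cloudify-terraform-plugin

inputs:
  module_source:
    type: string
  terraform_template_location:
    type: string
  variables:
    type: dict

node_templates:
  terraform:
    type: cloudify.nodes.terraform

  eks_module:
    type: cloudify.nodes.terraform.Module
    properties:
      resource_config:
        source:
          location: { get_input: module_source }
        source_path: { get_input: terraform_template_location }
        variables: { get_input: variables }
    relationships:
      - target: terraform
        type: cloudify.terraform.relationships.run_on_host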

Execute the blueprint on your local Cloudify Manager using the CLI:

https://github.com/cloudify-community/cloudify-tutorial/blob/master/terraform/tf_11_eks_provision/inputs.yaml

$ cfy apply -i inputs.yaml
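cfy apply uploads the blueprint, creates a deployment, and runs the install workflow in a single step. While it runs you can follow the execution, and once it completes you can inspect the deployment's Terraform outputs (use the deployments list to find the deployment ID):

$ cfy executions list
$ cfy deployments list
$ cfy deployments outputs <deployment-id>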

In the AWS console, make sure the selected region is ‘eu-west-3’, then go to EKS -> Clusters to see the new cluster.

Success!

You can find all the code (blueprint, inputs.yaml, and Terraform templates) in this public repository: https://github.com/cloudify-community/cloudify-tutorial/tree/master/terraform/tf_11_eks_provision

Thank you for reading this guide and trying our product!
