Podcast | Episode Two : Kubernetes & More

The second episode of the Cloudify Tech Talk podcast takes a plunge into Kubernetes and of course examines how it fits into the Cloudify world. Click HERE to see Cloudify and Kubernetes integration in action (supporting material for this podcast).




Guys, welcome to episode two of the Cloudify Tech Talk podcast. Today we're going to take a very cool deep dive into Kubernetes, and in particular how it relates to Cloudify. We have a regular speaker and my co-host, and some very special guest speakers waiting in the wings. And just a quick note before we get started: any supporting material will be located at cloudify.co/podcast. So, without further ado, I'm going to hand over to my co-host, Ilan Adler, who will take one step forward and introduce our speakers for today.

Thanks so much, Jonny. Welcome everyone. Today we're going to be discussing one of the most famous topics around: Kubernetes. We'll discuss Kubernetes adoption, adoption patterns, what we're doing with Cloudify and Kubernetes, and so forth. I'll quickly introduce our panellists, which include Nati Shalom, the CTO and founder of Cloudify; Alex Mueller, the Cloudify director of R&D; Tramell, the duke of product; and Josh Cornutt, who is one of our senior solution architects. I want to hand off to Nati to start off with a couple of Kubernetes-specific questions. Nati, Kubernetes continues to be more and more adopted worldwide, in organizations all over. What's making Kubernetes so successful? And to add on to that, another question that I had: what's the way forward for Kubernetes? Will it become another OpenStack, or will it outlast it, so to say?

Having been active on OpenStack, I think this is a very good comparison, because there are a couple of metrics that you can measure on a project to see whether it's going to survive the curve or not, cross the chasm, as it is called. I think one of the issues with OpenStack back then was that it was originally created as an alternative to Amazon, an open source alternative to Amazon. And the lack of public cloud adoption was clearly a dead end in many ways. So, it created some sort of a platform that didn't have enough support, definitely no support from the public clouds, and the other, non-public-cloud vendors were not really able to push something like that as an alternative to public cloud. I think that's where OpenStack started to lose its ground in many ways. The case with Kubernetes is very different. It was pushed by a public cloud provider, which was Google, initially, even though OpenStack started at Rackspace, and we all know where Rackspace is today; so that's part of the answer. And Google had adopted it as its own platform, so it was kind of [Inaudible 03:14] or whatever. That led to a faster maturity cycle of the platform itself, because it was those big giants that were actually using it, serving it and providing a solution around it. Later it was adopted by Amazon and Azure, and I think that put a very clear mark that this is here to stay for a while, and you can see from the adoption statistics that it's pretty solid in that regard. So, I think it's going to be there for a long while. I do think that it's not going to be a solution for everything. Like anything new, there is this golden hammer kind of syndrome where something shiny comes into the world and everyone tries to bring everything into it. So, I think that realization will happen more than it has today. But it's here to stay for a while.
So, we’re going to see it growing in the next couple of years for sure. What’s coming up next? You never know, but I think it’s going to be there for a while.

Excellent. In terms of that, I want to expand on some of these Kubernetes adoption patterns that we're seeing. One of the sources here is the recent CNCF survey, the 2019 survey of Kubernetes, and I want to hand that off to Nati and Josh, who have quite extensive experience. What are these adoption patterns? What do they look like according to the survey, and what are we learning from it about the current state of Kubernetes?

Yeah. So, I'll start, and Josh, feel free to join. I think what we're seeing, also in the survey itself, and it's quite interesting, is that Kubernetes has become much more widely adopted on public cloud. The survey shows that almost 60% of Kubernetes adoption is now running on public cloud, which is interesting; it wasn't really the case a couple of years ago. Public cloud still remains dominant, and hybrid cloud as well. If we somehow group hybrid cloud and private cloud together, we'll see that using both private and public is still the bigger portion, probably a bigger ratio than public cloud only. So, what I would take from that is that there is more than one cloud, public or private, and most organizations have a combination of them, if I read the survey correctly. Now, Josh.

So, yeah, I mean some of these numbers don't quite add up to me, or at least they're not a fair comparison in my opinion. We're talking about companies and their adoption of Kubernetes and containers in general, so it's really hard to pit something like Docker Desktop's adoption against something like Amazon EKS, right? They're just not the same thing. One is geared towards the consumer space, daily users, maybe some dev/test type of stuff, while things like GKE or OpenShift are really geared towards enterprise production environments. So, some of these seem a little skewed to me. Also, without seeing Rancher on here, or some of the other ones, it makes me think the comparison is just not as correct as it could be. We're also talking about some of these tools on the graph here, like kops and kubeadm and minikube; some of these are deployment tools, and some are purely for dev/test and definitely not for production. And with this being a 2019 survey, things like Red Hat OpenShift are lower on the chart than they probably are now, with the release of OpenShift 4, the new OperatorHub, and the easier deployment tooling and management. So, I would definitely expect these graphs to be extremely fluid over the next year or two.

Some of the other things that I wanted to discuss came up with these. In terms of what other clusters are out there, and most specifically, I think a common question is: how typical is it actually for an organization to have more than one Kubernetes cluster? And are you guys seeing any clear-cut combination, public, private, managed service versus do-it-yourself Kubernetes platform, that's out in the lead based on this specific survey?

So, personally, I work with Red Hat a lot, so just by design I see a lot of OpenShift in environments. I actually don't work with a lot of companies that are using a hybrid approach. I don't see a lot of companies that run a big production cluster as well as using a cloud or two worth of services. Usually the ones I see are a combination of, you know, Amazon, Azure and Google, or they have some on-prem stuff, such as straight-up vanilla Kubernetes, or they're doing Rancher and OpenShift for their different environments. So, I personally don't see a lot of this hybrid approach where you have a local cluster and a cloud, or a couple of cloud services, of Kubernetes.

– Nati, do you want to add anything about what we're seeing with Cloudify users in this space? Because we have a couple of users now that are also on OpenShift. Anything you want to add there?

– Yeah, we definitely see, especially in big enterprises, OpenShift, as Josh mentioned, being the main leader in the private cloud space, and according to the statistics as well, we're seeing that adoption in public cloud has become pretty high. What surprises me is to see that Amazon is kind of taking the lead over GKE, which is Google Kubernetes Engine, which is kind of surprising. They obviously started late, so it has been a nice catch-up game with Google in that regard. So, that's one surprise. The other one is that if you look at the report, you can see that Azure has both its managed service and also its equivalent to an OpenShift kind of platform, which is called AKS Engine. And if we sum it up, we see that they're covering quite an amount of the workload; if you sum it up, they could be in the lead. So, it's not that easy to read, but I think the picture is that the public cloud vendors clearly hold the lead, and on the private cloud, OpenShift is pretty much in the lead. That's kind of [Inaudible 10:13]

– Right. Yeah. And the organizations we're talking to are usually aligned with that. That's actually a good lead-in for the next question, the million-dollar question if you will: when to use, or not to use, Kubernetes. This is obviously coming up a lot, especially with a lot of users that we're talking to who have lots of legacy VM-based environments and want to transition to cloud native. So, if you guys want to address that.

-Yeah, so my take on that is, as I said earlier: I think Kubernetes, though often treated as a generic platform, was originally built to address large-scale deployments. That's what it was meant to be, and the entire architecture was really built for that. Not every application actually fits into that definition. What we're seeing is a lot of those horror stories about people moving from a monolithic application into microservices and Kubernetes, and finding out that they moved from something that was big and simple in many ways to something that is now spread across many servers, and they don't have enough tooling even to manage that. So, if you do it that way, you're probably going to fail, or at least experience a huge amount of inefficiency and frustration in the process, until you actually get to see the benefit. There is an interesting blog that I was following at the time [Inaudible 11:50] saying that if you're moving from monolithic to microservices, initially you're going to pay a huge cost in user experience and complexity and all those types of things, and it will take a good amount of time and effort until you actually get to see the benefit. So, I'm not talking about greenfield, obviously; I'm talking about moving from monolithic to Kubernetes, and saying that not everything needs to move from this monolithic model to Kubernetes. Interestingly enough, in the case of Cloudify we keep the two models, because we do see a case in which having a single container that runs everything, we call it the all-in-one, is still useful. It's much easier to run: you download it, you run it, you don't have to have anything depending on it. And for full cluster production, we are moving towards these managed clusters that you can run on Kubernetes, where it makes sense to run it as microservices.
So, we kind of evolved towards this dual model, not just saying everything needs to be microservices and Kubernetes now, but actually providing the two flavors as ways to go in terms of the user experience. And again, I'll hand over to Josh and Alex, who have experience with that with other customers. What have you seen so far?

So, the customers that I've seen have the most success with moving over to a big focus on Kubernetes and containers are the ones that actually have a need for it, right? It's not just an exercise in taking a VM and stuffing it into a container; that benefits pretty much no one. The thing that's really important is if they have a service that needs to scale, right, where it needs to scale down to almost zero or even zero, like if they just need to turn it off at some point. That's where things like function-as-a-service and this quote-unquote serverless come into play. These are good cases. And if their application is already built around the concepts of microservices, well, putting those in containers can save the company a lot of headache, by making things immutable and just integrating into CI/CD, as opposed to having a bunch of pet VMs to maintain and patch and all that kind of stuff.

– And that's something, I think, we can actually see: Kubernetes provides lots of different interesting solutions. Whether companies are looking at it as a fast way to plug into CI/CD, so that we can deploy constantly and very easily, or whether it's the scaling functionality, so that on demand we can actually scale the application. So, it's very natural for companies to look at it from the top level and see: okay, we have this functionality and we want to address different topics, for example, as Nati says, analytics or machine learning and things like that. What I can see from all kinds of sources is that those are the new areas for Kubernetes. Initially Kubernetes was more for regular applications, like web applications, and then there came another challenge, stateful applications such as a database or, for example, Kafka. That was another challenge the Kubernetes community had to solve, and it added persistent volumes so those could be kept and scaled. And the next thing I can see now, at least, the hot topic, is the move towards all kinds of machine learning. A lot of the community is actually trying to work out the best way to solve those challenges with Kubernetes clusters and a microservices solution that is very scalable, so as to keep this functionality. And it's very interesting, for the Kubernetes community, whether they will actually manage to solve it and how efficient it will be.

– Okay. So, thank you guys. I think one of the key lessons from this is that we're getting a lot of questions like: what's the difference [Inaudible 16:21], should I use Kubernetes versus Terraform? Or should I go to serverless now, which is becoming more and more popular? And Kubernetes versus VMs and SaaS, et cetera, et cetera. What's leading to all these questions? I think our main point, from what we're seeing, is, as Nati mentioned before, no golden hammer. Do you want to add anything on that, Nati?

-Yeah, of course. I think this is, again, part of the regular confusion that we're seeing in the industry, and not just with Kubernetes, with anything. We've seen it in the past with Docker. We've seen it in the past with OpenStack. When OpenStack started, if you remember, everyone was asking: why do I need anything else if I have OpenStack? And similarly with Amazon. So, out of the gate I would say that the world is going to be hybrid. That's our philosophy; at least, that's my belief: there's not going to be one platform to win them all. And I think if you look at the workload spread, it's actually getting more and more spread out if you stretch it over a trend line. It's not like the world is consolidating into something. Maybe it's consolidating, but it's spreading as well at the same time. And we can see that in some elements of the survey itself. The other part which I think is adding to that confusion is this idea of a transformation: like you're moving from one thing to another, and it's one big step, and then you're in a safe land. This is obviously not the case; it's a continuous transformation. You never really stop transforming. And when you move, it's not just moving from your legacy application into [Inaudible 18:08] and that's it. You are moving from controlling your application to not controlling the application, like consuming SaaS services, for example. So, that part is also taking place: you are taking less and less control over software that you used to manage, and consuming things that someone else is managing for you. The other thing is serverless, for example, which Amazon came up with first, but it's starting to get higher and higher adoption. And still, if you look at the survey, we see that the adoption of serverless is through a managed service, not necessarily by running it in your own Kubernetes.
So, yeah, you could run serverless within Kubernetes, but most people, when they use serverless, actually want to not manage anything. So, we're seeing high adoption of managed serverless, and that means you're not going to have just one Kubernetes cluster that will manage serverless and non-serverless; you need to somehow interface between the two. And I think if we look at the other orchestrators, like Terraform and Ansible and others, it becomes clear today that there is no one platform to win them all. Even around Terraform adoption, there were a lot of questions about the comparison: why should I use Kubernetes if I'm using Terraform? Why should I use it? We've seen it also with Cloudify. I think there is this realization that each tool is still good for its own purpose and for its own domain, as it is called today, and the reality today looks more like multi-domain orchestration. That's, I would say, the second or third dimension of my description. The fourth dimension, I think we've discussed it: there is no one Kubernetes cluster, there are many Kubernetes clusters. So, even in that case, we'll see different variations, different flavors, different types of innovation that will be coming for the edge, for the private cloud, for the public cloud, for machine learning. As you said, I'm sure there are going to be Kubernetes platforms that come with a special stack for that. So, we need to be ready to deal with this diversity, and not assume that our world will be perfect, and then face the disillusionment when we realize that this is not really the case. Maybe Josh and Alex or Tramell, do you want to add anything?


-Yeah, I will gladly add. I think one of the things we can see, if you're comparing Kubernetes, Terraform, Ansible, serverless, those are like [Inaudible 20:44] tools, and each tool has its own benefits and disadvantages, right? And I think when you're looking at each tool, it's very hard to say which one will win over the other. But once you're approaching some challenge to solve, you need to find the right tool for your challenge, and I think each one addresses it differently, right? For example, if you're looking at Kubernetes versus serverless, there are pros and cons: serverless means no DevOps, while the other provides great tooling for deploying and scaling. If we're looking at Kubernetes versus Ansible, not all organizations even have the right application, the right architecture, to go to Kubernetes in the first place, so they need some other tooling. And it's not very easy to move from one architecture to another. So, we will still have, at least that's my assumption, for a while, those tools leading the industry, as each one tries to solve a different challenge or perspective.

-Thank you guys so much for that feedback on Kubernetes. I want to move ahead to more of the Cloudify integration. Basically, the main question here is: how does Cloudify integrate with Kubernetes? What are the different areas of Cloudify's integration with Kubernetes, and obviously, what is it good for? Alex, Tramell and Nati, feel free to answer that. Alex has been working a lot on running Cloudify as a Kubernetes service. Can you enlighten us on the work that's going on there, how we're approaching it, and what's going on?

-So, regarding [inaudible 23:10] and running Cloudify as a Kubernetes service: we already have a simple all-in-one solution in a container that you can deploy on top of Kubernetes, with Cloudify running inside. So, it gives a very simple solution. We are still moving forward, as our all-in-one solution is still a stateful container, and we are moving towards a more scalable option, where we can provision and scale up a couple of containers that will actually share the work, so we can distribute the load between them and scale as much as possible. That's on our roadmap, and there is a longer way we're going towards native microservices, where we can scale each component easily, and everything will just communicate smoothly and distribute the load as it comes.
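As a very rough sketch of what the all-in-one model Alex describes could look like on Kubernetes, a single stateful instance might be expressed along these lines. The image name, port and storage size below are placeholders I've invented for illustration, not Cloudify's actual published artifacts:

```yaml
# Hypothetical sketch: an all-in-one manager as a single stateful pod.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cloudify-manager-aio
spec:
  serviceName: cloudify-manager
  replicas: 1                  # all-in-one: one stateful instance, no scale-out yet
  selector:
    matchLabels:
      app: cloudify-manager
  template:
    metadata:
      labels:
        app: cloudify-manager
    spec:
      containers:
        - name: manager
          image: example/cloudify-manager-aio:latest   # placeholder image name
          ports:
            - containerPort: 443                       # REST API / UI
          volumeMounts:
            - name: manager-data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: manager-data     # persists the container's state across restarts
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

The `StatefulSet` with a volume claim reflects the "still a stateful container" point; the microservices direction discussed next would break this into independently scalable Deployments.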

– And can you touch on the privileged versus non-privileged thing, and what we're doing in that regard?

– So, currently, our all-in-one container solution requires the container to run with privileged access, with root access on the node it's running on. Of course, for Kubernetes, from a security perspective, we understand moving forward that it's not the way to go. We already have a client with whom we did some initial work to actually migrate the container to non-privileged, and that work is already done. On our roadmap in the short term is to make the all-in-one container non-privileged and available in our next release.
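For context, the privileged/non-privileged distinction Alex describes is controlled by the pod's security context in Kubernetes. A generic non-privileged configuration (a sketch, not Cloudify's actual manifest) looks roughly like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-non-privileged
spec:
  containers:
    - name: app
      image: example/app:latest          # placeholder image
      securityContext:
        privileged: false                # no direct access to host devices
        runAsNonRoot: true               # refuse to start if the image would run as root
        runAsUser: 1000                  # an unprivileged UID
        allowPrivilegeEscalation: false  # block setuid-style escalation
```

Clusters with restrictive pod security policies will reject privileged containers outright, which is why moving the all-in-one image off root access matters for broad Kubernetes compatibility.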

– Yes. So, Tramell, you've been working with Kubernetes for a while with Cloudify. Can you explain some of the history of Cloudify with Kubernetes, the current integrations, how it works, what it's good for, and how we're continuing to enable that integration and allow our users to do more and more things with Kubernetes?

– The history of Cloudify with Kubernetes goes back three or four years, when we developed our first Kubernetes plugin. The goal of the Kubernetes plugin is to allow users to manage Kubernetes resources from their Cloudify blueprints. So, for example, a user might want to expose a service in Kubernetes as part of a workflow that they are running. In order to test the plugin, we decided to support a method of deploying Kubernetes clusters, which we started off just by using kubebot at the time to bring up Kubernetes clusters. And then we decided that the most interesting way to use Kubernetes with Cloudify might be to have Cloudify be the infrastructure manager underneath Kubernetes. So, we developed a Cloudify provider for Kubernetes that basically enabled users to define generic infrastructure blueprints that could then provide Kubernetes with some of its cloud functionality. For example, defining in a Cloudify blueprint either Amazon storage or OpenStack storage, then creating, let's say, a deployment called Kubernetes storage, and then just telling Kubernetes that whenever it needs storage it should go to the Cloudify storage deployment, and it will provision storage that you can use; the same goes for load balancers, or additional nodes in your cluster, and things like that. So, it was a pretty cool tool. But over the last couple of years, we've really seen people move from unmanaged Kubernetes to managed Kubernetes, with more people looking to GKE or AKS or EKS or OpenShift to provide them with a managed Kubernetes service. And so that integration point has kind of been abandoned. Today we still support users deploying their own Kubernetes clusters, but it's usually to one of these managed services.
The exception would be if a person wants to just install a bare-metal Kubernetes cluster in order to run tests in one of the clouds; then we still offer an unmanaged Kubernetes infrastructure solution. But that's that. So, that was the Kubernetes provider and the Kubernetes blueprint. Today our Kubernetes plugin has definitely extended in functionality. The goal of the Kubernetes plugin is basically to allow users to bring their existing Kubernetes resource definitions, their existing Kubernetes templates, and just drop them inside a blueprint and start running their blueprints. So, this is what we focus on with Kubernetes today.
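As a hedged illustration of that "drop your existing templates into a blueprint" idea, a minimal blueprint using the Kubernetes plugin might look something like the following. The node type and property names here are simplified from memory and should be checked against the current plugin documentation:

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - plugin:cloudify-kubernetes-plugin

node_templates:
  nginx_app:
    # An existing Kubernetes resource definition, referenced as-is
    # from the blueprint archive.
    type: cloudify.kubernetes.resources.FileDefinedResource
    properties:
      client_config:
        configuration:
          # kubeconfig content stored as a Cloudify secret (illustrative)
          file_content: { get_secret: kubernetes_config }
      file:
        resource_path: resources/nginx-deployment.yaml   # your existing K8s YAML
```

Running the blueprint's install workflow would then create the deployment in the target cluster, and uninstall would delete it; the Kubernetes YAML itself stays unchanged.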

-Excellent. And I would add that there are a couple of other initiatives, some of which we're going to cover today and some we're not. One of them is the CI/CD integration, which Josh will describe [Inaudible 29:45], what we're doing in that space. The other one, which Alex will cover, is how we're making the Cloudify manager itself more cloud native and more integrated within Kubernetes. It's already there to some degree, but there's more work to be done there, and he'll explain the roadmap and what we're specifically focusing on. There is other, not directly related, stuff that we've been working on on the innovation side of things: a feature called the service broker, which is a way to run Cloudify as a broker that extends Kubernetes in such a way that you can use Cloudify to manage non-Kubernetes workloads. We've also done, and we announced it last year, performance optimization, basically the ability to run intensive workloads on Kubernetes, through a collaboration with F5 and Intel, showing how you can use hardware acceleration, like offloading SSL encryption onto the hardware. And there is also work that we're looking into right now, which is the integration with [Inaudible 30:50]. So, there are a lot of efforts in those activities within Cloudify, and outwards, for better integration with Kubernetes. And I think, Ilan, this leads us to the next topic here.

-Yeah, thanks for that, Nati. Indeed. So, what's the deal? Is Cloudify trying to be just yet another Kubernetes distro? Let's start with that.

-Yes, maybe I'll jump into that, Alex. I think the answer is pretty clear: it's no. That's a pretty easy answer. And I think Tramell also touched on the history and the evolution of that. We're more into integrating with the other distros than becoming a distro ourselves. The main value that we're tackling, or trying to address, is really the integration: allowing users to plug into different Kubernetes clusters, and allowing users to create end-to-end automation that involves Kubernetes and non-Kubernetes workloads, through the list of options that I think we've touched on. That's where Cloudify is focused, and less on managing Kubernetes, on becoming an equivalent to Rancher or OpenShift or whatever. That's not our focus.

-What platforms are currently supported by Cloudify? We already touched on what's changing in the support, but, for example, what are the platforms, and what happens if a user comes to us and tells us: okay, I already have existing Kubernetes clusters. How exactly does that work, and is that supported by Cloudify?

-To give you a quick answer, Cloudify supports GKE, EKS, AKS and [Inaudible 32:27]. We can also deploy some clusters ourselves, and we just added support for OpenShift. The primary issue in integrating with these managed services is the authentication options that each API demands, and so we have special authentication implementations for each of those.
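To illustrate what those per-platform authentication differences can look like in practice, here is a purely illustrative, kubeconfig-style client configuration for a token-authenticated managed cluster. The field names are assumptions for the sake of the example, not an exact plugin reference:

```yaml
# Illustrative only: a bearer-token connection to a managed cluster.
client_config:
  configuration:
    api_options:
      host: https://203.0.113.10            # placeholder API server endpoint
      api_key: { get_secret: k8s_token }    # bearer token kept as a Cloudify secret
      verify_ssl: true
  # A GKE cluster would instead authenticate via a Google service-account
  # key and an OAuth token exchange, e.g. (hypothetical):
  # authentication:
  #   gcp_service_account: { get_secret: gcp_credentials }
```

EKS adds its own wrinkle (IAM-based token generation), which is why each managed service ends up needing its own authentication implementation rather than one shared code path.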

-So, Josh, maybe you could describe the differences between the platforms, what challenges we had, and how we address the integration between the different platforms? [Inaudible 33:05]

Sure. I think in a perfect world they would all have the same exact API request and response behavior; they would just be different flavors, but with all the same support and requirements. But that's not really the case right now. So, for example, Google has some special requirements about how you create a public service. OpenShift has extended types on top of Kubernetes that it happens to prefer, you know, when we're talking about Routes versus Ingresses. So, in order to make that transition, say you have a workload in an on-prem vanilla Kubernetes cluster and you want to take that same deployment and move it into Google or into an OpenShift cluster, there are some minor tweaks you have to do. When you're doing it by hand, it's not a big deal. But when you want to automate the process, where you're taking your workload, deploying it in one area, and then automatically changing where you're deploying it, those little nuances can be very annoying. So, what the Cloudify plugin tries to do, and what our examples show, is to smooth over some of those differences between the environments, so that you can just take your workload, deploy it here, then deploy it there, without thinking: oh, do I need a load-balancer type of service, or an ingress, or not? What are the quirks of this provider versus that provider?
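A concrete instance of the Routes-versus-Ingresses nuance Josh mentions: exposing the same service publicly is expressed as an `Ingress` on vanilla Kubernetes but idiomatically as a `Route` on OpenShift, two different resource kinds for the same intent. The hostname and service name below are made up, and the exact API versions depend on your cluster version:

```yaml
# Vanilla Kubernetes: expose the "web" service via an Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 8080
---
# OpenShift: the same intent, expressed as a Route
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: web
spec:
  host: web.example.com
  to:
    kind: Service
    name: web
  port:
    targetPort: 8080
```

Automating "deploy the same workload anywhere" means some layer has to pick the right kind per target cluster, which is the smoothing-over the plugin aims for.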

-Excellent. And we've actually been experimenting with that when we've done the [Inaudible 34:55] CI/CD work, with what that actually means and how to do it. Can you elaborate on that work?

Sure. So, you know, the demo I just did was basically taking Jenkins and taking a container service, or rather a deployment, and pushing that into a user-defined environment. The user can select between, you know, OpenShift, Azure, Google, and it's the same workload, but it gets deployed as a proper service, with proper public exposure, ingress rules and all of that. To the user, all they see is an options list: I want to deploy it in Google, or in OpenShift, right? And Cloudify takes care of any of those little nuances to make sure that it deploys correctly in the target environment.
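That options list maps naturally onto blueprint inputs. A hypothetical sketch of how the target platform choice could be modeled (the names and the workload node type are illustrative, not the demo's actual blueprint):

```yaml
inputs:
  target_platform:
    description: Which Kubernetes environment to deploy the workload into
    type: string
    default: gke
    constraints:
      - valid_values: [gke, aks, openshift]

node_templates:
  web_workload:
    type: cloudify.nodes.Root        # stand-in for the actual workload node type
    properties:
      platform: { get_input: target_platform }   # drives the per-platform tweaks
```

Jenkins (or any CI tool) would then just pass `target_platform` as a deployment input, and the platform-specific exposure rules are resolved inside the blueprint rather than in the pipeline.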

Excellent. So, this fits well, in my view, and correct me if I'm wrong, with the survey that we've seen before: that users do have a combination of clusters, more than one cluster, a combination of things like minikube for development and managed services from the public cloud. Would that fit into that, or do I need a completely separate Cloudify for each one of them? How does that work?

– No, no. You should be able to have just one Cloudify, assuming networking allows you to do such a thing. As far as actual integration with the different flavors of Kubernetes, or the different Kubernetes vendors that are out there, no, you just need one manager. And that's pretty much it. So, whether it's minikube or a Rancher deployment or OpenShift or whatever it is, you don't need a separate version of Cloudify or different support per environment.

-Right. So, that brings us back to the integration that we have in Cloudify. I know that we have two layers of integration, and I just wanted to clarify that with Tramell and Alex: the difference between the plugin we mentioned and the blueprint you also mentioned. What is each integration point, and how mutually exclusive are they? Can you elaborate on that, Tramell?

-A blueprint in Cloudify is a way of describing the infrastructure, or the resource topology, that you wish to deploy. For example, a simple topology would be an application that's contained inside a VM; that would be a very simple blueprint. In addition, inside a blueprint there is a kind of subset of the DSL that allows you to specify where certain code may be found that can execute, or implement, the topology that you've described. In order to make the user experience a little bit easier, and also to make the blueprints more powerful, we have a concept called plugins, which enables you to use custom types inside your topology whose code implementation is contained inside a plugin. And so, to bring this description back around: we have a group of Kubernetes blueprints which install a Kubernetes cluster on GKE, or AKS, or EKS, or even bare metal. Inside the blueprint we have several different resources that we've described: for example, a Kubernetes master and a Kubernetes master VM, a Kubernetes node and a Kubernetes node VM. Then we also have groups that enable us to create multiple instances of the node VM. In addition, we might include things like security groups, or networks, or NICs, or storage classes and things like that. We also have a number of nodes that expose things like the API server, or the dashboard, so that users can have a Kubernetes dashboard as soon as they're finished installing their cluster. And then, on top of that, we've also added resources that actually use the Kubernetes plugin inside of the Kubernetes [Inaudible 39:49]. So, we have a sanity test at the end of the blueprint: when you install your Kubernetes cluster, the sanity node will check that the cluster is actually working.
So, it uses the Kubernetes plugin to install and delete a Kubernetes resource, like a simple Nginx application or something like that.
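To make this a bit more concrete, here is a minimal sketch of what such a blueprint could look like. Note that the node type names, the plugin import line, and the instance count are illustrative assumptions, not exact identifiers from a specific Cloudify release:

```yaml
# Illustrative sketch only: type names and the plugin import are examples.
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://www.getcloudify.org/spec/cloudify/4.5/types.yaml
  - plugin:cloudify-kubernetes-plugin        # custom types come from plugins

node_templates:
  kubernetes_master_vm:
    type: cloudify.nodes.Compute             # the VM hosting the master

  kubernetes_master:
    type: cloudify.nodes.SoftwareComponent   # the master installed on that VM
    relationships:
      - type: cloudify.relationships.contained_in
        target: kubernetes_master_vm

  kubernetes_node_vm:
    type: cloudify.nodes.Compute

  kubernetes_node:
    type: cloudify.nodes.SoftwareComponent
    relationships:
      - type: cloudify.relationships.contained_in
        target: kubernetes_node_vm

groups:
  node_group:
    members: [kubernetes_node, kubernetes_node_vm]

policies:
  node_scaling:
    type: cloudify.policies.scaling
    properties:
      default_instances: 3                   # multiple node VM instances
    targets: [node_group]
```

The groups and scaling policy at the bottom correspond to what Tramell describes: one node definition stamped out as multiple node VM instances.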


– Something I would like to add: it's not like Cloudify will lock you into working one way with Kubernetes. You'll still use the same Kubernetes YAML files you're used to, like deployments, services, ingresses, or other resources. What the blueprints and plugins actually provide is a capability to orchestrate them. So, if you decide you don't want to use Cloudify, or you want to deploy the same files on a different infrastructure, you can swap easily, without any difficulty, and use absolutely the same files in the same way you're used to and comfortable working.

-That actually brings up an interesting point. I think one of the things that I've learned recently, I must say, is that we're not just passing the file as a static payload; we're doing some rendering before we actually send it. So maybe, Tramell, you could describe a little bit about this mechanism and why we're doing it?

-Sure. So, there are actually two ways of handling variables passed from a Cloudify blueprint over to a Kubernetes resource template. The first method is to write your Kubernetes template as a Jinja template. That means you use the Jinja templating language inside your resource for things like passwords you want to pass, secrets, versions of a particular application that you want to install, and so on. Then, inside your blueprint, you pass the path to your template file, and you also provide a dictionary of parameters that you wish to pass to Jinja. The other method, which if you know Cloudify probably sounds very familiar, is to put the entire Kubernetes resource YAML template inside the blueprint itself. If you use this method, you're actually able to use Cloudify DSL intrinsic functions inside the Kubernetes template, like get_attribute or [Inaudible43:09], and so on.
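A rough sketch of the first method might look like the following; the template path, the node type name, and the parameter names (replica_count, app_version) are made up for illustration, not taken from the plugin's documentation:

```yaml
# templates/nginx-deployment.yaml.j2
# A Kubernetes resource written as a Jinja template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: {{ replica_count }}
  selector:
    matchLabels: { app: nginx }
  template:
    metadata:
      labels: { app: nginx }
    spec:
      containers:
        - name: nginx
          image: nginx:{{ app_version }}

# --- In the blueprint, a node points at that file and supplies
# --- the dictionary of Jinja parameters (property names illustrative):
#   nginx_deployment:
#     type: cloudify.kubernetes.resources.FileDefinedResource
#     properties:
#       file:
#         resource_path: templates/nginx-deployment.yaml.j2
#         template_variables:
#           replica_count: 3
#           app_version: 1.19.0
```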

-Excellent. So, that means you can render the file before it's actually sent, passing in context information that can also be derived from services outside of Kubernetes, because Cloudify covers those other services. I can take the IP address of a VM or a database URL from an external resource, pass it in, and it will be rendered into the template and then shipped to the Kubernetes cluster. Right, Tramell?
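A sketch of the second method, with the resource definition inlined in the blueprint so that intrinsic functions can pull in values from non-Kubernetes nodes; the node names and the type name here are hypothetical:

```yaml
# Fragment of a blueprint's node_templates section (names illustrative).
  app_config:
    type: cloudify.kubernetes.resources.ResourceDefinition
    properties:
      definition:
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: app-config
        data:
          # Rendered by Cloudify before being sent to the cluster: the
          # host value comes from a VM node elsewhere in the same
          # blueprint, via the get_attribute intrinsic function.
          db_host: { get_attribute: [ database_vm, ip ] }
```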

– Yeah.

-So, what if I want to update that file? Let's say I've passed in a template through a file and I've since updated it; how does that work?

– So, we have a workflow that enables you to update the template file, or rather to reload the template file after it's already been deployed; it's basically the equivalent of using kubectl apply. We have a workflow that enables you to run the equivalent of kubectl [Inaudible44:11].
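For example, after editing the template below to bump the image version (the version change is invented for illustration), re-running that reload workflow would push the change to the cluster, much like a kubectl apply of the updated file:

```yaml
# nginx-deployment.yaml -- edited in place; reloading it through the
# workflow is roughly equivalent to `kubectl apply -f nginx-deployment.yaml`.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels: { app: nginx }
  template:
    metadata:
      labels: { app: nginx }
    spec:
      containers:
        - name: nginx
          image: nginx:1.21.0   # bumped from 1.19.0; the reload applies this
```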

-Right. And that will in turn update the service, and it will continue working as in the previous case. What were the latest updates in the latest plugin? I mean, what new features were actually released?

– So, we improved the user experience around this reload-resource story. We also added, or improved, our support for resource status verification. We added a bunch of new resources, and we maintained our support for the most recent versions of Kubernetes.

– Great. So, I think that brings us to the end, Ilan. Maybe you can do a wrap-up.

– Yes. Thank you so much, guys. We're just about out of time. I think this is a topic we'll probably revisit in the near future, because it's so extensive, especially what's in the works that Nati described, [Inaudible45:20] and other service brokers and all that. So, thank you so much for joining us. I invite you all to check out all the resources that we have on Kubernetes on the site. We have some ready-made blueprints that allow you to get started working with Kubernetes and Cloudify very, very quickly. I'll hand it back to Jonny for our outro.

-Thanks so much, Ilan. Yep, by all means stay tuned; in a couple of weeks we have another episode coming your way dedicated to serverless support. So, once again, thanks to every single person who helped create such an awesome second episode of this podcast. As I stated at the very beginning, please go to cloudify.co/podcast, where you can find all the supporting materials for everything that was mentioned in this episode. Also, please go to the cloudify.co website, because there's loads of supporting information dedicated to Kubernetes and beyond. So, guys, this is me signing out. Thanks to everyone, stay safe, and we'll see you soon.



