Podcast | Episode Eight: Version 5 & Beyond!

This session takes a look at the strategy behind Cloudify’s version 5 cycle, the pain points it aims to solve, the new features in the latest 5.1 release, and the roadmap for 5.2 and beyond. Also – GIVEAWAY! Get a chance to snag a free KubeCon pass – listen to the podcast to learn how.


Supporting Assets:

Blog: Vision and strategy behind Cloudify 5
Version 5.1 Release Notes
Webinar sign up (+ Kubecon giveaway)



Jonny: Guys, welcome to a very special edition of the Cloudify Tech Talk podcast. Now it’s just family for this session as we delve into Cloudify’s latest release 5.1, and the absolute powerhouse of new features it provides, and more importantly, the vision and strategy behind it. And of course, we’re going to be talking about where we’re heading as well. So without further ado, I’m going to hand over to my colleague, Ilan Adler, to tell you all about it.


Ilan: Thank you, Jonny. I’m pleased to be here, and I’ll moderate today’s questions and walk through some of the features. We’re joined today by Cloudify’s CTO and founder, Nati Shalom, as well as Alex Molev, head of the R&D team at Cloudify. Alright, so to kick things off: today we’re talking about Cloudify 5.1, our latest GA release. But what I want to start off with, and I’ll send this over to you, Nati, is really: what’s the vision, and what differentiates the Cloudify 5 family of versions from previous Cloudify versions? If you can delve into that.


Nati: Yes, sure. If we look at the evolution of Cloudify since it started, we really started Cloudify as a multi-cloud orchestrator. Back then, cloud was pretty new and we were very much focused on infrastructure: how to launch a VM on multiple clouds, how to deal with private and public cloud, all those sorts of things. Later, we also focused a lot on how to do network automation in those types of environments, and there was a lot of focus around that in previous years. The reality that we’re seeing right now in the industry is that cloud has matured quite a bit. There is compute, storage, network, database as a service, analytics as a service, IoT, machine learning, security services – all the things you could imagine becoming available as part of that. So it’s not just basic infrastructure services anymore.


At the same time, there is, I would say, an explosion of automation tools, both on the CI/CD side but also across the board with infrastructure tools. Every cloud has its own orchestration: Azure ARM, CloudFormation. In addition to that, we have Ansible and Terraform. Kubernetes itself has many flavors; each cloud – EKS, AKS, GKE – has its own managed Kubernetes service, but we also have OpenShift, Kubespray, Rancher, and [02:56 unclear], which have their own orchestration. That’s the world in which we’re living. So when we talk about automation, you have to address the question of where you want to fit in that stack and what specific type of problem you want to target.


You cannot really sustain a position in which you’re trying to cover the full stack; that’s not sustainable. When we looked at that, we thought that the core value of Cloudify is really handling distributed and heterogeneous environments. That is, I would say, our core differentiated value, and we fit there well, especially given our [03:40 unclear] heritage in how to manage services, meaning other orchestrators. So the decision, and that’s where Cloudify 5 is primarily focused, was to move Cloudify up the stack and up the value chain: instead of being another orchestrator in the stack, becoming the orchestrator of orchestrators. We’ll talk more about it as we go.


Ilan: Okay. Now, when you say, Nati, the orchestrator of orchestrators – it’s a pretty new term – can you elaborate more on what it means? And more to that point, what problems is it setting out to solve as an orchestrator of orchestrators? Because I think this is a pretty new term and there’s still a lot of confusion in the market about what exactly it means.


Nati: Okay. So let’s start with the second part of your question: what are the automations that we’re trying to target? When you look at a typical enterprise, or any organization today, you have groups of automation that are pretty clearly defined. There is the CI/CD automation, and there’s a bunch of options there, starting from Jenkins and GitHub Actions and CircleCI, et cetera. We also have Kubernetes clusters, and we have infrastructure automation, as in the case of CloudFormation and Azure ARM and Terraform and Ansible. And we also have [05:14 unclear] automation. Obviously, if you’re running VMware, you have the virtualization stuff, and you have ServiceNow as the ITSM. And there’s a lot of homegrown automation in on-premise environments that has been developed over the years. Right now, all of them are built as silos, meaning that each one has its own tools; they manage things on their own, and the logging, the management, the way they pass inputs and get outputs, and the way you manage and monitor them is very different between each of the tools.


And that reality leads to an increasing number of silos that are hard to manage, orchestrate, and develop. So the meaning of the orchestrator of orchestrators is really, one, to accept the fact that there’s not going to be one language, one template to rule them all. There are going to be a lot of those tools, each with its own language and its own templates, and we’re not going to replace that; no one is going to replace it, not just us. But even when we think about it this way, we still want to give the user the experience as if it were one big orchestration: the logging, the way you stop things, the way you pass inputs, the way you create relationships between components, the way you troubleshoot things when they get broken – all those things need to be, or at least look, more consistent.


And that’s the target. It’s a very ambitious target: how to normalize the management of those tools. So to continue the answer, we’re not trying to change the fact that we have many automation domains, but we are trying to reduce the differences and make the user experience of using those tools consistent, so it looks as if it is still one big orchestration, from a user experience perspective and from a management perspective. There are a lot of details that we’ll cover on how to get to that point, and we’ll talk about the concept of decentralized orchestration: we want that consistent experience, but we don’t want to be the bottleneck. We don’t want everyone to now go through a centralized, if you like, top-level orchestration if that would change the way they use any of those tools, because most developers wouldn’t want to change the way they’re using Terraform or the way they’re using Ansible. So it’s a layer that is an overlay, not something that replaces the tools you’re already using or the way you’re using those tools, and that’s an important clarification. I’ll touch on that later on.


Ilan: Excellent. And yeah, I think that’s spot on, because that’s also the feedback I get from different users across all sorts of organizations, especially the large ones. They already have a lot of stuff that’s automated. They have business teams that are, quote-unquote, married to their specific tools; they really like them and they don’t want other tools thrown down upon them and forced upon them. So a lot of them have come to this realization – and the same with multi-cloud, by the way – that this is the reality that’s here to stay, and they’re looking for a way around that. And I think that’s what Cloudify is setting out to do, in a really consistent manner, so that definitely makes sense. As a continuation of that, I’ll shoot this one over to Alex. Can you discuss, Alex, some of the new features specific to this Cloudify 5.1 release, and how they fit in with some of the things that Nati mentioned?


Alex: Yeah, sure. So maybe I’ll just add to what Nati said earlier, from another direction, and then I’ll explain how the new features actually fit in. Basically, for ages we have seen that most R&D departments have kind of a matrix architecture. You had a team that was responsible for the application and a team that was responsible for the infrastructure. Yes, we now have tools that provide automation on top of it, but you still have those dependencies, where one thing depends on another: one team waits for another team to actually provide the infrastructure or the application layer. That has a direct impact on the velocity with which the organization can deliver the final result. Even if we have that automation, we still have these dependencies that affect it.


So the idea of our current approach and the new features is to erase those lines and provide a streamlined flow that delivers value smoothly – without those restrictions on the one hand, and without those dependencies that organizations usually have to wait on until things are provisioned. So let’s go straight to the features and how we support this. First, we’ll be talking about environment as a service. Let’s understand at the beginning what it is, kind of a hybrid of [11:22 unclear]. When we’re talking about environment as a service, usually we have infrastructure there, and an application layer on top of that infrastructure. As mentioned before, we have CloudFormation for AWS, Azure ARM for Azure, we have the GCP tooling, and we have Terraform as well.


On the application side, we have Ansible, we have Kubernetes, and many others as well. So basically, we have all these different orchestration tools that do the job, and now we have a few challenges. First of all, we do want to provide a flow that combines them all together, but we also sometimes have shared resources: shared infrastructure and shared applications. So in order not to repeat yourself and to be able to reuse things, we first introduced a new feature called the generic service component. With that, you can define, for example, what your infrastructure looks like, and then just reuse it over and over again. Or in cases where you have shared components, like a Kubernetes cluster, and you want to apply different Helm charts or different applications on top of it, you can, in an easier manner, use the same infrastructure, the same component, and just consume it from whatever you are deploying.
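To make the reuse idea concrete, here is a minimal sketch of a blueprint that attaches an application component to one shared Kubernetes deployment instead of provisioning a cluster per application. The node type names follow the Cloudify 5.1 DSL as we recall it, and identifiers like `shared-eks`, `helm-app`, and the `kubeconfig` capability are purely illustrative, so treat this as a sketch rather than copy-paste material:

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://cloudify.co/spec/cloudify/5.1.0/types.yaml

node_templates:
  # Reference an already-deployed shared cluster instead of creating a new one
  k8s_cluster:
    type: cloudify.nodes.ServiceComponent
    properties:
      resource_config:
        blueprint:
          id: eks-cluster          # blueprint already uploaded to the manager
          external_resource: true  # reuse, do not re-upload
        deployment:
          id: shared-eks           # the shared deployment everyone attaches to

  app_a:
    type: cloudify.nodes.ServiceComponent
    properties:
      resource_config:
        blueprint:
          id: helm-app
        deployment:
          inputs:
            # hand the shared cluster's kubeconfig to the app deployment
            kubeconfig: { get_capability: [ shared-eks, kubeconfig ] }
    relationships:
      - type: cloudify.relationships.depends_on
        target: k8s_cluster
```

A second application would simply be one more component node pointing at the same `shared-eks` deployment, which is exactly the "don't repeat yourself" effect described above.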


The second one: we now have different service components that we provide out of the box to support Kubernetes [13:22 unclear] of all different types, plus Azure ARM, CloudFormation, and Terraform. That fits very well into the orchestrator of orchestrators idea if you already have something. For example, if you have a CloudFormation template that already works, you don’t have to repeat yourself and recreate the same thing in Cloudify; you can just delegate to this resource, and Cloudify will provide full visibility. So you can select whatever tools you wish to deliver the final result.
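As a sketch of that delegation, here is roughly what wrapping an existing Terraform module looks like in a blueprint. We are writing the Terraform plugin's node types (`cloudify.nodes.terraform`, `cloudify.nodes.terraform.Module`) and property layout from memory, and the module URL and variables are made up, so check the plugin documentation for the exact schema:

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://cloudify.co/spec/cloudify/5.1.0/types.yaml
  - plugin:cloudify-terraform-plugin

node_templates:
  terraform:
    # Installs or locates the terraform binary used to run the module
    type: cloudify.nodes.terraform

  vpc:
    type: cloudify.nodes.terraform.Module
    properties:
      resource_config:
        # Point at the module you already have; nothing is rewritten in Cloudify
        source: https://example.com/modules/my-vpc.zip
        variables:
          region: eu-west-1
    relationships:
      - type: cloudify.relationships.depends_on
        target: terraform
```

The Terraform templates themselves stay untouched; Cloudify only orchestrates the apply/destroy and surfaces the resulting resources, which is the "full visibility" mentioned above.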


The other one: once we have components and shared components, this comes very naturally – everything that concerns dependencies. You do want to understand the relations between components and the resources that are deployed on top of them. For example, if you’re using a Kubernetes cluster and deploying an application on top of it, and you then remove the Kubernetes cluster, you do want to understand what is impacted. You want to notify the user: hey, you have applications deployed on this, are you sure you want to remove it?


Likewise, if you are upgrading your infrastructure and it has an impact on the resources that are deployed on top of it, there is an understanding of which resources are impacted, whether those resources have to be reinstalled, and what kind of effect it causes. One more thing: we improved our CI/CD integrations. We understand that most of our clients, and most of the industry, are moving to managing their infrastructure as code. So we provided a number of native integrations: with Jenkins, through a Jenkins plugin; with CircleCI, through a CircleCI orb; our own Docker container that simplifies the connection to the Cloudify Manager; and GitHub Actions. The idea behind native integration with all those CI/CD tools is that, if you look at how the industry is using CI/CD, more and more companies need to create an environment as part of the pipeline.


So for example, during checkout, or when pushing code into master, many companies have policies where they provision an environment and deploy their application on top of it to run integration tests and system tests. This kind of capability can be done very easily through those plugins. And again, you have different options: you can write scripts and trigger each silo by itself, and the scripts will do the job, but they don’t have this capability, across all those use cases, to actually manage the environment as a service. In this construct, what happens is that the work is delegated to Cloudify, to ensure that the environment is set up as properly as possible, and once you are finished – for example, running integration tests or system tests – we can tear down the environment as well. And one more improvement – not the biggest, but we had a lot of improvements – is our service catalog, which we rebuilt from scratch. It now has all kinds of blueprints coming out of the box that you can look at to serve your needs, whether AWS, Azure, [17:58 unclear], Kubernetes, Terraform, whatever you need. So we have a dedicated catalog with different levels of examples.
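As an illustration of that environment-per-pipeline pattern, here is a hedged sketch of a GitHub Actions job that provisions an environment, runs tests, and always tears the environment down. It assumes a `cfy` CLI already available and configured against your Cloudify Manager (the official plugins and actions wrap equivalent calls); the blueprint name and the test command are placeholders:

```yaml
# .github/workflows/integration.yml (illustrative)
name: integration
on:
  push:
    branches: [ master ]

jobs:
  integration:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Provision a test environment
        run: |
          cfy blueprints upload blueprint.yaml -b test-env
          cfy deployments create test-env-${{ github.run_id }} -b test-env
          cfy executions start install -d test-env-${{ github.run_id }}

      - name: Run integration tests
        run: make integration-tests   # placeholder for your own test entry point

      - name: Tear the environment down
        if: always()                  # clean up even when tests fail
        run: |
          cfy executions start uninstall -d test-env-${{ github.run_id }}
          cfy deployments delete test-env-${{ github.run_id }}
```

The `if: always()` step is what makes the environment disposable: the same pipeline that created it is responsible for destroying it.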


Ilan: Excellent. Yeah, I think that’s great, and also a segue to what we’ll discuss right now, which is use cases. I think especially the service components and some of those new abilities to support multiple Kubernetes flavors and clusters very, very easily are going to tie into some of these use cases. So Nati, I want to send this over to you: how do some of these features that Alex discussed relate directly to the use cases Cloudify is enabling, and what’s really the power of using Cloudify in these specific use cases, which we’re seeing now more and more? How can Cloudify really deliver business value for whoever is using it? Can you touch upon that, Nati?


Nati: Sure. One of the things that we’ve done in this release is substantially extend our VMware support in a couple of areas. One of them is obviously the vSphere plugin, but also [19:19 unclear], because one of the things that we’re seeing is that when we look at the ability of organizations to really adopt multi-cloud and move to cloud-native, the challenge that they have is their on-prem environment. And one of the realizations is that they’re not going to be able to move away from the on-prem environment. If there was an idea that they could have a three-year journey, kill everything that they have, and move to a public cloud, I think many organizations are realizing that the on-prem environment is here to stay, and for many years.


So the key point here is how to make the existing on-prem environment more consistent with the way you manage things in the public cloud. In the public cloud, everything is coded and everything has a clean and nice API that can be very easily automated, but that’s not really the case with on-prem environments. So the first use case for us is plugging into your on-prem environment, which is usually VMware, and then adding the modeling language, which allows you to abstract all that complexity and make it something that you can automate very easily through a CI/CD pipeline. That’s what we mean by modernizing the on-prem environment: the on-prem environment can be managed as code in a way that is very consistent with the way we manage things in the public cloud. That’s, I would say, use case number one.


The other use case that we’re seeing is what I call a sandbox environment for using those cloud services. By that, I mean that the way developers are using tools today is not something that can scale within a larger organization, because you cannot give them the keys to AWS or Azure or whatever environment they’re using and expect them to do whatever they want. They will probably put the data in the wrong region, or open the wrong ports, or something of that sort. You want to create a more controlled environment, but without affecting the agility of those users.


One of the use cases we’re starting to see being built by those organizations is creating a sandbox environment. In a sandbox, you still get credentials to AWS, but the restrictions and the authorization that you have for that environment are customized to what you should have access to, what the organization’s policies are, and how that fits in. That also fits very well with a catalog service, where you can click a button and it will call the right cloud and create the right environment for you. At that point, you have a self-service experience, and you don’t have to wait for an IT guy to actually do all those things for you. That’s, I would say, a fairly basic use case, but a very popular one.


The next one is the environment use case. If we think about it, the value of cloud is really to take repeatable cases and automate them. And for every organization, the number of classes of applications is much smaller than the number of instances of those applications. For example, we have a risk-management application, an analytics application, an HPC application, a web application, or mobile applications. The way we’re going to structure the environment for those applications is probably going to be very much the same across all instances, but we’re going to have many instances: for development, for use case A or use case B, for customer A, for customer B. And that’s something we don’t want to duplicate too much. In that case, environment as a service really provides us a templatized way to create a common infrastructure – an EKS or other Kubernetes cluster, the database as a service, the network configuration associated with that, and the other services that need to be attached to it. That becomes my HPC environment, that becomes my web application environment, and I want to be able to create that environment with a single click.


We also have the dev and production environment use case. Right now, if you look at a lot of CI/CD setups, we have different pipelines for development and different pipelines for production. We want to normalize that, even just for cost efficiency. We want to be able to use, say, minikube for my dev-test environment but EKS for the production environment; obviously EKS will cost more than what I could do with minikube in a single, let’s say, controlled environment. Similarly with the database: if I’m running a development environment and I just want to do API testing, I can run a local Postgres or a single [24:21 unclear] Postgres, but when I’m running in production, I want to use RDS, as an example. I don’t want the environment to look very different from the developer’s point of view. The way the developer creates the environment should be very much consistent, and the whole question of how I optimize the costs around that should be abstracted from the user. That’s the dev/production environment optimization.
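One way to sketch that dev/production switch in a blueprint is to let a single input pick which underlying component blueprints get used, so the developer experience is identical and only the resolved names differ. The `concat` and `get_input` intrinsics are real DSL functions; the node type and the blueprint IDs (`k8s-dev`, `k8s-production`, `db-dev`, `db-production`) are hypothetical names for this sketch:

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://cloudify.co/spec/cloudify/5.1.0/types.yaml

inputs:
  environment_type:
    type: string
    default: dev
    constraints:
      - valid_values: [ dev, production ]

node_templates:
  kubernetes:
    type: cloudify.nodes.ServiceComponent
    properties:
      resource_config:
        blueprint:
          # resolves to k8s-dev (minikube) or k8s-production (EKS)
          id: { concat: [ 'k8s-', { get_input: environment_type } ] }
        deployment:
          id: { concat: [ 'k8s-', { get_input: environment_type } ] }

  database:
    type: cloudify.nodes.ServiceComponent
    properties:
      resource_config:
        blueprint:
          # resolves to db-dev (local Postgres) or db-production (RDS)
          id: { concat: [ 'db-', { get_input: environment_type } ] }
        deployment:
          id: { concat: [ 'db-', { get_input: environment_type } ] }
```

The same install call then produces a cheap environment in dev and a managed one in production, which is the cost abstraction described above.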


We have another use case, which is multi-cluster Kubernetes. Many organizations are expected to have, according to the CNCF survey, more than one Kubernetes cluster in the organization. We want to abstract the location, the physical location, and the type of Kubernetes cluster – whether it’s in the public cloud, a managed service, a local Kubernetes, or minikube. Those decisions need to be abstracted from the user. The idea is to give users a place in which they can say: I want to deploy this, and it needs to go to environment A. Behind the scenes it will know that environment A is a development environment and needs to be running OpenShift, and that if it’s production, it will run on EKS. But again, the CI/CD experience, the developer experience, will be very much abstracted from that.


Deploy-to-many is a case that is becoming very popular, especially in edge use cases where you have multiple regions and thousands of instances of Kubernetes clusters running on edge devices, as an example. In that case, you want to be able to take a single service – let’s say a security service or a network service – and deploy it across all those regions. Even then, you want to do it in an orderly fashion: deploy it on certain edge devices, see that it’s all running well and behaves as expected, and only then expand it to the rest of the regions and sites.


So that’s, in a nutshell, the type of use cases that we’re seeing. In order of popularity, I would say we definitely see modernization of the on-prem environment becoming very popular, because many organizations are now realizing that they cannot get rid of their on-prem environment, that they have to do something with it, and most of the existing tools don’t do anything with that; that’s where environment as a service brings a lot of value. Multi-cluster Kubernetes is also becoming a reality that many organizations are trying to address and find a solution to. And the edge case, as I mentioned – deploy-to-many – is becoming a sub-use case of multi-cluster Kubernetes.


Ilan: Excellent. So yeah, I want to delve a little bit more into that environment as a service use case, but just before that, I want to add two quick notes from my end, from the field, speaking to customers. The private cloud use case – we’re definitely seeing an uptick in that. And like you said, I don’t want to throw too much shade at other companies, but some of the new versions they’re shipping require a fresh install, and they’re not declarative like Cloudify, which is a huge bonus of the way Cloudify models these services. These users basically aren’t confronted with lots of complex configurations and service catalogs that they’re nowhere close to being able to utilize, the way some of the business units already are in AWS or Azure or GCP. So I absolutely agree on that part.


And the second part I just wanted to add is Kubernetes in the multi-cloud; we’re also seeing more and more of that. People are coming to us already using one or two clouds, and they already know that they want to use another cloud. So they’re looking for the ability to deploy in much the same way, including the policies for how their developers work, without opening themselves up to a headache in a year or two. So if we can expand on that environment as a service, and how it fits into all this and into these use cases – if we can just go a little bit deeper into that, Nati, that would be great.


Nati: Yeah. So again, I think Alex touched on what we want from environment as a service and how it fits into what we described earlier as the orchestrator of orchestrators. At the end of the day, we want to be able to use components and resources of the cloud. And all we’re saying is that the resources are not just VMs and low-level resources like VPCs and things like that. The resources are becoming more high-level. Meaning: I want to use a database as a service alongside a Kubernetes cluster as a service, and this is how they need to be related and connected in the development environment, and this is how they need to be structured in the production environment. That’s the language in which I want to be able to talk. Right now, most of the tooling doesn’t speak that language.


So environment as a service is really that top-level orchestrator that allows us to create an environment in the way I just described. It has a DSL, which I think Alex touched on, with an additional construct called a service component, in which I can describe an entire cluster as a component, as a resource, and I can describe a database as a service as a resource, just as I would with low-level resources. I can create relationships, pass inputs and parameters, and execute workflows against all those resources, but at a much higher level than I traditionally would. Once I have that definition, it becomes an environment. That environment, again, can be development or production; it could be a specific application use case. And once I have those pre-canned environments, I have a much better way to govern how the developers in the organization are using them, because when I templatize an environment, I can control at that point which regions are exposed and which clouds are available – those that are already certified.


Along with the network configuration and security configuration that are part of that, I do it once for many developers. I don’t have to go and check every pipeline that every developer is running or changing, and at the same time, I don’t get in their way: they don’t have to come to me to check everything they’re doing. It’s done in a much more agile way, and in a much more abstracted and simplified way, for the developers themselves. So hopefully that explains the concept of environment as a service as, let’s say, another layer on top of the orchestrator of orchestrators. And just to clarify that part: the orchestrator of orchestrators is a key component in environment as a service. It’s the component that allows us to use resources that are already managed by other orchestrators as resources within an environment, so we don’t have to reinvent the wheel and can reuse resources that are already templatized, in this case by other orchestrators.
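Putting the pieces together, an environment blueprint in this style is just service components for the high-level resources, plus relationships and capability passing between them. Again, the node type and the capability name `vpc_id` are illustrative assumptions for this sketch, not the exact product schema:

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://cloudify.co/spec/cloudify/5.1.0/types.yaml

node_templates:
  # A cluster and a database treated as high-level resources of one environment
  cluster:
    type: cloudify.nodes.ServiceComponent
    properties:
      resource_config:
        blueprint:
          id: eks-cluster
        deployment:
          id: analytics-eks

  db:
    type: cloudify.nodes.ServiceComponent
    properties:
      resource_config:
        blueprint:
          id: rds-postgres
        deployment:
          inputs:
            # wire the database into the same network as the cluster
            vpc_id: { get_capability: [ analytics-eks, vpc_id ] }
    relationships:
      # ensures install order: cluster first, database second
      - type: cloudify.relationships.depends_on
        target: cluster
```

Each component here can itself be managed by a different underlying orchestrator (EKS tooling, CloudFormation, Terraform), which is the orchestrator-of-orchestrators point: the environment blueprint only wires already-templatized resources together.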


Ilan: Yeah. And I think we’ve said that multiple times, and we can’t stress it enough for all the listeners out there: we’re not coming to throw everything out the window. As I say, we understand that a lot of work has already been done, whether in ARM, or CloudFormation, or Terraform, or Ansible, or whatever script, and this is one of the main ideas in the Cloudify 5 family. I want to go back to Alex for a second, because one of the features he touched upon was the enhanced self-service catalog and the UI/UX components. So Alex, if you can expand on that a little and explain some of the ways it works and what we’ve done there, that would also be super helpful.


Alex: So we had lots of UX enhancements. The first one, which I’ve touched on already, is the self-service catalog. Basically, it will assist you – first of all, we’re providing many examples out of the box, and depending on the level of complexity, you can select the best example for your cloud provider and technology, and it will easily get you going. As I mentioned before, we have AWS, GCP, and Azure, and examples with Terraform, Kubespray, OpenStack, VMware – whatever technology you need. Secondly, we provide all kinds of small fixes and small items for UX enhancement, for example at the deployment level, to understand in a better way what was deployed. And the most important part, I think, is one of the features we are improving – currently we support it for Terraform only, but we do want to provide it for other technologies, for other orchestrators as well – which is to have the [33:56 unclear] as a first-class citizen. What it provides is this: if, for example, your blueprint wraps a Terraform template, then after the deployment you can actually zoom in and see exactly which resources were provisioned by that Terraform template. And, of course, we are planning to provide this for other orchestrators as well, like Ansible, CloudFormation, Azure, and so on.


Ilan: Thanks, Alex. Excellent. On what Alex mentioned – obviously this is a podcast, but we’re also going to have videos on the website and in the documentation showing some of this, so you’ll be able to see some of the features that Alex is discussing. I’ll stick with Alex, because I want to get down to the nitty-gritty for those who want to actually get started today. I can tell you from my perspective – and when I say my perspective, this is a non-engineer perspective – it’s super easy to get started with Cloudify; we have a lot of examples. But I’ll hand it over to Alex: if you can outline some of the ways of getting started with Cloudify today, our recommended best practices, where some examples live, and so on.


Alex: So, the best way to get started using Cloudify: first of all, you have two options. Either you go to our website and download the trial version, or you sign up for our SaaS solution. Then, it’s probably best to already know what technology you want to use. For example, if you’re going to use a public cloud, it’s best to prepare the secrets to access your account, with the right permissions. My recommendation is to create a dedicated programmatic user inside the account, with keys, so once you’re done you can [36:32 unclear] remove the user itself. Then sign into the application, fill in the secrets, install the plugins, and select an example based on its complexity.


If you’re not familiar with Cloudify, I would recommend starting with the simple examples, just to understand the look and feel, and run one. See what is provisioned, check how the deployment is created, go through the UI a bit, and then maybe jump into the cloud provider’s console itself and have a look. If you’re using a private cloud, the SaaS solution probably won’t be the best fit for you, so I would recommend downloading our trial version and installing it on-prem.


Nati: So when you download it — Cloudify is open source, and there is a community edition and a premium edition — you can download it straight to your desktop. It's a single container: if you have Docker installed on your desktop, that's all you need. And you can get started by just following the steps in the new 'getting started' guide that Alex was referring to, which gives you a bunch of examples showing how you can run and use Cloudify with the concept I mentioned, the orchestrator of orchestrators. It has CloudFormation, Terraform, Ansible — all those options I think we touched on earlier — as part of the 'getting started' examples that you can experience.
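To make the "single container" point concrete, a minimal sketch of launching the community edition locally might look like the following. The image name, tag, and port mapping are assumptions — check the 'getting started' guide and release notes for the current values:

```shell
# Pull and run the all-in-one Cloudify community manager locally.
# (Image name and port are illustrative; confirm against the docs.)
docker run -d --name cloudify-manager \
  -p 8080:80 \
  cloudifyplatform/community-cloudify-manager-aio

# Once the container reports healthy, the UI is reachable at
# http://localhost:8080 and you can follow the getting-started examples.
```

From there, the guide walks you through uploading an example blueprint and running your first deployment.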


Alex, I would also mention the new CI/CD example. That is not part of the 'getting started', but we also simplified quite a bit the way to trigger workflows and deployments directly through Jenkins or through GitHub Actions in a GitOps style, meaning that you can just change a blueprint and it will trigger a workflow to do a continuous update from that blueprint on the designated environments. We have references to that as a set of best practices, and with each of the CI/CD integrations you'll find a reference to those examples.
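The flow Nati describes — a blueprint change driving a continuous update of an existing environment — can be sketched as a CI step using the `cfy` CLI. The blueprint and deployment names here are hypothetical, and exact flags can vary by version, so treat this as an illustration rather than the official pipeline:

```shell
# Hypothetical CI step (e.g. a Jenkins stage or a GitHub Actions job
# running after a blueprint change is merged):

# 1. Upload the changed blueprint under a new blueprint ID.
cfy blueprints upload blueprint.yaml -b my-app-v2

# 2. Run a deployment update so the existing environment converges
#    onto the new blueprint.
cfy deployments update my-app-dev -b my-app-v2
```

The official Jenkins plugin and GitHub Actions releases mentioned below wrap essentially this sequence for you.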


Alex: Yes, definitely. For those who want to provision environments as a service as part of CI/CD, I would highly recommend looking into it. And again, whether you're using Jenkins — for Jenkins we have an official plugin you can download straight from the Jenkins marketplace — or CircleCI or GitHub Actions, we have official releases for those as well. So we do have examples, and I encourage you to look into them: the value they provide is exactly about smoothing those dependencies between the infrastructure and application layers, which is a tremendous enabler for increasing velocity in go-to-market.


Ilan: Okay. All right. And I'll just add to that: obviously, Cloudify out of the box — this is the getting started — supports all the major public and private cloud providers. Anyone with access to AWS, Azure, Google Cloud, vSphere, or OpenStack can easily get started with these. And I do want to commend Alex and the R&D team — really, they made it super easy. Like I said, I can tell you guys, I personally set up a GKE cluster yesterday super easily, literally a couple of clicks of a button. So if noobs like me can do it, everyone can do it. And yeah, just to close off — this is for both of you guys, I don't know who wants to take it first — what's next, what's coming down the line in terms of Cloudify 5.2, which I know the R&D team is already hard at work on, and then the additional things we have to look forward to?


Nati: Maybe I'll start and Alex, you can chime in. So basically, I think we've covered the journey of Cloudify 5 toward this, you know, upper-level orchestration — what we call service composition — and the DSL and all the things that enable us to manage other orchestrators in a native way. Cloudify 5.2, which is coming down the road, will basically be a continuation of that journey, but with more focus on Kubernetes: a lot of focus on Kubernetes cluster management and multi-cluster Kubernetes, so that we can manage things across potentially thousands of clusters, especially in an edge use case. Imagine a use case in which you have, you know, certain Kubernetes clusters running in your data center and your public cloud, but also connected to many other Kubernetes clusters.


That's the type of challenge we're trying to address. In order to abstract that — obviously, we cannot expect the user to point each deployment at each of those Kubernetes clusters — we're adding a new service that I think has a lot of potential to change the way we think about orchestration in general, which we call placement policy. Instead of pointing to a particular cloud or a particular Kubernetes cluster or whatever, you do it in a much more abstracted way: you basically say, I want to deploy this on the East coast, I want to deploy this in the development environment, I want to deploy this in the production environment. And the placement policy will find the right environments that fit your request based on cost, based on availability, based on the geolocation you may have specified, and based on other preferences.


So again, it will be a first release of that feature and service, but you can imagine there is a lot of potential here to really think differently about how we do the matchmaking between the actual infrastructure resources and the workloads that need to run on them. That, I think, has a lot of potential, and it's something we'll start to release in the [43:07 unclear] release. And the other part — Alex, I'll let you expand on that — is that we're trying to make Cloudify itself more and more cloud native and SaaS ready. There's a bunch of features coming down the road to enable, for example, continuous update of Cloudify itself. We know that a lot of Cloudify customers have been struggling with that continuous-update mode of operation, so by moving to a cloud-native architecture we're hoping to make a lot of that pain go away. So maybe, Alex, you could cover that part of the future.


Alex: Yeah, sure. So the idea is, first of all, to move to a more cloud-native solution, and part of that is to serve our SaaS solution as well, right? We can then continue to deploy our own solution with our own solution, as a SaaS service — so we are really eating our own dog food, I can say. What that means: first of all, going cloud native, we'll be providing an official Helm chart that will be able to provision our environment on Kubernetes. As some of you already know, we do have an all-in-one container [44:34 unclear] container providing the manager, the database, and the queue, but now we'll be managing it through the Helm chart officially.


Secondly, we'll have lots of architecture improvements and decisions that will have a direct impact on performance and usability. It means moving our manager to be [45:07 unclear]. It means that we can actually upgrade our application just by replacing the container with a newer version, so it will be very easy. For those who have managed or tried to upgrade our system: for now, the process is that you have to take a snapshot of the system and then restore it onto the new one. Going forward, it will be something very simple — just replace the container with the newer version, and all the rest you get for free. This one, I think, will make things a lot easier for our customers who wish to upgrade to a newer version.

So we do have a side project — or, as Nati likes to call it, a secret project. Up to this point, what Cloudify's orchestration tools assist you with is actually provisioning your environment as code, infrastructure as code. It's one direction: you define what you want and then you get the final result. For some of our customers, and usually for small businesses, that's not how it goes. Usually the starting point is actually to go straight to the cloud provider's console and create the resources right there. At some point, those companies realize that they do want infrastructure as code, or environment as code, and the work is very tedious: you have to go through all the resources configured in the cloud provider's console, copy/paste them, and know the right syntax based on the [47:08 unclear] you are using.


So we kind of want to turn it around and provide a very easy way: if you point us at the resources on the cloud provider that you're interested in, we can easily create a blueprint out of those resources. For example, if you have a Kubernetes cluster or virtual machines installed on AWS, with the route tables and the VPCs and the security groups and the elastic IPs and the EBS volumes, you can simply point us at those resource IDs and we will do all the rest of the work with automation — all the resources, all the relations, all the dependencies. Once you have this blueprint, you'll be able to upload it to the composer, so you'll actually see the entire topology and how it's connected. So that's something that will ease our customers' onboarding process to Cloudify — and I hope I haven't said too much.


Ilan: No, but that sounds super exciting — and obviously I know a bit more about it — but yeah, I think it's going to make a big dent in users' ability to quickly get onboarded with their different resources and automate them. So I'll just add a couple of points to that. We talked a lot about cloud native and the various related things — Cloudify is actually going to have a virtual booth at Kubecon, which is the 17th through the 20th, if I remember correctly. So we'll be there, and we'll be happy to talk about everything we discussed here — you know, some more of the placement policies and the Kubernetes-specific things Nati mentioned. We're also going to have a couple of really exciting announcements in the upcoming weeks that I can't expand on, one at Kubecon, and again, they're very, very related to Kubernetes and how it relates to the edge. So stay tuned for those in the coming weeks. And that's a wrap… anything?


Nati: I think you wrapped it up nicely. Maybe for those who will be listening: there is a lot of content behind the release. I really encourage you to look at the release notes and some of the references that we'll be posting alongside the podcast, so by all means take a look at those. Obviously we didn't cover everything here; we tried to give you a feeling for what's out there, why we've done things the way we've done them, and where we're heading. So hopefully you got that part.


Ilan: Excellent. Yeah. So we'll add those links in the episode metadata. And Jonny, you want to take us home?


Jonny: Happy to. Thanks so much, guys, for an incredible session as usual. Thanks Nati, thanks Ilan, thanks Alex. Some amazing things happening at Cloudify over the next few weeks and months — I can't wait. Okay, some key takeaway points from this session. We're going to be at Kubecon this week, the 17th through the 20th; we're going to have a digital booth, we're going to have exclusive assets, and we're going to be there, so please come over and, you know, consult with some Cloudify specialists. Also, if you want to snag yourself a free pass to Kubecon, simply sign up for our webinar. The webinar itself isn't happening until November the 26th, I believe, but if you register ASAP, you have a chance of snagging yourself a free pass to Kubecon — that is awesome. As usual, as the guys said, all associated assets for this specific podcast will be at www.cloudify.co/podcast. So in the meantime, stay healthy, stay safe, and we will catch you next time.


Learn more about Hybrid Cloud Automation with Cloudify.

