Watch on demand: Dealing With Automation Tool Overload!
Watch the webinar:
See the presentation >>>
Welcome to a very special webinar hosted by Cloudify, dealing specifically with how to overcome automation tool overload, which I am sure is a problem that has been thrust upon us all in this day and age. First of all, let's go through some general housekeeping rules. Number one, very important: this webinar is being recorded. Second, you will receive a recording of this webinar and the deck being presented, in an email upon conclusion. Third, we estimate this webinar will take about 50 minutes, give or take, and, importantly, there will be a live Q&A at the end. At the very bottom of your Zoom screen you will see a little Q&A button, so feel free to ask questions there, and if you are not up to sharing publicly, please send me a direct message and I will of course ask on your behalf. As I said, this webinar deals with automation tool overload, so to formally kick us off, here is our CTO, Nati Shalom. Handing over to you, Nati.
Thanks very much, Jonny. What we'll be covering in this presentation is basically two parts. One part is going to be a little more high level, going through the market trends and the specific challenge we mean when we talk about automation overload. Then we will slowly drill down into the Cloudify approach to solving that problem and the specific things we are doing. And then Ofer, our product manager, will guide us through the 5.1 release and the upcoming 5.2 release, and the features covering this high-level vision, which we call 'Environment as a Service', and which we will speak more about in just one slide.
So when we are talking about the market, one of the things I find interesting is that if we wind everything back to 2019, or even 2018, everyone was looking at the cloud journey in a certain way: we are going to move everything to the public cloud in two or three years. That's what we heard across the board from many enterprises, and therefore, "we don't have a budget for buying an on-prem environment, because we are going to kill it anyway; that's our strategy." Then 2020 came, and there was even higher pressure due to COVID to move even faster, and when everyone looked at what they had accomplished in 2019, they realized they hadn't really achieved much in that transition. Part of the reason is that there are many silos, as we will see in a second, and many more challenges that I'm not going to go through right now. But the outcome is that on-prem is here to stay. It's here to stay for a while; according to one of the enterprise customers we've been talking to, it's a journey of ten to fifteen years until they can move everything to the public cloud. Therefore the question becomes: what do you do with that? Now that it's going to stay for such a long time, you have to put a budget around it, you have to deal with it. That is what I am going to talk about right now: how we actually approach that.
So this is a picture that I think everyone knows exists, but until you put all the logos in one place, it's not that clear. Once you put the logos together, it becomes very clear that there is a huge mess. Just look at the CI/CD box here and how many tools are out there; it's not just Jenkins. New tools are popping up almost on a daily basis. The cloud providers themselves are now starting to bring their own pipelines, in the case of AWS and Azure DevOps and the like. So what is clear is that even in the CI/CD model you are going to have one, potentially two; I'm not going to talk too much about that, even though I think that is the reality everyone is facing today. In the Kubernetes space we are seeing a similar trend, mostly because of private/public scenarios, using OpenShift in private and potentially GKE in public. On the infrastructure side we have a bunch of tools, with Ansible and Terraform being the more notable ones. We also have other orchestrators that are native to the cloud, like CloudFormation and Azure ARM, each dominant in its own cloud service. And this is not the whole list. There are a lot of other automation tools in the data center: homegrown automation tools, ServiceNow on the ITSM side, and VMware with their own automation tools and different products; with vRO and vRA, VMware put everything under a kind of vRealize umbrella. There are many different types of automation tools, each doing one piece of the puzzle, including N60. So that is the reality, but one thing is missing here: how do I put all of these things together? No one really knows, no one really deals with that. That is where Cloudify fits in.
(Next slide) This is the approach we have taken. I am going to cover it very briefly, because I want to show it live during the session when Ofer goes over it, rather than talk about it in slides. At a very high level, think about the following. The first pillar you see here is called breaking silos with an orchestrator of orchestrators, and you can understand from the name what I mean by that. The first pillar is really the level at which we are talking: how we interface with all those automation tools and how we create a common ground to work with them. I'll talk in much more detail about that; there are a lot of important aspects around it. But you can start to see how we positioned Cloudify to fit in that box, as a layer on top of the orchestration layers. That's why we call it an orchestrator of orchestrators. It's important to know that even though it sits on top, it's not the single point of access for everything. We are not pitching yet another orchestrator that you have to go through as a gateway to all the tools you want to use. You are still going to use Kubernetes, Ansible, Terraform, and other things; Cloudify acts as an overlay to integrate them, but you can still use each of the tools individually without going through that layer. Once we get to that point, we can speak in bigger blocks. By bigger blocks, I mean moving from the VPCs, VMs, switches, security groups, and all those things we are accustomed to dealing with when writing our infrastructure templates, up to another layer. For example: database as a service, where I want it production ready, or I want a single instance for development. Or a Kubernetes cluster, where I want a minikube version of the cluster because I am running dev, or I want an OpenShift version, or a certain size of Kubernetes cluster, and so on. Similarly, I can talk about machine learning components and analytics components.
So I am going to work with much bigger blocks to actually create the end environment, which we call Environment as a Service. The first pillar is really an enabler. It's not an end in itself; it's a step towards the end game, which is Environment as a Service. It's the thing that allows us to take an existing template. If Azure already has a database-as-a-service template, I am going to use it. I don't have to rewrite it in Cloudify or Terraform or anything else if it already exists; I am going to use it as is and just plug it in. That ability to work across those different languages really enables me to build environments end to end, without having to go through each layer via the continuous transformation of templates that we've been accustomed to. That allows me to create that Environment as a Service, which is really a way to layer those building blocks together towards an end game. What could an end game be? We will talk about it in the next slide.
So these are the typical use cases we're seeing today in the market, the ones I think are easier to identify. All of this, by the way, is based on real customers and real use cases; it is not just an imaginary discussion. We talked about on-prem environments. In an on-prem environment, my challenge is really how I modernize my own on-prem environments and make them something I can access as code, in the same way, or at least in a consistent way, to how I access the cloud. In a hybrid cloud environment, the challenge is again consistency. It's not about trying to copy or migrate things from one cloud to the other; that use case doesn't really exist. It's about leveraging templates that have already been automated natively in the cloud, like Azure ARM and others, and we will talk about that and Ofer will actually show it. It's also about creating pre-certified templates for my own environments and different operating environments: HPC, analytics, web applications, mobile applications, to name a few. The most common one is dev/production; everyone has that, it is the common denominator across all these environments. I want a very low-cost development environment optimized for agility, and I want the same environment set up with high availability and clustering for production. How do I create a common interface between them, so that a developer interfacing with them wouldn't need to know the details that comprise the dev or production environment? He just needs access to one of those environments; how they are built and how they are optimized is not something he needs to be exposed to or worry about. So that is the dev/production environment. The on-demand environment is something new that we are starting to see.
It's really the layer before the CI/CD pipeline; think about it that way. You are a developer within an organization; I don't want to hand you credentials to Azure or AWS or Google and then let you do everything. I need to create some sort of sandbox for you to run in. Once I've created that sandbox, you're free to go and do whatever you want with whatever tool you want, so it's not something that sits between you and your DevOps experience. It lives right before it, but that process itself needs to be automated and templatized. So I need a catalog that says: I'm a DevOps user within my organization writing a machine learning application, therefore I need all the stack related to machine learning enabled for me; the right availability options need to be there, the data services that need to be involved need to be enabled, and the types of VMs that need to be part of it need to be enabled. I don't necessarily want that user, if I'm running in the US, to have access to Asia, for example, and potentially run things there. Similarly with edge: we look at those as location-based environments, which puts us in the use case where we have many instances distributed around the globe, and I want to manage them that way.
(Next slide 12:14) So once we get to 'Environment as a Service', I think the realization that we need that type of environment is becoming very clear to a lot of organizations, and they are now putting a team around it. You can see in the Gartner review at the top here that what they are starting to realize is that doing the integration I mentioned isn't so simple. It's not just putting people on it with some scripts to tie things together; it's a much more complex process. That's where we see Cloudify fitting in, and that, at a very high level, is where we fit in the stack: the layer that enables the DevOps teams, which are relatively new in many organizations, and empowers them. Meaning, we've already done the integration with all the tools they need to integrate with, so we can save them that work, and we have a lot of tooling to enable those people. That's where our target users are and where the target environments are.
(next slide 13:16) So I am going to spend a few minutes on this and then hand over to Ofer. The important thing to note here, and I am sorry that it's not that readable, so I am going to talk through it: when we talk about integrating with services, the simplest way is to run a Terraform template in a fire-and-forget model. But things become much trickier when something goes wrong. Let's say you run it and something doesn't work; what do you do then? Obviously it's not that simple. You need something that will enable you to consistently define how to run Terraform, how to run Ansible, how to run Kubernetes, even though each of those tools has a different way to install, to run, to pass inputs, to get outputs, to troubleshoot. All those things are very different between the tools. So the first thing we want is a common interface that allows me to call the templates of each of the tools and have the actual tool drive that template, but in a consistent way. That is something we call a service component. Once we layer that out and want to start using those as building blocks, we need to define relationships between the components. We need something we call shared components, which allow me to point to a database, for example, and once I do that, I want to know who is pointing at it. When someone wants to remove an environment, I don't want them to remove a service that others depend on and might be affected by that operation; I want to be able to detect that beforehand and deal with it. So there are a lot of details that need to be covered. To do that, we actually developed a new DSL, a new domain-specific language, on top of our TOSCA-based DSL, that allows you to model things in that way. That's a very key feature of the Cloudify 5 journey that you can see across the board, and you will see it when Ofer presents.
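As a rough illustration of those two building blocks, here is a minimal blueprint sketch using Cloudify's `ServiceComponent` and `SharedResource` node types. The blueprint and deployment IDs (`shared-db`, `app-service`) are hypothetical names, not taken from the webinar:

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - cloudify/types/types.yaml

node_templates:
  # A shared component: points at an existing deployment (e.g. a database)
  # so Cloudify can track who depends on it before anyone tears it down.
  shared_db:
    type: cloudify.nodes.SharedResource
    properties:
      resource_config:
        deployment:
          id: shared-db               # hypothetical deployment ID

  # A service component: wraps another blueprint (which may itself drive
  # Terraform, Ansible, etc.) behind a consistent install/uninstall interface.
  app_service:
    type: cloudify.nodes.ServiceComponent
    properties:
      resource_config:
        blueprint:
          id: app-service             # hypothetical blueprint ID
        deployment:
          id: app-service-deployment
    relationships:
      - type: cloudify.relationships.depends_on
        target: shared_db
```

Because the dependency is modeled explicitly, removing `shared_db` while `app_service` still points at it can be detected and blocked, which is exactly the "who is pointing to that" problem described above.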
(15:31) I will hand over to Ofer to really show you how that works and go over the details of 5.1. Ofer, the floor is yours.
(Ofer) Thank you, Nati. Hi everyone, my name is Ofer and I am the product manager of Cloudify. I am going back for a second to the slide that Nati discussed, because in the next twenty or twenty-five minutes what I will try to do is walk you through what we've done with Cloudify 5.1 to help you get one step closer to reaching those goals: removing those automation silos, taking your orchestration to the next step with Cloudify's abilities, and introducing Environment as a Service as a concept in your organization. I will also draw your attention to the common stack, which is also important, because it is something we need across all of these environments. It doesn't really matter whether you are working with on-premise environments or hybrid clouds, or whether you want to run on demand; in any of these cases you would still need all of these common stack items to make sure that whatever product and orchestrator you are using ties into your ecosystem and provides the abilities you need, while keeping compliance and security in place for everything you do.
So I am going to start with the stuff that relates to the EaaS, Environment as a Service.
As we've gone through these items, as Nati described the different environments, we've noticed two things that are important regardless of the environment you are using. The first is the need, and the ability, to abstract the environments and the different components. This is not really about abstraction as an objective in itself; what we are trying to do is simplify, and make the solution as maintainable as possible. The second common theme is the ability to leverage the strengths of your existing tools and reuse them, rather than building everything from scratch with every new product you introduce into your system. To give you all these new capabilities while keeping your options open and avoiding lock-in, with regard to everything that is on-prem and hybrid cloud, we've extended our support for all of the legacy automation tools. Meaning, on top of our existing vSphere support, we've introduced support for vSphere 7, which was recently released by VMware. We've extended our support for vCloud Director, again to the new releases, and to complete the package, on top of the vRO support we already had and the older vSphere capabilities, we've introduced support for the networking layer through NSX. What this allows us to do is basically compose everything you need inside VMware, from the networking layer to the infrastructure layer, combining them together. So if you are interested in setting up a whole service, a whole system, you don't need to go through separate flows anymore. You can go through Cloudify and orchestrate your network, orchestrate your VMs, set everything up, set up your IPs and correlate everything, all from one blueprint in one location, instead of jumping between different VMware managers and the like.
Of course, we kept our OpenStack plugin updated, as well as all of our other legacy plugins and options, ranging from Fabric for SSH to our RESTful API plugin and the rest. For supporting your on-demand environments, hybrid clouds, as well as some of the newer clouds, we kept our support for all the orchestration tools and extended it considerably. With 5.1 we have full support for AWS CloudFormation and Azure ARM, the Ansible plugin was vastly extended, and we've added support for AWX, so now we can support Tower as well with Cloudify. And of course the Terraform plugin was extended, and I'll be talking about that in detail. What all of these do together is allow us to leverage whatever mechanism you already have in place in your organization, adopt it, combine it with other tools you may wish to use, and get one plus one equals three when you use Cloudify, by making the different tools interoperate with each other in a much better way: by taking, for example, the outputs of one tool and pushing them as inputs to another tool, thus creating one consistent flow whenever you are working with the system. As always, we continue to push on our DSL and improve it, to model both services and environments as reusable and shared resources inside your organization, saving you a lot of code rewriting with every new item you build, by simply using classes and instances of templates you have already created.
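One way that "outputs of one tool as inputs to another" can be expressed in the DSL is via deployment capabilities. A minimal sketch, assuming a previously deployed `network-env` environment exposing a `subnet_id` capability (both names are hypothetical):

```yaml
node_templates:
  app_server:
    type: cloudify.nodes.ServiceComponent
    properties:
      resource_config:
        blueprint:
          id: app-server                # hypothetical blueprint ID
        deployment:
          id: app-server-deployment
          inputs:
            # Pull the subnet created by the network environment (which may
            # have been provisioned by Terraform, ARM, CloudFormation, ...)
            # and feed it into this component's deployment as an input.
            subnet_id: { get_capability: [ network-env, subnet_id ] }
```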
(21:30) One of the key challenges when you introduce a new IT tool, whatever it is, is how to make sure that the introduction is (a) seamless and (b) does not cost my teams anything. Our DevOps teams are busy; I don't want to do anything that would make a developer extremely unhappy with each new tool I push in, and I would like everything to be flawless. To do that we've introduced Cloudify CI/CD integration. We talked about that with Cloudify 5.0.5 and we've vastly extended it with Cloudify 5.1. The idea is: why not run Cloudify directly from your pipeline? If I have a pipeline and I am pushing new code, updating my code repo, I would like a new system to be set up automatically, automated tasks to be executed on that setup, and, if all goes well, everything torn down before my next push. Cloudify allows you to do just that. We've integrated with Jenkins, CircleCI, GitHub Actions and more, and we let you trigger by any code push or any user request. Out of the box, you can run any Cloudify step: uploading your blueprints, running deployments, tearing things down, and so on. We also let you wrap Terraform templates, Ansible playbooks, CloudFormation, what have you; everything that Cloudify supports can be triggered directly from your pipeline in a native way. We are using the native CI/CD interface, which means zero learning curve for developers, allowing them to use Cloudify seamlessly without even knowing that Cloudify is part of it. So your DevOps team will be happy, because everything is up to standard, everything is certified, and you are getting exactly the systems you want automatically; and from the developers' side, all they have done is push code and everything was done for them, so it's perfect. No involvement whatsoever, and they are getting everything they want.
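As a sketch of what a pipeline trigger can look like, here is a hypothetical GitHub Actions workflow that drives a Cloudify manager through the `cfy` CLI. It assumes the CLI is preinstalled on the runner, and that the manager address and credentials are stored as repository secrets (`CLOUDIFY_HOST`, `CLOUDIFY_PASSWORD` are assumed names):

```yaml
name: deploy-on-push
on: push

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Upload blueprint and install
        run: |
          # Point the CLI at the manager (host and credentials are assumptions)
          cfy profiles use "${{ secrets.CLOUDIFY_HOST }}" \
            -u admin -p "${{ secrets.CLOUDIFY_PASSWORD }}" -t default_tenant
          # One environment per commit; blueprint.yaml lives in the repo
          cfy blueprints upload blueprint.yaml -b "app-${{ github.sha }}"
          cfy deployments create "app-${{ github.sha }}" -b "app-${{ github.sha }}"
          cfy executions start install -d "app-${{ github.sha }}"
```

Cloudify's dedicated Jenkins/CircleCI/GitHub Actions integrations wrap these same operations; plain CLI calls are shown here just to keep the sketch self-contained.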
Again, on removing some of the overhead from our DevOps and IT teams: the way we see it, IT teams and DevOps teams are currently overworked, with many products, many tasks, and tight requirements they need to adhere to, and one of the objectives of Cloudify is removing those bottlenecks, making sure that with the introduction of Cloudify the DevOps team has less work to do rather than more because of yet another tool. With the self-service portal, what we allow that team to do is expose certified services as a menu. I am going to walk you through a light demo of that. As a user, I can log into the system; I am checked for my role and my permissions, and based on that I get a catalog, a menu, of the services that the organization has certified for me. I can click and subscribe to these services, get them operational, get them going. I can see the status of their setup, and set a time to live or expiration on those systems so that they do not linger unused; I can set up a system that will be torn down within thirty days. And obviously everything is compliant and up to the standards of the organization, because those templates were certified by the DevOps team, or whoever administers that service. With that we are removing the DevOps bottleneck; we are no longer in the flow of "I need you to spin up this service" and "I'm busy, this will take longer." Do whatever you want as a user: log in, use the options you have; we can monitor those, and you can simply spin up whatever service you need. I'm out of that flow, and the wait time becomes zero. To support that we have also extended our white-labeling options, so if you are interested in putting up your own portal, you can modify the Cloudify UI to whatever makes sense in your organization.
Or if you don't like the Cloudify UI and you want to tie this into your workflow management tool or your own portal, that is perfectly fine, and we have full integration for that as well.
(26:42) Kubernetes has been getting a lot more focus in the last year, and of course setting up a Kubernetes cluster is not an easy task. It doesn't really matter if you are doing that on bare metal, or going through GKE on Google, or OpenShift, or what have you; setting up that cluster requires some knowledge and some time, and templatizing it, making it optimal and certified for the organization, is yet another capability we provide with Cloudify. With 5.1 we've extended our support, and we have complete support for all of the Kubernetes flavors and platforms you see here: Azure, Amazon, Google, Red Hat OpenShift, or bare metal; we support all of them. If you want to use existing tools or other capabilities like Ansible, Kubespray, or any other framework to stand up those clusters, that is also supported. And you can not just set them up, configure them, and maintain them; you have the complete day-2 operations: extending the cluster, adding more resources, scaling things out. This is all fully supported through the Cloudify day-2 operation options. And for complete support of Helm charts and the ability to integrate with them, we've extended our Helm capabilities through the Helm 3 plugin, which we just saw.
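To give a flavor of the Helm 3 plugin mentioned above, here is a minimal sketch of a release node. Treat the exact property layout as illustrative, and `kube_config` and `my-release` as hypothetical names:

```yaml
node_templates:
  nginx_release:
    type: cloudify.nodes.helm.Release
    properties:
      client_config:
        configuration:
          # Kubeconfig stored as a Cloudify secret (hypothetical secret name)
          file_content: { get_secret: kube_config }
      resource_config:
        name: my-release             # hypothetical release name
        chart: bitnami/nginx         # any chart reference reachable by Helm
```

The same blueprint could add the cluster-provisioning nodes (EKS, AKS, GKE, OpenShift, or bare metal) in front of this release, so cluster and workload install as one flow.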
(28:18) So we've talked about Kubernetes, supporting all of these platforms, and giving you the flexibility to run them wherever makes sense to you. But digital transformation is a gradual process, with services required across platforms, across locations, across infrastructures, what have you. You may have services that span not just different types of Kubernetes clusters, but also different locations and completely different types of platforms. My database is still running on VMware, but I need to combine it with a new application server running on Kubernetes, and so on. Cloudify gives you the complete flexibility to do all of that: to run a single blueprint, a single template, that orchestrates everything across all of these environments, be it Kubernetes on whatever platform you have, be it legacy VMware or OpenStack, or be it public cloud. It doesn't really matter; you can combine all of them under a single blueprint and run everything together. Now combine that with your CI/CD capabilities, and suddenly you have magic in your organization.
(29:39) We are not forgetting the developers. Cloudify code is written as YAML files following the TOSCA standard, and for writing it we have yet again extended two options. We've continued to build on our Composer; the Composer is a graphical interface that lets you drag and drop to build your blueprints and then auto-generates the TOSCA code. We've extended its support to more options, to components, and so on, and it is now a lot easier to use. And for those of you who are fond of using your IDE to write your code, we've introduced an IDE schema for Cloudify that supports suggestions, auto-completion, and corrections, and simplifies the whole process of writing a Cloudify blueprint. We've also taken the time to not just support other automation tools, but to use them in a way that in many cases extends their capabilities. I'll give just a few examples. The first is the option to show Terraform as a topology view. If you're using Terraform through Cloudify, that Terraform deployment is going to have a state file, and that state includes the information about what was actually deployed. Cloudify allows you not only to view that information, but to view it graphically: it shows you the objects that were orchestrated through that Terraform step with a full topology, as well as the dependencies and relationships between them. That is really helpful for an operator who is not familiar with Terraform or with the flow, but suddenly gains full visibility into what that step does. If something fails, if something is broken, they have full visibility, and that's something that is definitely not available with Terraform out of the box; Cloudify extends it.
Of course, there are the custom CI/CD steps we've already talked about, and the ability to run Terraform, Ansible, CloudFormation, what have you, directly from your CI/CD using Cloudify; through our Jenkins, CircleCI, and GitHub Actions plugins and capabilities, we are extending those tools' options. And probably last but not least, and the most interesting in my opinion, is the ability to run the Terraform and Ansible runtime as code. Meaning, whenever you orchestrate something through Terraform or Ansible, you can now generate an ad hoc container that Cloudify spins up for you. You specify in code which version of Terraform or Ansible you would like, and which modules you would like in it, and we create an on-the-fly container with a runtime environment, use whatever playbook or template you have, run it, and then tear that environment down. That allows you to run these options regardless of different teams running different versions. Different teams require different runtime environments; we give you the complete flexibility to support all of them, and any team can get whatever it needs without having to continuously maintain all of these environments, because they run on the fly. So this gives you a lot more value when you push, again, Terraform, CloudFormation, Ansible, what have you, through Cloudify versus using them directly. On top of all of that, of course, there is the option to make everything interoperate and work together as one flow.
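The "runtime as code" idea for Terraform can be sketched like this: the blueprint itself pins which Terraform binary to install, and the module node runs against that installation. Node type and property names follow the Cloudify Terraform plugin, but treat the exact fields, the pinned version, and the module path as illustrative:

```yaml
node_templates:
  terraform:
    type: cloudify.nodes.terraform
    properties:
      resource_config:
        # Pin the exact binary this deployment should use; another team's
        # blueprint can pin a different version with no shared setup.
        installation_source: https://releases.hashicorp.com/terraform/0.13.5/terraform_0.13.5_linux_amd64.zip

  vpc_module:
    type: cloudify.nodes.terraform.Module
    properties:
      resource_config:
        source:
          location: templates/vpc.zip   # hypothetical module archive
    relationships:
      - target: terraform
        type: cloudify.terraform.relationships.run_on_host
```

Because the version lives in the template rather than on a shared build host, two teams can deploy side by side with different Terraform versions and never collide.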
(34:00) Next, since I believe quite a few people here are not familiar with Cloudify and its UI, I would like to take just a couple of minutes and show you a quick demo of what we are talking about and what happens behind the scenes. I am going to quickly switch to the Cloudify manager now and log in, first as an admin. When I log in, the first thing that happens is that the system checks who I am, what my role is, and what my permissions are; meaning, which actions I can perform in the system and which data and options I have access to. This is all done based on my role, whether connected to your LDAP user directory and set with security groups, or with local administrators; it doesn't really matter, we support both. As an admin, my job is to maintain everything, so my first page is my dashboard, where I get full visibility into where my deployments are. These are my different sites, different locations, data centers; each of them shows me the deployments in it and their status. I can drill down, go and check my deployments, and see: OK, these two are pending, so what do I need to do, what do I need to follow up on? As an admin it is also my role to decide which functionality I would like my Cloudify managers to support, which plugins I want on board, and which capabilities I want to grant my users. These are just examples of the plugins that Cloudify provides out of the box; there are many more that can be created, and the plugin mechanism is completely flexible and open for anyone to develop against. To simplify things further, we also ship a set of example blueprints that can be leveraged; again, as an admin I can modify that and add more and more items to my catalog.
We have examples of simple VMs and services, usage of Kubernetes, usage of other orchestration tools, and so on. As an admin I can pull more and more catalogs into my system and then decide what I want to certify for my users; whatever I certify will be available to those users when they log in to their UI, and they will not have the full flexibility that I see here. I can define my sites, set up locations and data centers, and do all of my maintenance work on the system, such as defining my secrets (the ability to store passwords and the like), defining more users, groups, roles, and so on, and backing up the system for maintenance. This is all stuff I can do as an admin. But I skipped a few options which I would like to show you as I log in as a plain user.
(37:30) So now I will be using a different role. The first thing you will notice when I log in is that my menu is a lot shorter. As a user, what the admin has selected to share with me is just this simple catalog: the services available to me, which I can look at and subscribe to. So I would like to set up a new system; I'm going to give it a name, and this is going to be my GCP demo for that matter. I can assign it to one of my data centers, let's put it in Boston, and there is a set of parameters my admin has made available for me to use; that can be an empty list or a full list with default values, depending on what my admin has selected. Then I go ahead and say I would like to get the service. Again, I am an end user, a developer in an organization who needs that VM, and the next thing I get is a flow. The system tells me: OK, if you want that VM on Google, first a network needs to be set up, and we also need to set up a disk and an SSH key file so you can access the VM. Once the network is available, we can move on to generating a new firewall and a subnet, and only when we have all of these can we move on to installing the VM. So as a user, I see the flow, I know where the system is, and if something gets stuck, all I need to do is click that node. When I click a node, the logs at the bottom filter automatically, showing me just the information I need to understand. Here we started the operation; it wasn't ready yet, so there is a retry, and in a second it is going to continue, succeed, and move on to the next step.
So as a user I get all the information I need, and if something breaks I can go back to my admin and say, hey, this is what happened, and I have all the details that I need. Again, zero learning curve. The system shows me only the data that I need in front of me, and I don't need to learn what the system does. Once everything is complete I have it in my deployment list. I can see all of my deployments, tear them down, uninstall, change anything that I want; I have full control over that as a user. As an admin, of course, I have all of these capabilities and much more, but this gives me the ability to offload all of that work to my users, and now it's in their hands and easy enough.
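For readers who want to see what sits behind a flow like this, the dependency graph described in the demo (network, then subnet and firewall, then the VM) corresponds to relationships in a Cloudify blueprint. Below is a minimal, illustrative sketch; the node type names follow the conventions of the public cloudify-gcp-plugin but are assumptions that should be checked against the plugin version in use:

```yaml
# Illustrative sketch only: type names assume the cloudify-gcp-plugin
# conventions and should be verified against your plugin version.
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - plugin:cloudify-gcp-plugin

node_templates:
  network:
    type: cloudify.gcp.nodes.Network

  subnet:
    type: cloudify.gcp.nodes.SubNetwork
    relationships:
      # The subnet can only be created once the network exists.
      - type: cloudify.relationships.contained_in
        target: network

  firewall:
    type: cloudify.gcp.nodes.FirewallRule
    relationships:
      - type: cloudify.relationships.connected_to
        target: network

  vm:
    type: cloudify.gcp.nodes.Instance
    relationships:
      # The VM installs last, after its prerequisites succeed.
      - type: cloudify.relationships.depends_on
        target: subnet
      - type: cloudify.relationships.depends_on
        target: firewall
```

The relationships are what produce the execution graph the end user sees in the UI: each node waits for its targets, and a failed or retrying node is exactly the one highlighted in the flow.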
(40:42) So with that I am going to move on to: with all of that, where are we headed? What's coming next? As we look into the near-term and mid-term roadmap, we see three verticals in which we are going to continue investing. The first one: if we are talking about Cloudify being able to save you time and reduce the amount of work you invest in orchestrating your stack, we should do the same for Cloudify itself. To simplify the installation, simplify the maintenance of Cloudify, and provide a simpler and faster upgrade flow with as little downtime as possible, we are moving Cloudify to a more cloud-native deployment. This is one big step in a journey that we started a year ago, and with our next upcoming release Cloudify is going to be supported as a Kubernetes cluster deployment. As I mentioned, this will not only reduce the time and simplify installing Cloudify, it will also speed up our upgrades and make them easier. We are also moving as a company to a more agile delivery path. If you have gotten used to getting a Cloudify version roughly every six months, we are switching to minor releases every six to eight weeks, and with the new, smoother and faster upgrade flow, this will allow everyone to get a lot more functionality more frequently. On the functional side, we are going to boost our Environment as a Service. We have been talking about environments for a while now, and we have all of those capabilities in the system; we are going to extend them with more topology options for environments, and also the option to discover new environments: to go and scan those systems, identify environments, and pull them into our system. We are going to continue extending our API support so that CI/CD actions will be even simpler than they are today. Labels are a critical item, both for EaaS and the edge use cases.
The idea behind that is the ability to set up your placement policy. Meaning, I would like to label my environments with specific tags, and then, with these thousand sites and multiple environments out there, I would like to run a deployment update on everything that is currently within its maintenance window, or on a specific operating system. Whatever I tag the system with, I can now run batch actions based on those tags, run everything in a smarter way, and save a lot of time for my team. Of course we are going to extend our support for larger and more complex deployments; we do that with every release, and there is going to be a big boost in our coming release, again supporting a different order of magnitude of items in the system. So that is where we are heading, and with that I am going to hand over back to you, Nati, to sum things up and move us to the Q&A section.
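As a rough illustration of how labels might surface in the DSL: the upcoming release is described as adding labels to deployments, which a blueprint could declare up front. The block below is a hypothetical sketch only; the exact section name and key layout are assumptions to be verified against the 5.2 documentation:

```yaml
# Hypothetical sketch of a blueprint-level labels section;
# section and key names are assumptions, not confirmed syntax.
labels:
  environment:
    values:
      - production
  maintenance-window:
    values:
      - "02:00-04:00"
```

Deployments carrying such labels could then be selected as a group, so a batch deployment update targets only the sites whose maintenance window is currently open, rather than all thousand at once.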
(44:37, Nati) Excellent, Ofer. Just one note on what Ofer showed: there is a new Getting Started in the product itself. If you go there you can see a lot of the things Ofer was mentioning; you can download the product and try it out yourself. The Getting Started also includes templates for Terraform, Ansible, CloudFormation, and Azure ARM, so you will get a good feel for all of that. So let's move on even further down the line, in terms of our vision: where are we headed in general, and how do we see the complexity being addressed in Cloudify? If you can, go to the next slide, Ofer. Yeah, so this is drawing the road towards a more general theme. When we look at simplifying multi-cloud environments, one of the things that everyone here on the call has probably experienced is the time it takes to write automation templates. Regardless of the tool, and even if some tools are simpler than others, overall it takes a lot of time to debug, to make it work, to get an end-to-end automation. So obviously if we can skip that part, that would be a huge gain and save a lot of time. This is where we see discovery coming into play and changing a lot of that paradigm. Rather than focusing on writing automation templates, what if we generate them automatically from an environment? There are many use cases for how you can do that. Today you can query every cloud, ask for the details of how Kubernetes is running or how a VM is running, and build a template to match that resource automatically. And by auto-generating it, we can also take your stack on Amazon and say, let's create a similar stack, not the same, by the way, a similar stack, on Azure. For example, I want Kubernetes.
I want a database as a service; I know which templates will generate an equivalent stack on Azure, so why don't we discover that stack on AWS and then generate a similar one on Azure, and do things at that level? A lot of those things can be automated; a lot of them don't necessarily require hand-writing those templates. We can really change the paradigm of automation completely with that direction, and we are just beginning to scratch the surface in terms of the potential and the disruptive effect it will have on the automation landscape as we know it today. The other part that Ofer touched on is the placement policy. Placement, meaning: imagine that you have not just one Kubernetes cluster but fifty Kubernetes clusters spread between different groups. One for developers, one for production, one for this department, one for this use case, one for this location, one for another location, and now apply that to the other environments you have in your organization. Right now you have to point your CI/CD or your users to those specific endpoints, whether it's clouds or clusters, and that is pretty tedious. And similarly, if you want a cost-effective environment: once you pick an environment, you are pretty much stuck with it.
With placement policy we can completely change the way we abstract those environments. A developer only needs to say, I want a dev environment for my machine learning example or my web application; where it is actually physically located can be abstracted away from them. Similarly, I can do the same thing for location or for cost: give me an environment that is low on cost and not optimized for production. There is almost no limit to how much we can abstract the binding between the workload and the actual environment that matches it, including public and private. So instead of knowing explicitly what is private and what is public, I can basically just define the SLA: this needs to run in a highly regulated environment, and it will go to an environment that matches that requirement, which will probably be more of a private one. But if that environment changes, I don't have to change the developer experience around it. So that's the placement policy. Everything is code and UI. One thing that you may have noticed, and that is kind of underappreciated in my view, is that when we talk about templates and blueprints and YAML files and automation scripts, all of that maps automatically to a UI catalog item. Right now in the industry those two paradigms are thought of as conflicting: you're either using UI or using code. Why not have both? In our case the catalog is really connected to Git, so whenever you change a blueprint, it becomes an item in the catalog, and the item in the catalog gets updated.
So users that are not developers have direct access to run those templates through that self-service experience, and usually the developers have CI/CD that uses those same templates. Putting together those two paradigms that are usually considered conflicting is another paradigm change in the way we can simplify that experience, and it provides both a simple UI experience and an as-code experience. So let's go to Q&A.
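The code-and-UI duality described here rests on the blueprint being the single source of truth. A minimal, illustrative blueprint sketch (the input name is made up for illustration; the `inputs` section is what a catalog would render as form fields):

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://cloudify.co/spec/cloudify/5.1.0/types.yaml

inputs:
  # Each input becomes a form field in the self-service catalog item.
  site_name:
    type: string
    description: Human-readable name for this deployment.

node_templates:
  app:
    type: cloudify.nodes.Root
```

Because the catalog is wired to Git, pushing a change to a file like this updates the catalog item that both the UI users and the CI/CD pipeline consume, so neither audience drifts from the other.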
(50:20) Great, thanks Nati. For those tuning in a little late, my name is Jonny, I am responsible for communications here at Cloudify, and I'm moderating this webinar. So let's go to the Q&A. OK, cool, so we have the first question, which I believe Nati answered in the chat already, but I think it could be valuable for everyone in attendance. It is: what is the value compared to using Terraform as an abstraction layer?
(Nati) That question is from Marcello, thank you, Marcello, for asking it; I think probably everyone is asking that question. I broke the answer down into a couple of parts. The first is that there is a big difference between having a provider within a template that calls another service, and service orchestration. What is the difference? Services usually run independently, are owned by other teams, may run at different times, and are maintained separately. It's not just about mapping API calls into YAML files, as a provider does in many automation tools; in our case, that is what the plugin is for. It's really a different way of thinking about how you create dependencies between services, how you normalize their inputs and outputs, how you normalize their installation, and then how you layer workflows that talk to all those services, pass input parameters, and manage the relationships between them. Now think also of the use case where you have that shared component I mentioned. You need to know which services point to it and depend on it. Obviously you don't want to shut down a database that services have a relationship with, and thereby shut down all the other services that depend on it. That, in my view, requires a different paradigm, different thinking, a different language, specifically targeted for that. In the same way that we need a specific language for infrastructure as code, we need a specific language for service orchestration; that, I would say, is number one. The second part, which I think you've seen and hopefully appreciate, is all the management experience that we've provided here: the visibility, the workflow graphs that Ofer showed you, and an easy way to troubleshoot things and aggregate logs, not just for Terraform but for other tools as well.
So we can aggregate logs for Ansible, for Azure ARM, for other types of services. In addition to that, and this is the key important thing, most automation tools work in the following way: bring everything into my world and your world will be perfect. Meaning the first step you have to go through is transforming everything into that tool's language, and that in itself can be a very long and tedious process. In our case we are basically saying: if there is already an Azure ARM template, or a CloudFormation template, for RDS or the equivalent of RDS, I forgot what it is called, and we actually experienced that recently in AWS with RDS, why don't I use that? Why do I have to move it into Terraform just because I want to use Terraform? I don't need to get rid of all that, and I can save a lot of time. So it's the management UI, it's the consistency across those different automation tools, and the specific service DSL targeted at this, that deal with a lot of those things, and we are really just scratching the surface of what we can do here. Hopefully that answers your question, Marcello.
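To make the "don't rewrite it" point concrete: an existing CloudFormation template can be referenced from a blueprint node rather than ported to another tool. A sketch assuming the cloudify-aws-plugin's CloudFormation stack node type; the type name, property layout, and the template URL are all assumptions to verify against the plugin documentation:

```yaml
node_templates:
  rds_stack:
    # Wraps the team's existing CloudFormation template
    # instead of rewriting it in another tool's language.
    # Type name and property layout are assumed, not confirmed.
    type: cloudify.nodes.aws.CloudFormation.Stack
    properties:
      resource_config:
        kwargs:
          StackName: demo-rds
          # Hypothetical location of the existing template.
          TemplateURL: https://example.com/templates/rds.yaml
```

Other nodes in the same blueprint can then declare relationships to this stack, so the CloudFormation-managed database participates in the same dependency graph, logs, and workflows as everything else.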
(54:00) (Jonny) Great stuff. OK, a couple more questions and then we'll close the webinar. The next question came in anonymously: I am deploying and managing multiple types of Kubernetes clusters on the same cloud, but in different flavors. Is there anything that Cloudify can help me with in this regard?
(Ofer) Yes, OK, so I'll take this one. I think we have covered some of that already, but yes, absolutely. Setting up a Kubernetes cluster, regardless of the infrastructure, is not a fun task. It is not as smooth as you would expect, even with the managed services, and it does require some templatizing if you want to do it properly. So the first advantage is setting it up in multiple locations and making sure your setup is consistent, regardless of the platform it is running on. And that is just the first phase, because once you have that, you want to start orchestrating your workloads across these different Kubernetes clusters. You may have services spanning more than one platform, more than one infrastructure, and Cloudify will help you do all of these configurations and setups, tying up all the relevant connections across your different platforms in a single blueprint. That means you can include all of that in one single service, even if that service spans multiple locations. If you are setting up an SDN, or whatever it may be, you can run it as just one blueprint and it will take care of everything across your locations. So simplification, consistency, and making sure you are in compliance with everything you do: these are the bigger advantages.
(56:00) Awesome, thanks Ofer. One final question here: we are already using OpenShift on-prem to manage our deployments; what is the value for us in adding another platform here?
(Nati) Excellent question as well. The answer is pretty similar to the one Ofer started with, so there are a couple of things. One of them is a hybrid stack, meaning I'm not running everything in OpenShift; I'm running OpenShift plus things that are maybe running on VMs, things running external to my Kubernetes and OpenShift clusters. The second part is that I'm assuming there is going to be more than one Kubernetes cluster in the organization. If there is only OpenShift, and you only care about OpenShift, then by all means just use OpenShift; you don't need Cloudify. But in the cases where you have either a hybrid stack, meaning not just a cloud-native stack of containers and Kubernetes, or multiple Kubernetes clusters, that's where things become interesting and the value of Cloudify becomes bigger. Think of the equivalent of Anthos in the context of Kubernetes: we are able to connect to many Kubernetes environments, OpenShift, EKS, AKS, and now we also have StarlingX as one of those environments. One of the things that is very important to a lot of the users we talk to is the ability to decouple the workload from the actual Kubernetes cluster. Why? Because they realize there is going to be more than one Kubernetes cluster, they realize that EKS and AKS are going to be very popular, and that is something they need to support. So if they tie all their workloads to a certain Kubernetes cluster and a set of tools, they have doomed themselves to lock-in for a long time. In the same way that we want to enable moving workloads across multiple clouds, it is even more important to decouple the workload from the, if you like, Kubernetes provider, enabling the flexibility to run workloads across those different tools rather than binding yourself to a specific one.
OK, good stuff, thanks so much Nati. So this brings us to the close of this webinar session. Thanks to Nati, thanks to Ofer, and thanks to everyone for participating and for your questions; a fantastic session. As I mentioned at the very beginning, this session has been recorded, so everyone who has been listening will indeed receive the recording and the supporting deck as well. For those of you who are not aware, all attendees get a shot at an Amazon gift card; the winners will be selected at random, and if you are a winner you will be notified within 48 hours, so watch your inboxes. And very finally, head to cloudify.co, where we have a wealth of resources, and if you liked what you heard today, you might really like our Tech Talk podcast; go to www.cloudify.co/podcast, where we cover everything, and the latest session focused on the 5.1 release as well. At this point I shall wish you goodbye. Catch you next time.